3D, compositing, video editing workflow confusion

I need help understanding the workflow between the individual parts of Blender like 3D, compositing, video editing and so on. I have some trouble understanding how these are all intertwined, and so far I haven’t found learning resources that explain how the workflow works.
To prevent misunderstandings: my concern is not how compositing/node-based compositing works, or how video editing works.
My problem is rather that there is one render section, with all the different render settings, that is responsible for all the different parts of Blender, at least that’s my guess. Somehow the “Render” and “Animation” buttons are responsible for rendering the 3D view, the compositor, the video editing and maybe even more that I’m not even aware of right now.
How is this workflow designed?
What is the hierarchy in which the different parts get rendered? Is there actually a hierarchy?
When I hit render, does it primarily render my 3D scene? Does that then automatically show up in the Render Layers node, which is then processed in the compositing editor and automatically exported from there, or do you have to export that separately?
Sorry, I’m very confused by that workflow. Can you help me out with that, directly or with references to appropriate learning material?

Thanks in advance.

You can have:
Render Scene --> File output
Render Scene --> Compositor --> File Output
Render Scene --> Compositor --> Video Sequencer --> File Output
Render Scene --> Video Sequencer --> File Output

You can also use an image/movie in place of the rendered scene as the input to the compositor or sequencer, or a combination of rendered viewport and image/movie file.

Simplified: the compositor and sequencer are generally used for processing your scene in some way after rendering.
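If it helps to see those four chains as a script, here is a minimal bpy sketch (the output path is just an example): the two Post Processing toggles are what select the chain.

```python
import bpy

scene = bpy.context.scene

# The two Post Processing toggles decide which chain is used:
#   both off        -> Render Scene --> File Output
#   compositing on  -> Render Scene --> Compositor --> File Output
#   both on         -> Render Scene --> Compositor --> Video Sequencer --> File Output
#   sequencer on    -> Render Scene --> Video Sequencer --> File Output
scene.render.use_compositing = False
scene.render.use_sequencer = False

scene.render.filepath = "//render_"     # example path, relative to the .blend
bpy.ops.render.render(animation=True)   # same as pressing the Animation button
```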

Thanks for your answer.
So going by your examples, the way I understand it is that the 3D scene always gets rendered, no matter what, right? So it is not possible to have
Compositor --> File output
How do I set up which of your four examples Blender should use?
Is it like, as soon as there is a clip in the video editor, Blender will render that clip after rendering the scene?
Since there is only one set of render settings responsible for the output, what if I want my intermediate output from the 3D rendering to be an uncompressed .exr, but my final output from the compositor to be a .jpg or so?
And where in that chain does the image editor appear?

In the Render tab, under Post Processing, you can enable/disable the compositor and the VSE.
As long as they are empty, Blender will render the camera view of your 3D scene. When the compositor or VSE have items in them, those will be rendered instead. You can have a completely empty 3D scene but load an image/video in the compositor or VSE and render that; see the sketch after this post.
The Image editor is where you do your UV/texturing work.
Hope this makes sense to you…
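As a rough sketch of the empty-scene case (file name is made up; exact API names can vary slightly between Blender versions):

```python
import bpy

scene = bpy.context.scene
scene.render.use_sequencer = True   # Render tab > Post Processing > Sequencer

# Create a sequence editor and drop an image strip on channel 1
scene.sequence_editor_create()
scene.sequence_editor.sequences.new_image(
    name="background",
    filepath="//background.png",    # hypothetical file
    channel=1,
    frame_start=1,
)

# Rendering now outputs the strip, not the (empty) 3D view
bpy.ops.render.render(animation=True)
```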

The 3D scene is rendered only when the compositor has a Render Layers node. If there isn’t one, it acts as an ordinary 2D compositor: you can read in images, videos, etc. and comp them without touching the 3D scene at all.
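Scripted, that looks roughly like this (the image path and the blur are just placeholders): an Image node replaces the Render Layers node, so nothing 3D is ever rendered.

```python
import bpy

scene = bpy.context.scene
scene.render.use_compositing = True
scene.use_nodes = True

tree = scene.node_tree
tree.nodes.clear()  # remove the default Render Layers/Composite pair

img_node = tree.nodes.new("CompositorNodeImage")
img_node.image = bpy.data.images.load("//photo.png")  # hypothetical file

blur = tree.nodes.new("CompositorNodeBlur")
blur.size_x = blur.size_y = 10

out = tree.nodes.new("CompositorNodeComposite")
tree.links.new(img_node.outputs["Image"], blur.inputs["Image"])
tree.links.new(blur.outputs["Image"], out.inputs["Image"])

# No Render Layers node anywhere, so the 3D scene is never rendered
scene.render.filepath = "//comped"  # example path
bpy.ops.render.render(write_still=True)
```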

Oh, that’s a nice tip, I didn’t know about that.
So the workflow, as I understand it, is that I have to create a scene, render it once just so that I can see it in the compositor, do my compositing, and then render everything again, this time with the compositor?

Correct. And that’s how you can achieve your different format outputs.
There is also the option to render just once; see kesonmis’ post and the sketch below.
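To make the one-pass idea concrete, a sketch like this (paths are placeholders) writes an uncompressed EXR through a File Output node while the scene’s regular output stays JPEG, all in a single render:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
scene.render.use_compositing = True
tree = scene.node_tree

rl = tree.nodes.get("Render Layers") or tree.nodes.new("CompositorNodeRLayers")

# Intermediate output: the raw render as uncompressed EXR, via a File Output node
exr_out = tree.nodes.new("CompositorNodeOutputFile")
exr_out.base_path = "//exr/"              # example folder
exr_out.format.file_format = "OPEN_EXR"
exr_out.format.exr_codec = "NONE"         # uncompressed
tree.links.new(rl.outputs["Image"], exr_out.inputs[0])

# Final output: whatever reaches the Composite node, saved with the
# regular render settings (JPEG here)
comp = tree.nodes.get("Composite") or tree.nodes.new("CompositorNodeComposite")
tree.links.new(rl.outputs["Image"], comp.inputs["Image"])
scene.render.image_settings.file_format = "JPEG"

bpy.ops.render.render(animation=True)  # one render, two formats
```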

Ok, thanks all, that made things a lot clearer to me.

Been a while since I used Blender, but the render order is likely still the same. You cannot render without a camera in the scene, so the 3D viewport is the first priority in the render order. Next in line is the compositor, if enabled, and if you also have the VSE enabled it comes last (matching the Render Scene --> Compositor --> Video Sequencer chain above). But there is also the issue of scenes to deal with. If you have more than one scene, you can import another scene into the compositor and it will render before the current scene (not sure if you can do this with the VSE or not, because I’ve never been a fan of the sequencer).
Scenes are really where things begin to shine, because you can totally manipulate the way things get rendered by setting up a duplicate scene, rendering only parts of one, and importing it into the other.
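A rough sketch of that cross-scene setup (scene names are made up): a Render Layers node can point at a different scene than the one you’re rendering, and Blender renders that scene first.

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Pull a second scene's render into this scene's compositor
rl_other = tree.nodes.new("CompositorNodeRLayers")
rl_other.scene = bpy.data.scenes["Background"]  # hypothetical scene name

# Layer it under this scene's own render with an Alpha Over node
rl_self = tree.nodes.get("Render Layers") or tree.nodes.new("CompositorNodeRLayers")
mix = tree.nodes.new("CompositorNodeAlphaOver")
tree.links.new(rl_other.outputs["Image"], mix.inputs[1])  # background
tree.links.new(rl_self.outputs["Image"], mix.inputs[2])   # foreground

comp = tree.nodes.get("Composite") or tree.nodes.new("CompositorNodeComposite")
tree.links.new(mix.outputs["Image"], comp.inputs["Image"])
```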
There are also other ways to manipulate things, like with raytracing. If you set ray mirror on one object and place several other objects in the scene but mark those objects as not renderable, their reflections can still appear in raytraced reflections while the objects themselves do not appear in the render.
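That post describes a Blender Internal trick; the Cycles counterpart, as far as I know, is per-object ray visibility. A hedged sketch, using the 3.x property names (in 2.7x/2.8x the same switches live under `obj.cycles_visibility` instead):

```python
import bpy

# Hide an object from the camera but keep it in reflections (Cycles)
obj = bpy.data.objects["Suzanne"]  # hypothetical object name
obj.visible_camera = False   # invisible to direct camera rays
obj.visible_glossy = True    # still shows up in mirror-like reflections
```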
It goes on and on with many other features and tricks for render order in Blender as well.
Have fun trying to figure all of this out, because it can take a while to learn. All of it can be found in the Blender online documentation. Some of the concepts may seem a bit, or even very, abstract at first, but that’s just because they follow very strict logic.

Recently Blender acquired the ability to add one VSE scene into another VSE scene. When you import another scene as a strip, look at its Scene properties and check “Use Sequence” to access the other scene’s VSE contents. You can also mix the audio level here, which you cannot do in meta-strips.
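Scripted, that looks roughly like this (scene and strip names are examples; I believe the “Use Sequence” toggle is exposed as `scene_input` in recent versions, but that name is an assumption worth checking in older builds):

```python
import bpy

scene = bpy.context.scene
scene.sequence_editor_create()

# Add another scene as a strip on channel 2, starting at frame 1
strip = scene.sequence_editor.sequences.new_scene(
    name="intro",
    scene=bpy.data.scenes["Intro"],  # hypothetical second scene
    channel=2,
    frame_start=1,
)

strip.scene_input = 'SEQUENCER'  # "Use Sequence": pull in the other scene's VSE
strip.volume = 0.5               # mix its audio level, per strip
```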