Does Blender render each animation frame from scratch?

Hi, I’m very new to Blender and I have no idea how rendering in 3D programs works.
If there is no difference (or few differences) between two consecutive frames of an animation, is each frame built (rendered) from scratch? I’ve experimented with two frames in Cycles: each frame is built from scratch and takes the same amount of time (rendering both takes exactly twice as long as rendering a single frame).
So I want to know if there is any way for Blender to detect pixels that don’t change between two consecutive frames, so it doesn’t repeat the work in those areas. Although I suppose that if there were such a method, one would have to measure the time it takes to see whether it offers any advantage.

(sorry for my English if I’ve made mistakes)

If there is no difference (or few differences) between two consecutive frames of an animation, is each frame built (rendered) from scratch?

Yes.

So I want to know if there is any way for Blender to detect pixels that don’t change between two consecutive frames, so it doesn’t repeat the work in those areas.

No. Some research into this (reprojection) exists, but there’s no robust and general method.

Obviously you have to render the image first to compare its pixels with the previously rendered image. I guess you see the paradox?
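To make the paradox concrete, here is a toy sketch (my own illustration, not anything Blender does): comparing two frames pixel by pixel is trivial, but it requires both frames to already be rendered, so nothing is saved.

```python
import numpy as np

def unchanged_mask(prev_frame, curr_frame, tol=1e-3):
    """Boolean mask of pixels that are (nearly) identical between two
    already-rendered frames. The catch: both frames must exist before
    we can compare them, so no render time is saved."""
    return np.all(np.abs(curr_frame - prev_frame) <= tol, axis=-1)

# Two tiny 2x2 RGB "frames"; only the top-left pixel changes.
prev = np.zeros((2, 2, 3))
curr = prev.copy()
curr[0, 0] = [1.0, 0.0, 0.0]

mask = unchanged_mask(prev, curr)
print(mask)  # top-left pixel is False, the other three are True
```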

While it would be possible to implement such change detection, there is no such thing in Blender.
It’s up to the user.
If I render an animation and need a longer still, I make it exactly 1 frame.
Since you should always render to an image sequence with lossless compression anyway, you can later simply use this single frame multiple times in the video editor of your choice.
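As a minimal sketch of that workflow outside a video editor: if your sequence lives on disk, you can hold a still simply by duplicating the one rendered file under sequential names. The `hold_frame` helper and the `frame_{:04d}.png` naming pattern below are hypothetical, not part of Blender.

```python
import shutil
from pathlib import Path

def hold_frame(src, out_dir, start, count, pattern="frame_{:04d}.png"):
    """Duplicate one rendered frame `count` times so an image sequence
    holds a still (hypothetical helper, not a Blender API)."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for i in range(start, start + count):
        dst = out_dir / pattern.format(i)
        shutil.copyfile(src, dst)  # lossless copy: bit-identical frames
        written.append(dst)
    return written

# e.g. hold the frame for 48 frames (2 seconds at 24 fps):
# hold_frame("render/frame_0010.png", "render", 11, 48)
```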

Now I see that perhaps I was wrong to speak of pixels. I was not talking about comparing final images, but about comparing the procedures (or code, or whatever) that Blender uses to build each frame.

It’s usually much cheaper to render just a visibility/motion-vector pass than a complete frame with lighting etc. From this data, you can determine which pixels can be re-used from the previous frame. Of course, any motion technically invalidates all light transport captured in the previous frame, but this may not be perceptible (most video codecs rely on this fact).
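A simplified sketch of that idea (my own illustration of reprojection-style reuse, not something Blender implements): pixels whose motion vector is essentially zero keep the previous frame’s color, and only the remaining pixels are flagged for re-rendering.

```python
import numpy as np

def reuse_static_pixels(prev_frame, motion_vectors, eps=1e-6):
    """Reuse pixels whose motion vector is ~zero; flag the rest for
    re-rendering. Ignores the fact that moved objects also change the
    lighting of 'static' pixels (shadows, reflections, bounce light)."""
    static = np.linalg.norm(motion_vectors, axis=-1) <= eps
    new_frame = np.zeros_like(prev_frame)
    new_frame[static] = prev_frame[static]
    return new_frame, ~static  # partially filled frame, re-render mask

# 2x2 white frame; only pixel (0, 1) has a nonzero motion vector.
prev = np.ones((2, 2, 3))
mv = np.zeros((2, 2, 2))
mv[0, 1] = [3.0, -1.0]

frame, rerender = reuse_static_pixels(prev, mv)
print(rerender.sum())  # only 1 pixel needs to be re-rendered
```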