The thing that excites me about nodes, as a generalized concept, is how it ties together pieces of (Python and/or C) source code … visually. In particular, it describes the flow of information and events in such a way that a “generic node-runner” layer of software within Blender can dispatch events using it. (The nodes themselves don’t have to be concerned with how, when, or why they will be invoked by the system, nor with exactly where the information comes from or where it goes. Meanwhile, the dispatcher doesn’t have to be concerned with what each node is doing.)
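To make that decoupling concrete, here is a minimal sketch in Python — not Blender’s actual implementation, just an illustration of the idea. The `Node` class and `run_graph()` function are hypothetical names: each node only declares what it consumes and produces, and the runner only wires values between nodes and decides evaluation order.

```python
class Node:
    def __init__(self, name, inputs, outputs, func):
        self.name = name          # label, for error messages only
        self.inputs = inputs      # names of the values this node consumes
        self.outputs = outputs    # names of the values this node produces
        self.func = func          # the node's own logic

def run_graph(nodes, initial_values):
    """Dispatch nodes as their inputs become available.

    The runner knows nothing about what each node computes, and each
    node knows nothing about when or why it is invoked -- exactly the
    separation of concerns described above.
    """
    values = dict(initial_values)
    pending = list(nodes)
    while pending:
        progressed = False
        for node in list(pending):
            if all(name in values for name in node.inputs):
                args = [values[name] for name in node.inputs]
                results = node.func(*args)
                values.update(zip(node.outputs, results))
                pending.remove(node)
                progressed = True
        if not progressed:
            raise RuntimeError("unsatisfiable inputs for: " +
                               ", ".join(n.name for n in pending))
    return values

# Usage: two tiny nodes, chained by the runner -- not by each other.
nodes = [
    Node("scale",  ["x"],      ["scaled"], lambda x: (x * 2,)),
    Node("offset", ["scaled"], ["result"], lambda s: (s + 1,)),
]
print(run_graph(nodes, {"x": 10})["result"])  # -> 21
```

Note that neither node references the other; swapping in a new node type, or rearranging the wiring, requires no change to the runner — which is the property that makes the node metaphor so composable.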
This reminds me very much of the “RAD” (Rapid Application Development) systems that sprang up very quickly after the Macintosh and Windows GUI environments first appeared. (Koff, koff … “these ‘kids’ today” …) There were many successful experiments, not only in constructing user-interface elements (forms, reports, and such) purely visually, but also in describing the underlying logic, particularly with “visual [SQL] database-query builders.” Many developers in those areas today couldn’t imagine not doing things this way.
These RAD systems work by presupposing a layer of “generic” software foundations, which can then be “customized” to build a unique deliverable, quickly. In an inexact but easy analogy, “so does ‘nodes.’” That’s what makes the principle so powerful, and so universal.
And so it was that a [very experienced] developer in the OP video was able to create new features and get them working within the Blender environment “in a matter of a few days.” The developer could concentrate on implementing one node type at a time, using the existing infrastructure to link them together, and thereby build a new feature that could be deployed as a plug-in and subsequently installed by any Blender user. Blender exposes the same metaphors for anyone to use in this way. That’s huge.