Basically, because the modern GPU pipeline is designed around modifying data only through shaders. Shaders have their limitations, so Blender, whose architecture dates back more than 20 years, doesn't use them to modify geometry. Instead it has to modify the model on the CPU in system RAM, recalculate every modifier in the stack, and upload the result to the GPU every frame whenever something changes.
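To make the cost concrete, here's a toy sketch (not Blender's actual code, and the function names are made up) of the round-trip described above: a one-vertex edit still triggers a full modifier re-evaluation and a full buffer re-upload, so the per-frame cost scales with mesh size rather than edit size.

```python
def apply_modifiers(vertices, modifiers):
    """Run the whole modifier stack on the CPU (hypothetical stand-in)."""
    for modifier in modifiers:
        vertices = [modifier(v) for v in vertices]
    return vertices

def upload_to_gpu(vertices):
    """Stand-in for re-uploading the full vertex buffer; returns the
    number of vertices sent, as a proxy for transfer cost."""
    return len(vertices)

def edit_one_vertex(vertices, index, new_pos, modifiers):
    vertices = list(vertices)
    vertices[index] = new_pos                         # the actual edit: O(1)
    evaluated = apply_modifiers(vertices, modifiers)  # O(n * num_modifiers)
    uploaded = upload_to_gpu(evaluated)               # O(n)
    return evaluated, uploaded

# Moving a single vertex still touches every vertex in the mesh.
mesh = [(float(i), 0.0, 0.0) for i in range(100_000)]
scale2x = lambda v: (v[0] * 2, v[1] * 2, v[2] * 2)
evaluated, uploaded = edit_one_vertex(mesh, 0, (1.0, 2.0, 3.0), [scale2x])
print(uploaded)  # full buffer re-uploaded for a one-vertex edit
```

A GPU-resident approach would instead keep the mesh in GPU memory and run the modifier math in compute shaders, but that's exactly the redesign that's hard to retrofit.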
Note that I’m not saying Blender’s edit mode can’t be made faster, just that DCC apps in general are a poor fit for the modern GPU pipeline, so naive implementations tend to perform poorly. Speeding it all up will require some smarts and trickery and will probably make maintenance harder, so I’m not surprised they’re putting it off until they have a clearer picture of the final feature set and where the bottlenecks are.