Hi. I have been experimenting with both techniques for large oceans.
Besides tweaking some settings to speed up the handling of pixels that are not in the camera view, I am using Ecycles.
My experience so far is that the mesh carrying the baked maps has to be insanely dense to match the Ocean modifier's output. For now, I ended up using the Ocean modifier. I am just curious about your workflow, if you have experience with this. I understand the bake method should be much lighter for the computer, but I wonder whether the fact that the mesh needs to be super dense, and that the render is loading EXR files, does not make it slower in the end.
Using maps is not just a question of memory or speed.
The Ocean modifier in Displace mode displaces the mesh along Z using Generated texture coordinates.
If you want to displace a mesh according to its UV map, you need to use UV-mapped image textures.
So you need to bake the maps instead.
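The difference is easy to picture in plain Python. This is a toy sketch, not Blender's API: the grid of floats stands in for a baked EXR, and the nearest-neighbour lookup stands in for texture sampling. The point is that a UV-driven displacement moves each vertex along Z by whatever the map holds at that vertex's (u, v) coordinate.

```python
# Toy illustration of UV-driven displacement: each vertex samples a
# baked height map at its (u, v) coordinate and is offset along Z.
# The map here is a tiny row-major grid standing in for a baked EXR.

def sample_map(height_map, u, v):
    """Nearest-neighbour lookup of a row-major grid at UV in [0, 1]."""
    rows = len(height_map)
    cols = len(height_map[0])
    i = min(int(v * rows), rows - 1)
    j = min(int(u * cols), cols - 1)
    return height_map[i][j]

def displace(vertices, uvs, height_map, strength=1.0):
    """Return vertices moved along Z by the sampled height."""
    out = []
    for (x, y, z), (u, v) in zip(vertices, uvs):
        out.append((x, y, z + strength * sample_map(height_map, u, v)))
    return out

height_map = [[0.0, 0.5],
              [1.0, 0.25]]
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
uvs = [(0.0, 0.0), (0.9, 0.9)]
print(displace(verts, uvs, height_map))
# → [(0.0, 0.0, 0.0), (1.0, 0.0, 0.25)]
```

The Generated-coordinate path skips the UV lookup entirely, which is why the modifier cannot follow a custom UV layout.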
If you want to generate and control particles from the waves produced by this modifier, you have to drive them through image textures or vertex weight groups. The Ocean modifier does not produce weight groups, but it can bake images.
Images can be painted, modified, and blended with other images.
That is data an artist can tweak.
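As a toy example of that tweakability (plain Python, with small grids standing in for image pixels): a baked foam map can be multiplied by a hand-painted mask before it drives particle density, something the live modifier output cannot offer.

```python
# Toy illustration of tweaking baked data: multiply a baked foam map
# (values in 0..1) by a hand-painted mask, then clamp. The result
# could drive particle emission density.

def blend_multiply(baked, painted):
    """Per-pixel multiply of two same-sized grayscale grids."""
    return [[min(max(b * p, 0.0), 1.0) for b, p in zip(brow, prow)]
            for brow, prow in zip(baked, painted)]

foam = [[0.8, 0.2],
        [0.5, 1.0]]
mask = [[1.0, 0.0],     # artist paints out the right column
        [1.0, 0.0]]
print(blend_multiply(foam, mask))
# → [[0.8, 0.0], [0.5, 0.0]]
```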
Now, if you just want a detailed ocean, you don’t need a huge mesh in the viewport.
There is a viewport resolution and a render resolution; the render engine uses the render resolution directly.
Baked maps also get their size from the Render Resolution.
If you use adaptive subdivision, the rendered mesh will be tessellated according to the camera point of view.
At render time it should not end up much lighter than the brute-force modifier mesh.
But you will gain speed and RAM, because you are skipping the computation of what the ocean mesh should look like at the next frame.
The cost of generating the data has already been paid; the data to use is already stored on disk.
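For reference, adaptive subdivision can be enabled from Blender's Python API (bpy). This is a sketch based on recent Blender versions; property names are assumptions and may differ in your build, and the same switches exist in the UI.

```python
# Sketch of enabling Cycles adaptive subdivision on the ocean plane.
# Assumes recent Blender; property names may vary between versions.
import bpy

scene = bpy.context.scene
scene.cycles.feature_set = 'EXPERIMENTAL'   # adaptive subdiv is experimental
scene.cycles.dicing_rate = 1.0              # pixels per micropolygon edge

obj = bpy.context.object                    # the ocean plane
mod = obj.modifiers.new("Subdivision", 'SUBSURF')
mod.subdivision_type = 'SIMPLE'             # flat dicing, no smoothing
obj.cycles.use_adaptive_subdivision = True
```

With this on, the displacement detail comes from the baked maps at render time, while the viewport mesh stays light.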