Viewport synchronization

After wandering through the forum archive and reading several (old) related posts, I suspect there is little chance of finding an obvious solution to my need… but let’s try anyway:
In a word [1]: is there a convenient tip, or a method usable from the Python console, to echo Ctrl+MMB / Shift+MMB (zoom / pan) events to several viewports simultaneously? My goal is to pan and zoom in an orthographic top 3D view and obtain the same navigation effect in a distinct Camera view.
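For what it’s worth, here is the kind of thing I had in mind from the Python console — a minimal sketch, assuming the view state can simply be copied between viewports (`view_location`, `view_rotation` and `view_distance` are real `RegionView3D` properties; whether copying them carries over to a Camera view is exactly my question):

```python
# Sketch: mirror the pan/zoom/orbit state of one 3D viewport onto others.
# The function works on any objects exposing view_location, view_rotation
# and view_distance, so in Blender it can be tried on the
# area.spaces.active.region_3d structs of each VIEW_3D area.

def sync_views(source, targets):
    """Copy pan/zoom/orbit state from one view to the others."""
    for view in targets:
        view.view_location = source.view_location  # pan
        view.view_rotation = source.view_rotation  # orbit
        view.view_distance = source.view_distance  # zoom
        # In Blender you would also tag_redraw() the target regions
        # so the change shows up immediately.

# In Blender's Python console the call would look something like
# (untested sketch):
#   views = [a.spaces.active.region_3d
#            for a in bpy.context.screen.areas if a.type == 'VIEW_3D']
#   sync_views(views[0], views[1:])
```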

[1] the long long format post follows:
To give a bit more context, for anyone to whom my question may be of interest: I am currently using Blender as a GIS engineer, thanks --among other solutions-- to the BlenderGIS add-on. When it comes to dealing with 3D vector data, conventional GIS solutions unfortunately offer poor interfaces, and one is naturally pushed towards 3D modelers, which is where I went some years ago! Today I’m trying to set up a 3D scene reconstructed with MicMac (an amazing photogrammetry package developed by the French geographic institute) from an aerial survey.

Since parameters can be set extremely precisely within Blender, I had no trouble setting up what could be considered a digital photogrammetric workstation (I mean I had to precisely position cameras, rotations, focal lengths, chamber dimensions…, all the so-called /orientation/ having been computed beforehand with MicMac). It allows me to set up a scene composed of 2 cameras and their respective aerial images (“images as planes” in front of each camera), each displayed in a separate, dedicated local viewport (set to camera view). The scene is viewed through a mirror stereoscope, i.e. the operator can visualize a genuine third dimension. It is very similar to what is seen in an analytical stereoplotter. As the scene is also georeferenced I could incorporate a 3D point cloud (again produced with MicMac), and as if by magic it works out of the box! That is, the point cloud perfectly fits the restituted topography…
So, why did I do all that? I just wanted to see if Blender could do the job, and it does! But the initial goal was to be able to visually check the point cloud quality, and to digitize some extra objects which MicMac could not discriminate. Well, it works pretty well; the only remaining issue is ergonomic: in order to pan and zoom the scene I have to do it twice (once in each viewport), then re-align the views so as not to lose the stereoscopic effect. It is a minor drawback compared to the service provided, but if I could find a solution, we would (once again) have proof that Blender is a truly versatile toolbox!

Thanks for reading me guys (and girls),

oh, and for those who just need to cross their eyes to perceive the third dimension, here’s what it looks like:

The easiest way I can think of is to view from several cameras: parent both cameras to an empty, and then manipulate the empty to move both views at once. You’ll have to un-link the view from the scene’s active camera first, but the rest should be straightforward.
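A plain-Python illustration (no bpy, hypothetical numbers) of why this preserves the stereo geometry: each camera’s world position is the empty’s position plus a fixed local offset, so moving the empty moves both cameras together:

```python
def world_position(parent_pos, local_offset):
    """World position of a child parented to an empty at parent_pos."""
    return tuple(p + o for p, o in zip(parent_pos, local_offset))

# Fixed stereo baseline: the local offset of each camera from the empty.
left_offset  = (-0.5, 0.0, 2.0)
right_offset = ( 0.5, 0.0, 2.0)

empty = (0.0, 0.0, 0.0)
before = (world_position(empty, left_offset),
          world_position(empty, right_offset))

empty = (10.0, 3.0, 0.0)   # "pan" the whole rig by moving the empty
after = (world_position(empty, left_offset),
         world_position(empty, right_offset))

# The left-to-right camera offset is identical before and after the move,
# so the stereo geometry is untouched.
```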

Thank you for these clues, dudecon. Un-linking the view from the scene’s active camera is indeed what I did in each viewport; the option is a bit tricky to find, but it works nicely. The idea of parenting to an empty is enticing, but in this context I can’t move my camera setup, because the cameras’ relative and absolute positions in space are the fundamental condition for reproducing the /real/ 3D metrics. Alternatively I could consider moving everything as a block (cameras, images, 3D model) during my session, then at the end restoring everything to the initial positions… mhh, dodgy! :smiley:
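If I ever tried that dodgy route, the bookkeeping would look something like this — a sketch with plain tuples standing in for Blender’s `matrix_world` copies (object names are hypothetical):

```python
# Sketch of the "move everything as a block, restore at the end" idea:
# snapshot every object's transform before the session, restore after.
# In Blender the snapshot would hold obj.matrix_world.copy() per object.

def snapshot(objects):
    """Record the current transform of every object."""
    return dict(objects)

def restore(objects, saved):
    """Put every object back where it was at snapshot time."""
    for name, pos in saved.items():
        objects[name] = pos

scene = {"Camera.L":  (-0.5, 0.0, 2.0),
         "Camera.R":  ( 0.5, 0.0, 2.0),
         "PointCloud": (0.0, 0.0, 0.0)}

saved = snapshot(scene)

# ... during the session, shift the whole block together ...
for name in scene:
    x, y, z = scene[name]
    scene[name] = (x + 10.0, y, z)

restore(scene, saved)   # everything back to its initial position
```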

Thx again!