3-point editing in the VSE - the basic missing functions

How much can be done in Python as add-ons, and how much of this needs to be added to Blender itself?


Video Player:
Add an option to select a source mode in the Video Player, so the options for the Video Player window would be:

  • Play Sequencer (as it is)
  • Play Source (new)

Source Video Player:
When the Video Player is in the new Source Player mode, add functions to:

  • Drag and drop a camera from the Outliner into the Source player.
  • Open a camera from the Outliner in the Source player (double-click).
  • Drag and drop from the File Browser into the Source player.
  • Open a file from the File Browser in the Source player (double-click).

Timeline Window Options:

Add an option to link the Timeline Window controls to the Source Video Player, so the options for the Timeline Window would be:

  • Control Sequencer Video Player (as it is).
  • Control Source Video Player (new).

Timeline Window Functions - when linked to the Source Video Player:

  • Set source in point.
  • Set source out point.

Timeline Window Functions - when linked to the Sequencer Video Player (and the Source Video Player is enabled):

  • Set sequencer in point.
  • Set sequencer out point.
  • Function/button to insert the source in/out range into the sequencer between the sequencer Timeline Window in/out points (ripple moves the following strips to the right); see the rough sketch after this list.
  • Function/button to use the source in/out range to overwrite the sequencer between the sequencer Timeline Window in/out points.
  • Function/button to lift the strip/strips between the in/out points in the selected channel(s), leaving a gap.
  • Function/button to remove (ripple to close the gap) the strip/strips between the in/out points in the selected channel(s).
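
As a rough illustration of the insert operation, something along these lines already seems possible with the current Python API, assuming the source and sequencer in/out values come from properties an add-on stores itself (the function name and parameters are made up for the sketch):

    import bpy


    def ripple_insert(scene, source_path, src_in, src_out, seq_in, channel=1):
        """Insert the [src_in, src_out] part of a movie file at seq_in and
        ripple the following strips to the right. The in/out values are
        assumed to come from properties an add-on would set elsewhere."""
        se = scene.sequence_editor or scene.sequence_editor_create()
        length = src_out - src_in

        # Shift every strip that starts at or after the insert point.
        # (Strips that span the insert point would have to be split first;
        # that step is left out of this sketch.)
        for strip in se.sequences:
            if strip.frame_final_start >= seq_in:
                strip.frame_start += length

        # Add the new strip, then trim it to the source in/out points.
        new = se.sequences.new_movie("source_clip", source_path,
                                     channel=channel, frame_start=seq_in - src_in)
        new.frame_final_start = seq_in
        new.frame_final_end = seq_in + length
        return new

The part I can't judge is the UI around it: the buttons, the visible in/out markers and the channel highlighting.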

Sequencer:

  • Select one or more channels for operations in the Sequencer window and highlight them.

I know that there are several add-ons which attempt to work around the current limitations, and I know about this thread: https://blenderartists.org/forum/showthread.php?299319-Help-redesign-the-Blender-Video-Sequence-Editor!
But the question for me is: if we keep the list sparse, how much do we need the devs to code into Blender, and how much of this can be done with Python add-ons?

EDIT: After further investigation I’ve narrowed it down to this:

It seems to me that all we need in the Timeline view options is a way to select which sequence (in the current scene) to Lock Time to.

And in the Video Sequence Editor, a scene selector would have to be added to the toolbar, so a scene other than the current one could be linked to/controlled. For example, when opening a new video file, a new scene would be added containing only that strip, and that scene would automatically be selected in the Scene Selector of the “source” player. This way the “source” player can play the new clip/strip, and in and out points can be found before the trimmed clip is inserted into the sequence of the current scene.

It could look something like the UI mockup attached to this post.
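
Part of that workflow already looks scriptable today. A minimal sketch of the “open a file into its own source scene” step, assuming one new scene per clip (the function and names are made up; the scene selector UI itself is a separate question):

    import os
    import bpy


    def open_in_source_scene(filepath):
        """Create a fresh scene holding only the given movie strip, so a
        'source' player could be pointed at it. Names are just examples."""
        src_scene = bpy.data.scenes.new(name="SRC_" + os.path.basename(filepath))
        se = src_scene.sequence_editor_create()
        strip = se.sequences.new_movie("source", filepath, channel=1, frame_start=1)

        # Match the scene's frame range to the clip so playback covers all of it.
        src_scene.frame_start = 1
        src_scene.frame_end = strip.frame_final_end - 1
        return src_scene, strip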

Which leads me to the questions about what is and isn’t currently possible with the Python API:

  • Is it possible to let the Timeline Lock Time to a specific sequencer (by number)?
  • Can a video file be added to a hidden scene’s sequencer?
  • Can the Scene Selector combo box be added to the Video Sequencer toolbar? (A rough attempt is sketched after this list.)
  • Can the Video Sequencer be set up to control/lock to a sequencer in a different scene?
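
On the third question, adding the selector itself looks doable from Python, since the sequencer header can be extended with a draw function. A rough sketch, with invented property names:

    import bpy

    # A string property on the scene holding the name of the "source" scene.
    bpy.types.Scene.source_scene_name = bpy.props.StringProperty(name="Source Scene")


    def draw_source_scene_selector(self, context):
        # Searchable drop-down listing every scene in the file.
        self.layout.prop_search(context.scene, "source_scene_name",
                                bpy.data, "scenes", text="Source")


    def register():
        bpy.types.SEQUENCER_HT_header.append(draw_source_scene_selector)


    def unregister():
        bpy.types.SEQUENCER_HT_header.remove(draw_source_scene_selector)

The selector is only UI, of course; whether the Timeline and player can then be made to actually follow that other scene’s sequencer (questions 1 and 4) is the part I’m unsure about.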

Even a sparse list, as well intentioned as it is, suffers from one issue: you want to make the VSE an NLE. Ton has said that this is not his intention and that any more effort in this direction would be a waste of effort.

Once you introduce a source media metaphor, you then need to include the track patching abstraction: that is, how you direct the source to land in the timeline. Once this occurs, you decide how to ripple edit based on rules.

It all becomes fiddly quite quickly. But of course I agree with you, and it’s the reason for my handle (3pointedit) all these years :wink:

One thing that is important though, and could be the most important, is the ability to attach metadata to source media. For example, recording multiple in/out points and possibly attaching keywords to them. Think of them like the markers in the VSE timeline, but applied to assets instead of scene data.
It would be useful even for animation source media, so that you could append iteration information during a project, and potentially even director’s notes.
Have you seen the terrific add-on http://easy-logging.net that tries to achieve this sort of value-add?
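
For what it’s worth, the raw storage side of that looks doable from Python already, e.g. by registering a small collection of in/out-plus-keyword entries in the blend file (all names here are invented; the UI around it is the real work):

    import bpy


    class SourceLogEntry(bpy.types.PropertyGroup):
        """One logged in/out decision for a piece of source media."""
        filepath = bpy.props.StringProperty(name="File", subtype='FILE_PATH')
        frame_in = bpy.props.IntProperty(name="In", default=1)
        frame_out = bpy.props.IntProperty(name="Out", default=1)
        keywords = bpy.props.StringProperty(name="Keywords",
                                            description="Comma-separated tags")


    def register():
        bpy.utils.register_class(SourceLogEntry)
        bpy.types.Scene.source_log = bpy.props.CollectionProperty(type=SourceLogEntry)


    def unregister():
        del bpy.types.Scene.source_log
        bpy.utils.unregister_class(SourceLogEntry)

    # Usage, once registered:
    #   entry = bpy.context.scene.source_log.add()
    #   entry.filepath = "//footage/shot01.mov"   # example path
    #   entry.frame_in, entry.frame_out = 120, 480
    #   entry.keywords = "interview, close-up"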

The main question is: how much effort is needed from the devs, and how much can be done by add-on coders?

My guess is that adding:

  • a Source Video Player window,
  • with the ability to open cameras/files (not in the sequencer),
  • linked to a Timeline Window control,
  • with in and out points marked visually,
  • and the ability to visually select a channel in the sequencer (to show where the operation would take place)

would be the only parts that would have to be added to the Blender core (with an API for Python), and the rest of the functionality could be coded in Python as add-ons?

I don’t know the limitations of the VSE Python implementation, so that’s the reason for my question. Stuff like logging, as you mention, can be handled in Python, I guess.

Looking at this video of Easy-Logging 2.0 (https://vimeo.com/103623427), it seems to me that the main missing functionality is letting the Video Player, the Timeline and the Sequencer link to a different scene than the currently visible one, in order to have both a source video player and a project sequencer video player in the same UI, like the screenshot above.

The innovative Easy-Logging 2.0 project definitely gets my vote for a script to be added to 2.79, and it has a lot of the functions needed to solve what I mention above, so it gives me faith that a lot of what I mention in the first post is already possible through Python (and also Kino-raw-tools).

So the add-to-Blender (not currently solvable through Python) wishlist is:

  • Add an option to let the Video Player, the Timeline and the Sequencer link to a different scene (with its own sequencer) than the currently visible one.
  • Load files/cameras into a different scene (with its own sequencer) than the currently visible one.
  • Visually mark In and Out points in the Timeline window, if the preview range is too limiting during playback (it seems to work okay in Easy-Logging?); see the preview-range sketch after this list.
  • Visually show the selected channel(s), e.g. to insert into or delete from.
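
For the in/out marking item, one thing an add-on could probably co-opt today is the preview range, which already draws a visible, restorable marker pair in the Timeline. A tiny sketch (with the playback limitation noted in the list):

    import bpy


    def mark_in_out(scene, frame_in, frame_out):
        """Show an in/out pair in the Timeline by (ab)using the preview range.
        Caveat from the list above: playback is then restricted to that range."""
        scene.use_preview_range = True
        scene.frame_preview_start = frame_in
        scene.frame_preview_end = frame_out

    # e.g. mark_in_out(bpy.context.scene, 100, 250)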

You can fake a player by co-opting another scene. These other scenes act as individual sources, holding the metadata and allowing you to scrub the vision. But it means that you always have to switch scenes/views. You can’t have source and record in the same scene unless you code it as you suggest.

Another basic issue is performance. The lack of multi-threading often leads to rough playback.

I haven’t used any current pro NLEs, but editing several 1920x1080 ProRes streams (e.g. 4:2:2 at ~100 Mbps) doesn’t seem all that unreasonable on a modern machine (monitors would have to be set to match the project frame rate and would probably need some additional sync method other than just free-running at the same Hz).

I’ve yet to find an open source Linux NLE that will play even one ProRes stream without frequent frame drops/duplicates. Maybe it’s not entirely the fault of the application but an Xorg or Linux issue…

The problem is the ProRes decoder from FFmpeg. On Linux you would be better off using a different format like DNxHD; anything but ProRes for smooth playback.

I’m still trying to narrow it down to how few things would have to be hard-coded into Blender in order to get 3-point editing in the same scene (without having to change scenes).

A new trick of the VSE scene strips is the ability to channel sound from another scene. It’s effectively a mixdown of that scene, as the audio all gets mixed into the one strip. You can even fade the mixed audio (keyframe its volume) in the current scene; sadly, this feature is not available for meta-strips. It would be very helpful to meta-strip video/audio together (so they stay synced) and trim that single strip as required, but you have to step into the meta-strip to change the audio volume.
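
For reference, adding such a scene strip from Python and fading its mixed-down audio should look roughly like this (the scene and strip names are examples, and I’m assuming the scene strip exposes the same animatable volume the UI shows):

    import bpy

    edit_scene = bpy.context.scene
    shot_scene = bpy.data.scenes["Shot_01"]        # example scene name

    se = edit_scene.sequence_editor_create()
    strip = se.sequences.new_scene("shot01", shot_scene, channel=2, frame_start=1)

    # Fade the mixed-down audio of the scene strip from within the current
    # scene (assumption: the scene strip's volume is animatable here too).
    strip.volume = 1.0
    strip.keyframe_insert("volume", frame=1)
    strip.volume = 0.0
    strip.keyframe_insert("volume", frame=49)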

Sounds interesting, though it may not directly solve these challenges. Do you have a link, so I can read up on it?

Scene strip audio started here, I think: https://developer.blender.org/T38500
Here is an example of the scene strip after it was implemented: https://gooseberry.blender.org/mathieus-blender-editing-tutorial/ (look at the video at the 9-minute mark).

Any thoughts or feelings about these suggestions?

Thank you for the heads-up on the Add Scene Strip function.

Here’s a video on the power of 3-point editing:

Another thought on how to let the “source player” play and mark the source strip in the same scene as the “recorder”: the new source strip could be added in a new channel, e.g. ten hours outside the work area in the sequencer, and then, when it is inserted into the work area with in and out points, it would be moved back from the temporary position (+10 hours). This way the “source player” could play the source clip from that specific channel and thereby leave the “recorder player” untouched (well, except for the playhead during the operation).

Would this work?
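
In code terms the trick would just be a fixed offset in frames. A rough sketch of the idea, with made-up function names (the actual trimming and insert step is left out):

    import bpy

    scene = bpy.context.scene
    fps = scene.render.fps / scene.render.fps_base
    PARK_OFFSET = int(10 * 60 * 60 * fps)      # "ten hours", expressed in frames


    def park(strip):
        """Move a freshly added source strip ten hours past the work area."""
        strip.frame_start += PARK_OFFSET


    def unpark(strip, target_frame):
        """Bring the (already trimmed) strip back, placing its first visible
        frame at target_frame in the work area."""
        strip.frame_start += target_frame - strip.frame_final_start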

Hahaha, everything old is new again… do you mean like this? Jump to 1:02.

Yeah, sort of. But if the interface has both a “source” sequence player and a “record” sequence player, you could add all your source material to channel 1, link the “source sequence player” to channel 1, and link the “record sequence player” to channel 0.

So the “source sequence player” would show the source material and the “record sequence player” would show the edited sequence.

But at the end of the day it’s a mess having both linked to the same playhead, so the other suggestion above would come into play: the option to link the Timeline view to the “source sequence player”, in order to have two individual sequence players responding to two individual playhead positions.

Yeah, you can’t have 2 playheads. You can have the ghost offset playhead. Perhaps you could hack the offset so that the master stays put and the source changes the ghost offset further up the timeline?

It feels like I’m just walking in your footsteps, so I’m wondering what bare-bones features you think need to be implemented to get “one scene” 3-point editing working?

Well, my issue was with metadata for source media, that is, retaining in/out decisions. Easy_logger pretty much solved that, but it’s not elegant, having to use alternate scenes and tag clouds.

I guess the problem is a source viewer, but that just is not necessary from an animation layout point of view. As far as the Foundation is concerned, all you need to do is drag in a rendered strip and trim it on the timeline. Source viewers are only needed when there’s a lot of footage to go through and you want to attach meaning or alter attributes for later use.

As designed now, there is simply no allowance for source media manipulation or metadata storage in the UI or the VSE back end. It seems like a whole lot of work to lift it, and for what, when you can get Resolve for free anyway?