Is it possible to use VSE output as feed for a UV texture?

Hello everyone!
This is my first post here and I am really new to Blender. I am evaluating Blender right now to find out whether it might be the go-to tool for a project I have coming up.
One of the features that brought me to Blender is the great variety of content-creation tools it unites under its hood.

So I wonder: would it be possible to use the output of the Video Sequence Editor (VSE) as a feed for a UV texture on a mesh?
I searched for a good while in the manuals, on the internet in general, and also here on the forums, but haven't really found any definite information on how to achieve such a thing.

What I am after: in 3D I have a set of six screens mounted to sliders on a wall. Each of these screens needs its own distinct feed (a video sequence). My aim is, ideally, to be able to do basic editing (simple cuts/cross dissolves/adding text) in the VSE and see the effect on my UV-mapped screens in the 3D viewport, so that I can sync the VSE “feed” and the animation of my screens/sliders (pseudo-)interactively.

Would something like that be possible?!?


No. The 3D scene is for processing; the Compositor and VSE are post-processing. The order is: 3D scene -> Compositor -> VSE.

Hmm, yes, I know that is how it generally works.
But I was thinking of somehow combining the “departments”, a bit like what was attempted with the “VSE Compositor Node Proxy”, which feeds the video sequence directly into the Compositor and lets them sort of interact. In this case, though, not feeding it into a Compositor node but into a Texture node.

There must be a way, no?

Well… it’s right that the VSE is normally the final step in the production line. And therefore your attempt is a bit outside the rules, so to speak.

I don’t have a solution, but from a purely technical point of view, maybe the following will lead you to one:

  1. A “projection” of a video sequence onto a UV map is just as easy as any other picture projection onto a UV.
  2. In order to “see” this, you need the animation to step through the single pictures of the video sequence along the timeline, frame by frame.
  3. To finalize, you have to render out the combination of sequence and mesh.
    Well, so far…
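The steps above can be sketched with Blender's Python API (bpy). This is only a rough sketch, not a tested solution, and it assumes the VSE timeline has already been rendered out to a numbered image sequence on disk; the file path, clip length, and object name below are all made-up placeholders:

```python
import bpy

# Hypothetical path: the VSE output rendered as a numbered image sequence
img = bpy.data.images.load("//vse_out/0001.png")
img.source = 'SEQUENCE'  # treat the files as frames, not a single still

# Build a simple node material that projects the sequence via the mesh's UVs
mat = bpy.data.materials.new("ScreenFeed")
mat.use_nodes = True
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = img
tex.image_user.frame_duration = 250      # assumed clip length in frames
tex.image_user.use_auto_refresh = True   # advance the frame during playback

# Wire the image color into the material's shader
bsdf = mat.node_tree.nodes["Principled BSDF"]
mat.node_tree.links.new(bsdf.inputs["Base Color"], tex.outputs["Color"])

# Assign the material to one of the screens (hypothetical object name)
bpy.data.objects["Screen_01"].data.materials.append(mat)
```

With `use_auto_refresh` enabled, the image sequence advances together with the timeline in the viewport, which is roughly the "see it as animation" part of step 2. This script only runs inside Blender, so it cannot be executed standalone.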

In order to fulfill something like what you want during processing, I guess it would be necessary to do two sequential (nearly simultaneous) renders: a) the render of the current frame of the sequence, whenever it changes, and b) (second render) the viewport render of the projection of the outcome of the first render.

Only as a rough, not finalized thought: in principle, pushing the outcome of the video render into the input of the UV animation.

BUT HOW!? No real idea. :slight_smile: But maybe it helps anyway, because a direct VSE->UV is (as mentioned above) exactly what is not possible, due to the fact that the VSE itself has no render output other than a video file. Something like streaming video could be another hint. But here, too… no clue.

P.S.: Maybe this could be a “goal” for the Python gurus; namely, synchronizing the animation timeline with the VSE timeline…
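To illustrate what such a script might start from, here is a minimal, untested sketch using bpy's frame-change handlers. The scene name and image name are purely hypothetical, and it assumes the VSE edit lives in its own scene and writes its output to an image file that the screen texture reads from:

```python
import bpy

VSE_SCENE = "VSE_Edit"      # hypothetical: the scene holding the sequence edit
FEED_IMAGE = "screen_feed"  # hypothetical: the image datablock used as UV texture

def sync_feed(scene):
    """On every frame change, lock the VSE scene's playhead to the
    3D scene's playhead and force the feed texture to re-read its file."""
    vse = bpy.data.scenes.get(VSE_SCENE)
    if vse and vse.frame_current != scene.frame_current:
        vse.frame_current = scene.frame_current
    img = bpy.data.images.get(FEED_IMAGE)
    if img:
        img.reload()

# Register the handler so it fires during playback and scrubbing
bpy.app.handlers.frame_change_pre.append(sync_feed)
```

This only keeps the two timelines in step; it does not trigger the "first render" of the VSE frame itself, which would still have to happen by some other means. Since it depends on bpy, it can only run inside Blender.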

Thanks, Mike. Yes, I was kind of afraid already that there is no easy go-to solution out of the box…
Well, I was hoping, though; after all, why have everything under one roof when it cannot talk to each other back and forth… :-
Yeah, maybe I need to convince some “Python guru” to see what might be possible… :-]

Well, sometimes this “under one roof” is no guarantee of combinability… There is always something that, by nature, has to come first… Thus it’s tricky to discuss the question: what has to come first, the chicken or the egg? :slight_smile:

I’d vote for the egg :wink:
Anyway, it looks like I need someone who is proficient in Python to solve my problem…