Time IPO for a vid texture???

I’m not sure if this should go here or in Texturing, but I think the only workaround is with nodes:

I thought I was being clever: I made a vid of my eye and mouth movements and converted it to an alpha’d greyscale to project onto an object. I then planned to use a texture time IPO to animate facial movements by simply changing the time to the frames I wanted, to match the script and audio.

I’m so clever that I didn’t realize that there is no time IPO for textures or materials!

I can’t figure out how to make the texture nodes’ Time node change the frame of the video texture.

I can’t use the time IPO on the object because I want to be able to animate its movements separately.

Maybe you want Holly’s script?

http://blenderartists.org/forum/showthread.php?t=155800

Thanks Atom. The good and bad news is that it is exactly what I want! Bad in that, gulp, it’s certainly beyond my realm of experience; a good time to learn, I guess. All this just to mimic a time IPO for textures, sigh…

I have heard that you will be able to animate anything in version 2.5. Do you think that will include a time IPO for vid textures?

On closer examination, this is not what I want. Her node seems to just let you pick which of 10 textures to display over any given frame range. It’s a multiplexer for static images.

I want something that really works like a time IPO: I want to select which frames of an animation are used as textures for a given material at any given time during the render. It’s pretty frustrating to see so many folks looking for this feature, and Holly working so hard to assemble a workaround. What we really need seems to be already incorporated into other parts of Blender, just not textures.

To make a video material time IPO:

Can anyone think of a way to use a Time texture node input to select which frames of an animation to use for the texture? What I want to do is use the frame number of the render to select the frame number of the animation (in a flexible and controllable way) and then apply that frame as a texture in the selected material for that render frame.

Why don’t you just pre-render what you need in the order that you need it? Sometimes, using Blender, you have to out-smart it to make it do what you want.

The only other solution I can think of is to write custom Python code (like Holly did) to load the frame you want from an image sequence when you want it, then apply that static image to the object in your animation that needs it. Your code would go in the frameChanged event of a scriptlink.
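Something along these lines might serve as a starting point: a minimal frameChanged scriptlink sketch against the 2.4x Python API, untested, where the sequence path, texture name, and mapping function are all placeholders for illustration:

```python
# frameChanged scriptlink sketch (Blender 2.4x Python API), untested.
# Loads one frame of a pre-rendered image sequence and assigns it to
# a texture; the names below are hypothetical -- adjust to your setup.
import Blender

SEQ_PATTERN = '/render/face/face_%04d.png'  # hypothetical sequence path
TEX_NAME = 'FaceTex'                        # hypothetical texture name

def face_frame_for(render_frame):
    # This mapping *is* the "time IPO": turn the render frame into
    # whichever movie frame you want shown. Replace as needed.
    return render_frame

frame = Blender.Get('curframe')
img = Blender.Image.Load(SEQ_PATTERN % face_frame_for(frame))
Blender.Texture.Get(TEX_NAME).image = img
```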

Been looking into this a bit since it does seem to have some very useful possibilities, but I also see a few limitations the OP might not have anticipated.

First, since it’s not a “native” Blender function, it would have to be scripted. Not a huge problem, but probably not one that will involve nodes, since AFAIK PyNodes are a ways off(?)

Looking at the API for 2.48a (the most current), there are two instance variables for the Image object that might help in this regard – animOffset and animStart – which I assume correspond to Offs: and StartFr: in the Image>Movie UI panel (if not, I’m sure there are variables that do – needs testing). Using Offs: & StartFr: in the UI, a movie can be set to play at any point in the Timeline, starting at any frame in the movie, which seems to be the OP’s main goal.

However, these variables aren’t animatable via the UI, so some other means will need to be found, such as a proxy object in the Scene that allows animation by “normal” means and returns suitable values a script can use to set the movie Texture’s parameters. Example: an Empty (call it “MovieProxy”) is animated in two dimensions, X & Z, where X supplies the value for Offs: and Z supplies the value for StartFr:. Numerical types will need to be converted to match the variables (e.g., Float to Integer). The user can basically ignore the Empty in the Scene and use only its associated IPO curves (probably in Constant mode) to do the dirty work.
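A rough sketch of that proxy idea, untested and assuming the animOffset/animStart attributes found in the API really do map to Offs: and StartFr: as described; the Empty and movie image names are hypothetical:

```python
# frameChanged scriptlink sketch (Blender 2.4x Python API), untested.
# Drives a movie texture's Offs: and StartFr: fields from an Empty's
# animated X and Z location channels each time the frame changes.
import Blender

proxy = Blender.Object.Get('MovieProxy')  # hypothetical proxy Empty
img = Blender.Image.Get('FaceMovie')      # hypothetical movie image

# IPO curves return floats; the Image fields want integers,
# hence the Float-to-Integer conversion mentioned above.
img.animOffset = int(round(proxy.LocX))   # Offs:
img.animStart = int(round(proxy.LocZ))    # StartFr:
```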

However, there’s a cloud attached to this silver lining, in that at any one time during a sequence there may be a need for different parameters in use simultaneously. If only one MovieProxy is used (and only one can be used per Texture/Material, as I currently see it) this couldn’t be done, as an IPO cannot return two values at once from any one curve. The need for simultaneous values arises if you visualize what’s happening: from a movie that is a continuous series of images, say the first of two frame ranges uses movie frames 10-30. Immediately following that, the same MovieProxy jumps to frames 60-90 of the movie Texture. The abrupt transition from frame 30 to 60 of the image texture will be quite visible unless the movie is produced very carefully to prevent such “jump cuts” between the portions to be used – a nearly impossible task, I’d think.

So, it seems likely that at least two essentially identical Materials using the same movie Texture would be required, and in addition to the MovieProxy modulations of Offs: and StartFr: there would have to be transitions in the Materials’ Alpha channels – basically very fast cross-fades of the time-disjointed sections of the movie Texture over one another, as a way of disguising the jumps from one section of the movie Texture to another. Sort of an “A-B roll” setup, if you’re familiar with that editing terminology.
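For illustration, a hedged sketch of what that alpha cross-fade might look like in the same frameChanged scriptlink, untested; the material names, cut frame, and fade length are all placeholders:

```python
# Sketch of the "A-B roll" alpha cross-fade around one known cut
# (Blender 2.4x Python API, untested; names and timings hypothetical).
import Blender

CUT_FRAME = 100  # render frame where the movie texture jumps
FADE_LEN = 4     # frames over which to cross-fade the two Materials

frame = Blender.Get('curframe')
# Fade factor t: 0.0 well before the cut, 1.0 well after it.
t = (frame - CUT_FRAME + FADE_LEN / 2.0) / FADE_LEN
t = max(0.0, min(1.0, t))

mat_a = Blender.Material.Get('MatA')  # shows the pre-cut section
mat_b = Blender.Material.Get('MatB')  # shows the post-cut section
mat_a.alpha = 1.0 - t
mat_b.alpha = t
```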

Does this sound like too much work for the resultant effect? Good question.

So far this is all just a “thought experiment” based on a couple of hours playing around with movie textures and digging in the API. No doubt there are problems I haven’t anticipated, but then again it may turn out to be easier than I have described if some genius scripter can shoot some holes in my reasoning.

It also sounds like an ideal task for the VSE – small segments of a movie placed in random time order with user-controlled transitions between – if only one could pipe the VSE output into a Material so the results could be previewed before rendering out the VSE content.

The Materials Nodes Editor’s lack of Time and Image input nodes such as those in the Compositor prevents using it for a similar approach.

A nodal approach, even if an Input>Time node existed for Materials, is also not practical, because Time nodes are so much less flexible than a true IPO curve; curves in Time nodes can be created, but making them work with any precision is a mind-numbing task (I’ve tried a few instances of this) due to the lack of accurate scales and the restrictions of the node workspace. The IPO Editor is made for this kind of thing. As long as I’m pipe-dreaming, how about a node for the Node Editor (Material and Compositor) that takes values from an IPO, like a super-accurate Time node?

That idea of prerendering the facial animation certainly seems like the most practical, Atom, and is probably what I’ll do. The whole purpose of this was to make an easy way to do quick talking-head animations; it seems to have become a bit of a Medusa.

Thanks for the deep thought experiment, chipmasque. I think the crossfade could be done, with significant pain, if one used two texture slots at all times, alternating which one you were transitioning into: you could do an alpha mix, and this could even have decent defaults. You would have to constantly check for frame-number changes of more than, say, 2 frames, since it would be nice to change the playback frame rate, e.g. to make eyes blink fast or slow.
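A tiny, untested sketch of that check, just plain Python with a hypothetical threshold, comparing the movie frame wanted on consecutive render frames to decide when a slot swap and alpha mix should kick in:

```python
# Pure-Python sketch of the jump detector (untested; era-appropriate
# Python 2 as bundled with Blender 2.4x; threshold is a placeholder).
JUMP_THRESHOLD = 2  # jumps larger than this trigger a cross-fade

def needs_crossfade(prev_movie_frame, cur_movie_frame):
    # Small steps are ordinary (possibly re-timed) playback;
    # large steps are hard cuts that need disguising.
    return abs(cur_movie_frame - prev_movie_frame) > JUMP_THRESHOLD

print needs_crossfade(30, 31)  # False: normal playback
print needs_crossfade(33, 30)  # True: e.g. a fast blink played backwards
print needs_crossfade(30, 60)  # True: hard cut, swap slots and alpha-mix
```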

It certainly sounds like a time IPO would be nice, but as you pointed out, there would be no nice smooth transitions, requiring pretty tight editing of the vid texture.