So back when I used Premiere (and that was a VERY long time ago), it had the step of RENDERING, i.e. you click Render and it produces the final output video. I was wondering if that has changed since then?

The problem as I understand it: say there are 2 people talking, with camera 1 focused on Guy 1 and camera 2 focused on Guy 2. The final movie has to show the video of whoever is talking AT THE MOMENT, i.e. the picture must cut between the two of them. So what's needed is: start playing camera stream 1 (while Guy 1 is talking); when it switches to Guy 2, the director/editor presses "2" or whatever, and the editor now has to stop reading from video file 1 and switch to video file 2, which is what gets written into the final movie!

Surely TODAY'S hardware (modern SSDs) must be capable of doing this WITHOUT RENDERING? So, is there software capable of it these days? (Indeed, has Premiere itself changed? I wouldn't know!) Out of all the many FOSS video editors there are, there must be one or a couple which do this, right?
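The press-a-number-to-cut workflow described here is essentially recording an edit decision list (EDL): a log of (time, camera) key presses that tells playback or export which file to read from at each moment. A minimal plain-Python sketch of that idea; all the timestamps and function names below are made up for illustration, not taken from any real editor:

```python
# Each director key press is logged as (time_in_seconds, camera_number).
# The resulting edit decision list (EDL) is a list of hard cuts; the
# final output just reads from whichever file the EDL names at time t.

def build_edl(key_presses, duration):
    """Turn (time, camera) key presses into (start, end, camera) cuts."""
    cuts = []
    for i, (t, cam) in enumerate(key_presses):
        end = key_presses[i + 1][0] if i + 1 < len(key_presses) else duration
        cuts.append((t, end, cam))
    return cuts

def camera_at(edl, t):
    """Which camera's file should be read at time t?"""
    for start, end, cam in edl:
        if start <= t < end:
            return cam
    return None

# Director presses "1" at 0s, "2" at 4.5s, "1" again at 9s.
edl = build_edl([(0.0, 1), (4.5, 2), (9.0, 1)], duration=12.0)
print(edl)                   # [(0.0, 4.5, 1), (4.5, 9.0, 2), (9.0, 12.0, 1)]
print(camera_at(edl, 5.0))   # 2
```

Note the sketch never touches pixel data: that is why this part of the job is cheap, and why the expensive step in practice is decoding and re-encoding the streams, not the switching itself.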
What you are looking for, as far as hardware is concerned, is a real-time hardware solution. It is not a question of whether you can edit in real time or not: you can configure a system to edit in real time with an HDD and a good graphics card. These days, the main reason to render is to commit your edits to a video stream for use in various media streaming services such as YouTube, or to add to a real-time live stream presentation.
The next level up from that is real-time streaming of an edit at broadcast standards in HD. That requires specific hardware. Your NLE can play it back, but to do so flawlessly as a broadcast stream it has to be played through hardware designed for this.
So there are two options here. 1) A beefed-up system that handles a certain amount of editing, with relatively unlimited tracks and FX but with limits on what it can do in “real time”; as the edit gets more complex, it will simply start to drop frames. 2) A hardware system with certain track limitations that is guaranteed to play out as a broadcast stream with no dropped frames, limited to the real-time effects and number of tracks the hardware can handle.
This has been the same for a few decades. What has changed is the complexity possible in both cases.
Use one scene to edit videos and another to host the 3D scene; import the strip with a multicam modifier and keyframe the camera shift.
No, that isn’t what I was talking about; I meant a -video-. By “good gfx card”, you mean…?
You just said a bunch of stuff I don’t understand. What “3D scene”?
Go to the VSE and add a scene strip.
This will only work if you add more than one scene, and you have to be on that other scene.
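The “keyframe camshift” tip above boils down to: the multicam strip stores which source is currently active, and that value is keyframed over time with constant (stepped) interpolation, which is exactly a hard cut. A tiny plain-Python sketch of how such keyframes resolve; the frame numbers and source IDs are made-up examples, not anything from an actual project file:

```python
# Keyframes map a frame number to the active multicam source.
# Between keyframes the value holds constant (stepped interpolation),
# which gives a hard cut from one camera to the next.
keyframes = {1: 1, 120: 2, 250: 1}  # frame -> source channel

def active_source(keyframes, frame):
    """Return the source set by the last keyframe at or before `frame`."""
    best = None
    for kf_frame in sorted(keyframes):
        if kf_frame <= frame:
            best = keyframes[kf_frame]
    return best

print(active_source(keyframes, 130))  # 2: the frame-120 keyframe is active
```

So “keyframing the cam shift” is just inserting a new entry in this table at the frame where the cut should happen; playback then reads the stepped value.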
Everything is video now; even films in theaters are technically video. “Broadcast standard” relates to video as it is broadcast in real time, meaning TV, cable, or streaming services.
The two types of video editing I mentioned are the only two available for computer video editing. a) Offline editing in “real time”: Vegas Pro, Premiere, Media Composer, Final Cut, etc. b) Online, basically broadcast-standard editing, which is hardware-based true real time. Networks and streaming services, as well as digital projectors, use b). You can also buy video editing hardware such as Blackmagic or Avid gear that allows you to edit and/or switch programming live, broadcast standard with no dropped frames, and that is considered the same as b). Some people edit video with this true real-time method.
This is the context of my above message.
The offline type is the type that is usually rendered. The rendered result can be streamed in real time or broadcast over a television network, and can also be rendered to a digital theater projector standard and projected in real time.
Online, hardware-based solutions can also use cameras in real time, for example for sports or live TV, and that requires the same kind of hardware switcher/editor/compositor.