audio/video out of sync, strange behavior

Every time I try to make a video in the video sequence editor, the audio is out of sync with the video. If the imported video contains audio data, they are out of sync (and I have changed the framerate so that the two strips match). If I’m importing audio and video separately, when I move one audio strip around, some of the other clips move too! That’s not to say that the actual strips visibly move on the timeline. But when played back, the other audio strips occur at different times than they did before. So basically the VSE is lying to me about what is happening when. The representation isn’t accurate, in some cases not remotely accurate. Pretty bizarre. Anyone have any idea what the problem is? It drives me nuts when I’m trying to throw together what should be a simple sequence.

Pretty bizarre indeed. Can you post a screenshot of one of your projects where this is happening?
I can think of several reasons why this is happening. For instance, is the AVsync mode on?
Which Blender version are you using and on what OS? Have you tried a stable version?
What’s the audio codec you’re importing? How about converting the audio to PCM and then importing it?
As you say, such a task should be very simple, and it actually is.
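By the way, if you want to check the sync setting from the Python console instead of digging through the UI, something like this should do it (just a sketch from the Blender Python API as I remember it, so double-check the property names in your version):

import bpy

scene = bpy.context.scene

# Current playback sync mode: 'NONE' (no sync), 'FRAME_DROP' or 'AUDIO_SYNC' (AV-sync)
print(scene.sync_mode)

# Turn AV-sync on so playback stays locked to the audio
scene.sync_mode = 'AUDIO_SYNC'

# While you're at it, confirm the project frame rate you set for the strips
print(scene.render.fps, scene.render.fps_base)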

Okay, I didn’t have AV-sync on, nor did I know it existed. Now timing is good, but will video frames be dropped in the actual render? I don’t want that.

No, the final render will be fine.

Okay, so to determine where the video clips start and stop (in terms of frames), I should watch it in No-sync mode, and then to position the strips, I should switch to AV-sync, am I understanding correctly?

You’ve changed the subject from an audio-related error to playback issues. These are different things which may or may not be related (depending on the video codec used, system CPU, etc.).
Trust me, you never ever want to use the no-sync mode to edit a project, because the audio will most definitely be out of sync and you’ll end up wasting time and effort. So the sync mode should always be set to AV-sync.
Now, if you observe frame dropping or other lags, then either you’re using highly compressed HD or full-HD video as input, or your system is unable to decode the video in real time for smooth playback (or both). In this case using proxies is the recommended solution.
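If it comes to proxies, you don’t even have to click through every strip; a rough sketch from the Python console (property names as in the API I know, so verify in your build):

import bpy

scene = bpy.context.scene

# Enable 25% proxies on every movie/image strip in the sequencer
for strip in scene.sequence_editor.sequences_all:
    if strip.type in {'MOVIE', 'IMAGE'}:
        strip.use_proxy = True
        strip.proxy.build_25 = True

# Then rebuild them (run this with the sequencer area active,
# or just use the equivalent entry in the Strip menu)
bpy.ops.sequencer.rebuild_proxy()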

As NemoD mentioned above, even if you notice delays the final render will be just fine. You can test this by setting the resolution to 20% and rendering a part of your project.
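If you prefer to script that little test rather than change the settings by hand, a minimal sketch would be (the frame range and output path here are just placeholders):

import bpy

scene = bpy.context.scene
scene.render.resolution_percentage = 20   # tiny render, only to check the sync
scene.frame_start = 1                     # placeholder: pick any short range
scene.frame_end = 250
scene.render.filepath = "//sync_test_"    # hypothetical output next to the .blend
# Make sure the output format can carry audio (e.g. an FFmpeg container
# with an audio codec set), otherwise there is nothing to check against.
bpy.ops.render.render(animation=True)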

Interesting. My system is a quad-core AMD A10 with 8 GB of RAM. I’d think it would handle the task with ease, but it’s worth considering that I have many applications open at the same time, such as IRC, a media player, an internet browser, an image editor, and a file-uploading utility (though none of them are, or should be, running any heavy processes at the time I am editing video sequences).

You can test this by setting the resolution to 20% and rendering a part of your project.

BRILLIANT! Will do!

Due to caching (a background task of storing frames after display), no-sync can sometimes appear to play back more smoothly, but it will tend to drift over time (during general playback).

So here’s the next question I have, then. In one of my image strips, it appears (I cannot verify) that the dropped frames are taken off the end of the strip, rather than interspersed. If this is the case, how do I adjust timing so as to know that audio and video are synced to my liking, if frames are taken from the end of a strip? This would mean that the time the video strip takes to play is accurately represented, but which frames render when is not.

Oh, I have never seen video truncated to match the length. Is that how the new function works? I thought it was like the speed effect; apparently not, then. Try a really aggressive frame rate, like 10 fps over a 1-minute sequence (render it out with audio and bring it back) to check how bad this is.
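A quick sketch of that stress test from the Python console (the FFmpeg output settings and file name here are my own assumptions, adjust to taste):

import bpy

scene = bpy.context.scene
scene.render.fps = 10                     # deliberately harsh frame rate
scene.frame_start = 1
scene.frame_end = 600                     # roughly 1 minute at 10 fps

# Render to a container that carries the audio so it can be brought back in
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'
scene.render.ffmpeg.audio_codec = 'AAC'
scene.render.filepath = "//fps_test"      # hypothetical output name
bpy.ops.render.render(animation=True)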

Will do, and I forgot about blendercomp’s advice to lower resolution, too.

I’ve learned to expect all kinds of problems with video files, but if this is an image sequence we’re talking about here, just create proxies and everything will be good to go. If it still fails, then just convert the audio to PCM and work with that. You could even render out the audio separately and then remux it with the video. The possibilities are endless, but this is way over the top for such a simple task.
You still haven’t specified the Blender version and OS. That always helps, as there are more options on the Linux side of things.
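For the PCM conversion and the remux step, command-line ffmpeg is the usual tool; here is a rough sketch driven from Python, assuming ffmpeg is installed and with placeholder file names:

import subprocess

src = "footage.mp4"   # placeholder input file

# 1) Extract the audio as uncompressed PCM (a plain WAV) to feed into the VSE
subprocess.run(["ffmpeg", "-i", src, "-vn", "-acodec", "pcm_s16le", "audio.wav"],
               check=True)

# 2) Later, mux a separately rendered audio mixdown back onto the rendered video
subprocess.run(["ffmpeg", "-i", "render.mp4", "-i", "mixdown.wav",
                "-c:v", "copy", "-c:a", "aac",
                "-map", "0:v:0", "-map", "1:a:0", "remuxed.mp4"],
               check=True)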

Finally, your system is a high-end one and this should be a trivial task regardless of what other apps you’re running.

Please post a screenshot where the settings used are visible, or render out a very low-res video (say 10%) and upload it somewhere so that we can have a look at it.