The GUI is gone as soon as I pin the Data properties panel to the speaker.
The view docs hack works fine, but SoundDrive(ChannelX) doesn’t work as an expression here; something is broken… should it show up in bpy.app.driver_namespace.keys()? I just used the variable name.
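For what it’s worth, a name in a driver expression only resolves if it has been registered in bpy.app.driver_namespace before the driver is evaluated. A minimal sketch of the registration pattern (the SoundDrive signature and gain argument here are my own illustration, not the add-on’s actual code):

```python
def sound_drive(channel_value, gain=1.0):
    # Hypothetical driver helper: scale a baked channel value before it
    # reaches the driven property. The real add-on's function may differ.
    return channel_value * gain

def register_driver_functions():
    import bpy  # deferred import so the helper above works outside Blender too
    # A driver expression like SoundDrive(ChannelX) only resolves if the
    # name is present here *before* the driver is evaluated; verify with:
    #   "SoundDrive" in bpy.app.driver_namespace.keys()
    bpy.app.driver_namespace["SoundDrive"] = sound_drive
```

If the registration runs after the drivers have already been evaluated once, the expressions report an error until they are re-evaluated.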
About that lipsync paper: it is some sort of speech recognition… like I train Blender and then just say ‘add monkey’? Anyway, fun to see good old Moho switch layers being used here.
Hopefully the driver namespace issue is fixed… it still reports an error, since the code that fixes it runs afterwards.
Realtime display in panel.
Bad frequency bakes dealt with a little better.
Lipsync taken out for now. Moving to pose markers from timeline markers.
Totally overwrote the old speaker properties UI panels.
Visualising
Allow for up to 72 channels
Ability to squeeze the UI display.
Ability to show / hide the frequency column.
Custom channel names. This will help when the NLA is used to drive with multiple actions.
Cropping. Rebake to the selected frequencies.
Detect bad bakes and auto-set for crop.
Automatic driver adding using the overwritten view docs operator.
Select the speaker to drive from in the toolbox.
Equalizer view for the context speaker in the toolbox.
Live changes while animating from channel select box.
Although it works OK, I need to find out why I now have an error with register/unregister. It’s handled OK with try/except…
Drivers.
Some work done on driver panel.
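For anyone following along, the per-channel bakes come from Blender’s “Bake Sound to F-Curves” operator (bpy.ops.graph.sound_bake), one frequency band per channel. A rough sketch of how the bands could be split and baked; the logarithmic spacing and the 20 Hz–10 kHz range are my assumptions, not necessarily the add-on’s actual scheme:

```python
import math

def channel_bands(n_channels, f_min=20.0, f_max=10000.0):
    # Split the frequency range into logarithmically spaced (low, high)
    # edge pairs, one pair per channel.
    step = (math.log10(f_max) - math.log10(f_min)) / n_channels
    edges = [10 ** (math.log10(f_min) + i * step) for i in range(n_channels + 1)]
    return list(zip(edges[:-1], edges[1:]))

def bake_all_channels(filepath, n_channels=16):
    import bpy  # only available inside Blender
    for low, high in channel_bands(n_channels):
        # Needs a Graph Editor context with the target channel selected;
        # the operator replaces the channel's keys with baked sound samples.
        bpy.ops.graph.sound_bake(filepath=filepath, low=low, high=high)
```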
To Do’s
Will add facilities to normalize, envelope, etc.
The driver function can then be used to rebake the action if desired.
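The normalize item above could look something like this; a sketch over raw sample values, assuming the baked F-curve keys have already been read into a list (my own illustration, not the add-on’s code):

```python
def normalize(values, lo=0.0, hi=1.0):
    # Rescale baked samples into [lo, hi]; a flat channel (min == max)
    # is mapped to lo rather than dividing by zero.
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo] * len(values)
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]
```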
Lipsync.
Moved the NLA phoneme markers onto the action rather than the scene. Still very much a WIP.
Note the actions are baked from frame 1, while the speaker sound is added at the frame you are on; I will change this. The concept is to set everything up at frame 1 for use in the NLA later. For now you will need to move the sound clip to frame 1 manually.
I really didn’t know that there was access to the frequencies in an audio file, and that you can filter them, e.g. low-pass, resample, etc. I love that you can read them and then use them to drive objects. But is it possible to feed the resulting F-curve back into the sound? Does the sequencer API (Audaspace?) allow you to alter the audio at all?
In theory I think manipulating the sound is possible; the mixdown is an example of this, taking all the input speakers and producing one track. It’s a bit beyond the scope of what I’m doing atm.
Thank you for the feedback. When I asked Nexyon, he just said sure, there are filters, but didn’t elaborate. I also suggested doing a spatial surround mix from the 3D View with speakers, a bit tongue in cheek, but I thought it seemed cool.
I tried SpeakerTools5 and it did work, somewhat. I see some strange alignment issues on my system. For instance, I add a speaker object when the frame playback head is at 300. Then I bake with your tool and some frequencies are reported not to bake (nothing over 10k). And while your visual animated display starts at frame #0, the actual song does not start playing until frame #300, so there is a disconnect. I like the tool but I am still not confident that Blender processes audio correctly (or video, for that matter).
Yeah, I will re-sync it in the next version… thought about it at the last minute after uploading. For the moment, go to the NLA and drag the sound clip over to frame 1. When (if) I get it cleaned up, the concept is to set everything up with frame 1 as a reference and then position it with the sound clip in the NLA (one or many) via a UI.
The frequencies that are not baked have either blown up in the bake operator or there is no difference between minimum and maximum value in that channel.
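That min/max case is easy to flag in code; a small sketch of the check (my own illustration of the idea, assuming the baked keyframe values are available as a list):

```python
def is_bad_bake(values, eps=1e-9):
    # A channel is useless as a driver source if it baked no samples or
    # every sample is (nearly) the same value; flag it for crop / rebake.
    if not values:
        return True
    return max(values) - min(values) < eps
```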
Not quite sure what you mean here?
You can adjust the start and end sliders on the frequency display to display a narrower range.
If you mean mute the sound… I’m not currently adjusting the sound at all. As mentioned in a previous post to 3ptEdit, there are filters available. I’ll investigate.
EDIT… investigated… yeah, looks like it’s quite possible.
import bpy
import aud

device = aud.device()
f = aud.Factory(bpy.data.speakers["Speaker"].sound.filepath)
# play the sound unfiltered
# device.play(f)
# band-pass test (matches a channel in my test file: channel15 = low:2150, high:2861)
# note: highpass then lowpass keeps the 2150–2861 band;
# chaining lowpass(2150).highpass(2861) would reject everything
device.play(f.highpass(2150).lowpass(2861).volume(1.0))
Having a little problem. I dropped the speaker_tools v5 folder into the addons folder (of both 2.61 and 2.62) and it’s not showing up in the user preferences (either of 'em). Odd thing is, the EQ2 sample file works just fine. What am I missing here?
Thanks for the interest. The add-on is in Testing / anim. There have been a lot of changes since the EQ2 concept file; actions created with EQ2 will not be compatible with the UI of the new version.
If you drop it into the addons folder while Blender is running, hit F8 to reload scripts. If there is an error loading, it will be displayed in the Info / system console.
Was hoping to be further down the line with this. There have been a few posts lately re driving properties with sound, so I thought I’d pull the pin, find out what the bug was that I had going from 2.62 to 2.63, and post.
Here is a cut down version for 2.63 to have a look at.
Zipped this too early… going to post it anyway.
The preset stuff is a real WIP… I forgot to comment it out.
I’m looking at different setups for VOICE / MUSIC / SFX.
To use the right-click override of “Online Python Reference”, click the ON on the toolbox header.
The checkbox on the toolbox menu should be removed; it points to some render property.
Made a drivers list on toolbar. You can remove / edit from there.
Allowed for up to 92 frequency channels.