I fairly recently finished and uploaded my second sound visualizer made in Blender. I have gotten a lot of good feedback on it from various places, but I would really like some more, particularly comments on the quality, concept, and execution. Feel welcome to ask any questions.
-Basic overview-
To make the visualizer I used the Wave modifier on a heavily subdivided plane. I set the speed to zero so the wave wouldn't travel on its own, then bound the height and narrowness values to different song frequencies using the Bake Sound to F-Curves operator. I then applied a procedural texture to displace the object's surface, which creates the fractal patterns seen during most of the video, and set the mapping of that texture to use an empty for its location. By baking all frequencies, with accumulation, to this empty, the texture's mapping moves with the overall intensity of the music. The background is simply the song's album art composited into the alpha and faded in with a Gaussian blur and a spherical blend texture. If you have any more questions, please ask below.
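For anyone who wants to script this instead of clicking through the UI, here is a minimal sketch of the first step, assuming Blender's Python API. The object name "Plane", the file path "//song.wav", and the 0-250 Hz band are placeholders, and `graph.sound_bake` is the scripted form of Bake Sound to F-Curves (the operator name and context requirements vary between Blender versions):

```python
# Hypothetical sketch: bind a Wave modifier's height to a frequency band.
import bpy

plane = bpy.data.objects["Plane"]
wave = plane.modifiers.new(name="Wave", type='WAVE')
wave.speed = 0.0  # freeze the wave so only the baked sound animates it

# Insert a keyframe so an F-curve exists for the bake to overwrite.
wave.keyframe_insert(data_path="height", frame=1)

# Bake a frequency band of the song onto that F-curve.
# NOTE: graph.sound_bake needs a Graph Editor context with the F-curve
# selected; running it from a script usually requires a context override.
bpy.ops.graph.sound_bake(
    filepath="//song.wav",
    low=0.0,     # lower bound of the band in Hz
    high=250.0,  # upper bound of the band in Hz
)

# The empty that drives the texture mapping is baked the same way:
# keyframe its location, select that F-curve, and bake the full frequency
# range with use_accumulate=True so the value keeps climbing with the music.
```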
Perfect visualizer!
I wanted to do this in the past, but I didn't know how…
So thanks for the description, but I still don't know how to do this. :spin:
What modifiers exactly did you use? Is it a Wave deform with the texture changed by sound frequencies, or some other modifier?
Last question: could you do this, or describe how they created it?
Wow, that one looks really cool. I'm not entirely sure how they did it, though. If I had to guess, I would say they wrote a complicated script to control a range of modifiers affecting different parts of the plane, which would be a difficult task with just the built-in tools. However, the description says they used a custom add-on and a lot of Fourier transforms. I'm not entirely well versed in those, but I know they are a method of converting between a function of time and a function of frequency. In this case, that means converting the frequencies and levels of the song into a function of time whose output is the horizontal shift and the height of a point on the plane. So if the song at one point in time has a level of 0.7 at a frequency of 13 kHz, the output would translate to a time value equal to that of the song, a horizontal shift to wherever on the plane 13 kHz is represented, and a displacement to a height equal to that level (or some related scale).

As far as the representation is concerned, there are a lot of possibilities. It is possible that each end of the plane is not an actual end, but a point where it hits a mask object or something similar that makes it disappear. That would allow the entire plane to have the song "baked" onto it beforehand and then scrolled across the viewing area at whatever speed follows the audio. Honestly, if you really want to know how this works, you will have to contact the creator; I can only speculate. Thanks for your feedback!
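To make the Fourier-transform idea concrete, here is a rough sketch of the analysis step, not the creator's actual code. It assumes numpy and scipy are available, and "song.wav" and the plane width are placeholders:

```python
# Sketch: turn one window of audio into per-frequency heights across a plane.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("song.wav")  # placeholder file name
if samples.ndim > 1:
    samples = samples.mean(axis=1)        # mix stereo down to mono

window = 2048                             # samples per analysis frame
t = 1.0                                   # time in seconds to analyze
start = int(t * rate)
frame = samples[start:start + window] * np.hanning(window)

spectrum = np.abs(np.fft.rfft(frame))     # magnitude per frequency bin
freqs = np.fft.rfftfreq(window, d=1.0 / rate)

# Map each bin to a horizontal position on the plane and a height.
plane_width = 10.0
x_positions = plane_width * freqs / freqs[-1]
heights = spectrum / spectrum.max()       # normalized 0..1 heights
```

Stepping `t` forward each frame and redrawing the heights would give exactly the scrolling-spectrum effect I described above.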
P.S.
I looked further into the comments on the YouTube link, and it seems the creator generated everything purely with Fourier transforms beforehand. In addition, they gave information on how to contact them and offered to show anyone the code.