E.A.R: Evaluation of Acoustics using Ray-tracing

Hi all,

I have written a ray-tracer to simulate the propagation of sound. My primary goal was to give architects a tool to investigate the aural experience that unfolds as visitors navigate through a building. But I think it has evolved into a tool that is useful to anyone with an interest in computer graphics, film making and special effects.

I invite everyone to give it a try; it is called E.A.R: Evaluation of Acoustics using Ray-tracing. I tried to make things user-friendly and write documentation, but man, that’s a lot of work. If you have any suggestions or questions, you know where to find me.

The code is hosted on GitHub: https://github.com/aothms/ear
And there is a website: http://www.explauralisation.org/

I have created two example videos; the material can be downloaded from the website. But I’m sure you can create far better work with EAR.


The code is a combination of C++ and Python. The addon consists of a lot of bpy interface work to collect all the auditory information for the scene. A binary application then renders all these settings into a sound file.
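In pseudo-Python, the flow is roughly like this; the identifiers, settings format and binary name below are illustrative, not the actual EAR code:

```python
# Sketch of the two-stage architecture: bpy gathers scene data into a
# settings file, a stand-alone C++ binary renders it to audio.
import bpy
import subprocess
import tempfile

def export_and_render(out_wav):
    scene = bpy.context.scene
    with tempfile.NamedTemporaryFile("w", suffix=".ear", delete=False) as f:
        for ob in scene.objects:
            if ob.type == 'EMPTY':  # sound sources are plain empties
                x, y, z = ob.matrix_world.translation
                f.write("source %s %f %f %f\n" % (ob.name, x, y, z))
        settings_path = f.name
    # hand the exported scene description to the stand-alone renderer
    subprocess.call(["ear", settings_path, out_wav])
```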

Kind regards,
Thomas


Very nice, indeed!

Nice, I’m assuming this works with speaker objects?

This is really helpful, thanks!

Thanks :smiley:

Good remark, but no: the speaker object was an addition to Blender long after I started working on my project; the first working version was for Blender 2.56. My sound sources are just omni-directional empties. The speaker objects are a great way to integrate my addon even further into Blender, so I was thrilled to see them pop up, and I will definitely use them in future versions!
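To illustrate the difference (this printout is just a demo, not part of the addon):

```python
# Empties as omni-directional sources, next to what picking up speaker
# objects might look like in a future version.
import bpy

for ob in bpy.context.scene.objects:
    if ob.type == 'EMPTY':
        print("omni-directional source at", tuple(ob.location))
    elif ob.type == 'SPEAKER':  # speaker objects arrived in Blender 2.60
        print("directional speaker at", tuple(ob.location))
```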


Very interesting, thank you!

Would it also be possible to simulate specific sound barriers (e.g. sound propagation within a room, from different sources and ranges) and then absorption, reflection etc. through the barriers around it?

Hi! Well, as the name implies, EAR uses ray-tracing as an approximation for the propagation of sound. From a physical stance that is not entirely correct.

You can set up a scene according to your description and account for absorption, reflection and transmission, but diffraction (the effect that low frequencies in particular will also bend around corners) is not reproduced.
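Schematically, each surface hit splits a ray’s energy per frequency band; the order of operations below is my simplification for illustration, not necessarily EAR’s exact model:

```python
# Illustration only: splitting incident sound energy at a surface into
# absorbed, reflected and transmitted parts for one frequency band.
def split_energy(incident, absorption, transmission):
    remaining = incident * (1.0 - absorption)     # what the wall doesn't absorb
    transmitted = remaining * transmission        # continues through the wall
    reflected = remaining * (1.0 - transmission)  # bounces back into the room
    return reflected, transmitted

# e.g. a band hitting a wall that absorbs 30% and lets 10% of the rest through
reflected, transmitted = split_energy(1.0, 0.3, 0.1)  # -> 0.63, 0.07
```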

Nevertheless, in my first example, if I had used more absorbent materials the results would have been drastically different, and fairly accurate.

This is very cool.
It reminds me a bit of a program we have tried at our school (architecture) where you could visualize the movement of sound waves in 3D space (for concert halls etc.).
Even though you say that your implementation does not take account of diffraction (and therefore might not be physically correct), I feel your idea is better because the acoustics of a room is more about hearing than seeing.
The visualization was pretty nice for evaluating whether the sound was reflected correctly through a big hall, or to avoid sound delay effects, though. It was done as waves of small balls whose color changed to show the intensity of the sound.
Anyway, do you have a roadmap or further plans for this idea?

Thanks ejnaren. From what you describe I think you’re talking about Autodesk Ecotect, or perhaps Odeon? Anyway, using a graphical representation has its advantages as well. It gives a good overview of the distribution of sound throughout the entire space, rather than from a single perspective, as I do. Also, it is not sensitive to the kind of loudspeaker or headphone used.

At this moment I do not have a real roadmap. I worked on this for quite a long time, and I’m hoping to get some feedback and then reorient to see what is interesting to improve on. It could be geared towards film making, with surround sound output for example, or more towards architectural acoustics, or towards improving the accuracy/correctness. Also, needless to say, any contributions to the code, as well as other feedback/suggestions, are very welcome :slight_smile:

I created a video with stereo effects. Overall the stereo effects are rather subtle (I think because, with the enclosing walls, omni-directional reflections quickly get the upper hand). But if you watch it towards the end, I think the ball bouncing towards the middle is quite noticeable.

I’ve also updated the addon; you can find it on GitHub: https://github.com/aothms/ear/downloads

Aothms, this is totally awesome. I teach at an Architecture School and I can’t wait to start getting students to use this. They’re going to love it.
I’ve been hoping someone would do this for ages, as I am creating a suite of building analysis tools as addons to Blender as well. The first component is LiVi (http://arts.brighton.ac.uk/research/office-for-spatial-research/projects2/livi), which works as an interface for lighting simulation. I eventually wanted to have an acoustics component as well but was not sure how to do it, so, with your permission, I’d like to look at integrating EAR at some point in the future.
I have a couple of comments to make:
As my tools are mainly about visualising environmental data within Blender, I’d be very interested in how and which data can be extracted from EAR for representation (like the animation of data from a grid of listeners you have on your website). Also, my render and processing times are quite long, even on a well-specced machine, and it would be good to have the option to bring this down, even at the expense of accuracy, so that my students can iteratively play with design and material choices before doing a final, high-accuracy audio render. Lastly, I get Blender seg faults on my 64-bit Linux machine which seem to originate from the Python add-on. Let me know if you need a more comprehensive bug report.
Finally, many thanks for this and great work.

Great initiative, rareg! Great to see Blender used in academic circles!

The main reason things take so long is the sampling frequency (or temporal resolution) of 44.1 kHz. If you are primarily interested in a visualization of the data, a couple of frames per second would already be sufficient, speeding up the rendering process by several orders of magnitude (at this moment the sampling frequency is hardcoded in the source code).
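As a back-of-the-envelope calculation (the 25 fps figure is just an example):

```python
# The amount of work scales with the number of time bins, so dropping from
# audio rate to a visualization rate helps roughly linearly.
audio_rate = 44100  # Hz, currently hardcoded
viz_rate = 25       # frames per second, plenty for an animated grid
print("roughly %dx fewer time bins" % (audio_rate // viz_rate))  # -> 1764x
```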

I must add, though, that E.A.R is primarily geared towards providing an artistic indication rather than doing simulations. There are also ideas to extend the codebase with additional (more correct) models based on FEM and FDA. Another aspect is that the input and output of E.A.R are at this moment done using .wav files, whereas you might be primarily interested in Sound Pressure Levels? The visualization grid on my site is also done by reinterpreting the data in the .wav files.
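Such a reinterpretation could look like the sketch below: an RMS level referenced to 16-bit full scale, so dBFS rather than a calibrated SPL (the 16-bit mono/interleaved PCM assumption is mine):

```python
# Read a rendered .wav and reduce it to a single level in dB (full scale).
import math
import struct
import wave

def level_dbfs(path):
    w = wave.open(path, "rb")
    data = w.readframes(w.getnframes())
    w.close()
    samples = struct.unpack("<%dh" % (len(data) // 2), data)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-9) / 32768.0)  # guard against silence
```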

That said, an iterative workflow of using simulation results in the design process seems very interesting. I’d be happy to lend a helping hand with the integration process. Is there a specific workflow you have in mind that you would like to work towards?

The segfaults I have experienced with Blender usually involve race conditions, often with undo/redo/reload while the addon is enabled. If you could provide reproducible steps that lead to segfaults, please let me know.

This looks like a very interesting project :slight_smile:

Having this more tightly integrated into the standard Blender workflow, taking advantage of materials, speaker objects and the camera as a listener to provide realistic sound effects, would be great. Suddenly there would be little or no need to do a lot of sound post-production work for animations.

Two workflows would be very handy for me. The first would be to have a test tone as a sound source comprising the low, middle and high frequencies you’re representing. From this test tone a relative dB level is output for each listener position. As the test tone can be very short, and relatively few samples of it are required (actually only covering a single wave at the lowest frequency?), this would be very quick and would allow students to test for sound concentration, shadows and distortion in architectural spaces. I appreciate that the code is more geared towards artistic representation, but what is key for me is not decimal-place accuracy but the ability to quickly visualise the environmental consequences of design decisions, and as long as those consequences look reasonable they serve their function for me.
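To give an idea of the kind of test tone I mean (the 125/1000/8000 Hz band centres are just my guesses, not necessarily the frequencies EAR represents):

```python
# Write a mono test tone: one low, one mid and one high sine, summed, and
# lasting exactly one period of the lowest frequency.
import math
import struct
import wave

RATE = 44100
BANDS = (125.0, 1000.0, 8000.0)
n = int(RATE / BANDS[0])  # one full wave at the lowest frequency

frames = b"".join(
    struct.pack("<h", int(10000.0 / len(BANDS) * sum(
        math.sin(2.0 * math.pi * f * i / RATE) for f in BANDS)))
    for i in range(n))

w = wave.open("testtone.wav", "wb")
w.setnchannels(1)
w.setsampwidth(2)   # 16-bit samples
w.setframerate(RATE)
w.writeframes(frames)
w.close()
```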

The second workflow would be very similar to the one you’ve already put together; it would just be handy to initially shorten the rendering and processing times so that quicker feedback could be generated for early design decisions. This feedback should be based on fairly long anechoic recordings of typical usage scenarios like speech, music etc., as students will appreciate the aural quality of a space more with these accessible examples. Final rendering could then be done with full sampling. In terms of presenting their work, input audio samples of at least a minute should be processable.

Acoustics is not a specialism of mine so forgive me if some of this is off the mark.

In terms of errors: in Blender 2.61 I get the following when attempting to run the EAR executable:
```
location: <unknown location>:-1
Traceback (most recent call last):
  File "/home/ryan/.blender/2.61/scripts/addons/render_EAR/__init__.py", line 419, in execute
    if not run_test():
TypeError: 'NoneType' object is not callable
```

In 2.62, example 1 runs but my terminal repeats the error message below:
```
Traceback (most recent call last):
  File "/home/ryan/.blender/2.62/scripts/addons/render_EAR/__init__.py", line 626, in draw_callback_px
    if ob.is_listener: draw('ear', ob_location, r, s)
  File "/home/ryan/.blender/2.62/scripts/addons/render_EAR/__init__.py", line 604, in draw
    m = s.region_3d.perspective_matrix
AttributeError: 'NoneType' object has no attribute 'perspective_matrix'
```

Example 2 segfaults Blender if I select the door sound source. After it segfaults in this way, Blender segfaults on opening example 2.

Example 3 segfaults upon opening.

I was thinking recently that audio raytracing should be possible, so your post put a smile on my face :wink:

Do you have any plans or ideas on generating sounds? Like, on collision of object/material 1 with object/material 2, generate tones/frequencies…

I don’t have much of an idea how this could work in detail; I just once saw a video from a university (I think) where they generated the sound for a breaking Ceran/porcelain (?) plate.

Thank you for your work.

Hi monsterdog. I agree! With some minor adjustments the addon could be much more integrated and become part of a regular Blender workflow. Feel free to think along with me on this and provide a helping hand :slight_smile:

Hi rareg. I use the addon myself as well to design some spaces, and in doing so I truly experience the need for a low-fidelity mode to help iteratively adjust a design. I will see what I can do to realize such a mode, but please bear with me, because at the moment I have some other priorities as well. I hope we stay in touch on this.

The segfaults you are experiencing are much more severe than I expected. I have a suspicion they arise from the way the addon hooks onto the viewport draw logic. I’ll look into it, but I have experienced such segfaults myself on Linux. Ideally a Python addon should not be able to crash Blender, but apparently that is the reality. If you want to, you can see for yourself whether disabling the viewport hooks prevents the segfaults. You can do this by editing the __init__.py file and removing lines 645-652 (numbers based on the most recent version of the script).
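For reference, those hooks are registered roughly like this (my function and variable names here are illustrative); besides deleting the registration, a None-guard like the one below might already avoid the AttributeError from your 2.62 traceback:

```python
# Sketch of a viewport draw hook in bpy and how to disable it.
import bpy

def draw_callback_px(context):
    space = context.space_data
    # region_3d can be None, e.g. while the area is not a live 3D viewport
    if space is None or space.region_3d is None:
        return
    m = space.region_3d.perspective_matrix
    # ... drawing of the source/listener icons would happen here ...

handle = bpy.types.SpaceView3D.draw_handler_add(
    draw_callback_px, (bpy.context,), 'WINDOW', 'POST_PIXEL')
# disabling the hook for the segfault experiment:
# bpy.types.SpaceView3D.draw_handler_remove(handle, 'WINDOW')
```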

Hi bashi. For my third example I generated sounds from the location curves of a Blender Game Engine physics object: http://www.explauralisation.org/example3/ . But that still simply points to a regular sound file on my hard drive, so it’s not the actual synthesis of sounds like your breaking porcelain example. I do have some research on the synthesis of scratching, rolling and breaking sounds, but no plans to incorporate it in this addon; quite complex stuff…
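The idea is along these lines (not the actual addon code, just a sketch of pulling impact moments out of a baked animation):

```python
# Detect bounce frames by watching the vertical velocity change sign.
import bpy

def impact_frames(ob, scene):
    frames = []
    prev_z, prev_v = None, 0.0
    for f in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(f)
        z = ob.matrix_world.translation.z
        if prev_z is not None:
            v = z - prev_z
            if prev_v < 0.0 <= v:  # was falling, now rising: a bounce
                frames.append(f)
            prev_v = v
        prev_z = z
    return frames  # each frame can then trigger the regular sound file
```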

Thanks for the information, and yes, it’s probably too much/too complex for a Blender plugin. Nevertheless your plugin is a great tool.

No rush from my side. I wouldn’t start teaching with it till next term (October) anyway, and besides, what you’ve done is already brilliant and extremely useful.
I’m generally busy writing LiVi but may poke around in your Python code, as well as try out your suggestion, to see if it fixes the seg faults. I noticed there are no contact details on your website, but if you email me at the address on the LiVi website I can then email you back any code changes I do make.
Cheers
rareg

I am SO PUMPED about this project!!

Hi bmud, thanks! Be sure to let me know if you take it for a test drive, or have any questions.

I know how to install an addon. I’m pretty darn sure I know how to do that. I’ve done it before.

File > Preferences > Addons > [Install Addon…] and pick the Python file; my best guess was the __init__.py.

That’s the procedure that I’m expecting to work. As far as I can tell, it isn’t getting installed. Fix that first.

Then… after I dropped it into ~/Library/Application Support/Blender/2.6x/scripts/addons/ I noticed that the name of the addon includes periods in the acronym, “E.A.R.”, so searching for “EAR” returns nothing. You might want to fix that too.
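For what it’s worth, the string the addon search matches against is the name field in the bl_info block at the top of __init__.py; something like the following (I’m guessing at the actual field values) would make “EAR” findable:

```python
# bl_info is what Blender's addon search matches against; dropping the
# periods from the name fixes the search problem.
bl_info = {
    "name": "EAR: Evaluation of Acoustics using Ray-tracing",
    "author": "aothms",
    "blender": (2, 6, 2),
    "category": "Render",
}
```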

Just nitpicky things, I know, but they are make-or-break issues. I won’t be able to convince a novice to jump through these sorts of hoops.