The paper Towards Open Science in Acoustics: Foundations and Best Practices by Sascha Spors, Matthias Geier and Hagen Wierstorf, presented at the annual meeting of the German Acoustical Society (DAGA), discusses the open science approach and its application in acoustics. The paper and presentation, as well as their sources, are available as open access on GitHub.
H. Wierstorf, A. Raake, S. Spors, “Assessing localization accuracy in sound field synthesis,” The Journal of the Acoustical Society of America 141, pp. 1111–1119 (2017), doi:10.1121/1.4976061
It is published as open access (CC BY 4.0), so feel free to download the PDF version.
The following additional material is available as well:
Stimuli for the listening tests
Average and single results from the listening tests
Code to reproduce the figures
Sound field synthesis methods like Wave Field Synthesis (WFS) and Near-Field Compensated Higher Order Ambisonics synthesize a sound field in an extended area surrounded by loudspeakers. Because only a limited number of loudspeakers can be deployed in practice, the synthesized sound field includes artifacts. This paper investigates the influence of these artifacts on the accuracy with which a listener can localize a synthesized source. This was investigated in listening tests that used dynamic binaural synthesis to simulate different sound field synthesis methods and incorporated several listening positions. The results show that WFS is able to provide good localization accuracy in the whole listening area, even for a low number of loudspeakers. For Near-Field Compensated Higher Order Ambisonics the achievable localization accuracy depends strongly on the Ambisonics order, with large localization deviations for low orders, for which listeners sometimes reported a splitting of the perceived sound source.
A new version of our Sound Field Synthesis Toolbox for Matlab/Octave is available. The highlights of the new release include a correction of the absolute amplitudes in WFS and a new and improved point selection for HRTF/BRIR interpolation, which should now work for almost all 2D and 3D data sets.
Download the SFS Toolbox 2.3.0 and have a look at the online documentation to learn how to use it.
- default 2D WFS focused source is now a line sink
- improve point selection and interpolation of impulse responses
- speed up Parks-McClellan resampling method
- change default value of conf.usebandpass to false
- rename conf.wfs.t0 to conf.t0
- rename and improve easyfft() to spectrum_from_signal()
- rename and improve easyifft() to signal_from_spectrum()
- correct amplitude values of WFS and NFC-HOA in time domain
- fix default 2.5D WFS driving function in time domain
- add time_response_point_source()
- update amplitude and position of dirac in dummy_irs()
- fix missing secondary source selection in ssr_brs_wfs()
- add amplitude terms to WFS FIR pre-filter
- fix Gauss-Legendre quadrature weights
- add delay_offset as return value to NFC-HOA and ir functions
- fix handling of delay_offset in WFS time domain driving functions
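The spectrum_from_signal()/signal_from_spectrum() pair computes a single-sided spectrum of a real signal and transforms it back. A minimal Python sketch of such a round trip, assuming an rfft-based implementation with 1/N normalization (the actual Matlab/Octave code may differ in details):

```python
import numpy as np

def spectrum_from_signal(signal):
    """Return the normalized single-sided spectrum of a real signal
    together with its original length (needed for exact inversion)."""
    n = len(signal)
    return np.fft.rfft(signal) / n, n

def signal_from_spectrum(spectrum, n):
    """Invert spectrum_from_signal()."""
    return np.fft.irfft(spectrum * n, n)
```

Passing the original length back in is what makes the inversion exact for both even- and odd-length signals.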
At the 24th European Signal Processing Conference (EUSIPCO) we presented the contribution
Winter, F.; Spors, S. (2016): “On Fractional Delay Interpolation for Local Wave Field Synthesis.” In: Proc. of the 24th European Signal Processing Conference (EUSIPCO), 2016.
Additional Material can be found here.
Wave Field Synthesis aims at the accurate reproduction of a sound field inside an extended listening area which is surrounded by individually driven loudspeakers. Recently a Local Wave Field Synthesis technique has been published which utilizes focused sources as a distribution of virtual loudspeakers in order to increase the reproduction accuracy in a particular local region. Similar to conventional Wave Field Synthesis, this technique relies heavily on delaying and weighting the input signals of the virtual sound sources. As these delays are in general not an integer multiple of the input signals’ sampling period, delay interpolation is necessary. This paper analyses to what extent the accuracy of the delay interpolation influences the spectral properties of the synthesised sound field. The results show that upsampling the virtual source’s input signal is a computationally efficient tool that leads to a significant increase in accuracy.
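The upsampling idea can be illustrated with a small numpy/scipy sketch (my own illustration, not the paper's implementation): quantizing a fractional delay on an L-times oversampled grid reduces the rounding error from half a sample down to 1/(2L) samples.

```python
import numpy as np
from scipy.signal import resample_poly

def delay_rounded(x, delay):
    """Round the delay to whole samples: cheap, but with up to
    half a sample of quantization error."""
    d = int(round(delay))
    return np.concatenate([np.zeros(d), x])[:len(x)]

def delay_upsampled(x, delay, L=8):
    """Apply the delay on an L-times oversampled grid, then return
    to the original rate; the quantization error shrinks to 1/(2L)."""
    xu = resample_poly(x, L, 1)             # upsample by L
    d = int(round(delay * L))               # quantize on the fine grid
    xu = np.concatenate([np.zeros(d), xu])[:len(xu)]
    return resample_poly(xu, 1, L)          # back to the original rate
```

For a half-sample delay the rounded version can only hit 0 or 1 samples, while the upsampled version (with L = 8) realizes the delay exactly on the fine grid, leaving only a small residual error from the resampling filters.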
The paper Improved Driving Functions for Rectangular Loudspeaker Arrays Driven by Sound Field Synthesis by Sascha Spors, Frank Schultz and Till Rettberg derives improved driving functions for rectangular loudspeaker arrays by applying the equivalent scattering approach. Supplementary data has been published together with the paper.
A new release of the Two!Ears auditory model is available.
You can download the release on the Two!Ears website. Check out the installation guide.
This release mainly fixes bugs and adds the following two new features:
* Improve DnnLocationKS to better predict location for synthesized sound fields
* Results from a paired-comparison test investigating listening preference for WFS and stereo
Two!Ears is a project funded by the Seventh Framework Programme (FP7) of the European Commission as part of the Future and Emerging Technologies (FET) Open Programme “Challenging current thinking” (call FP7-ICT-2013-C).
Since our last announcement here at spatialaudio.net, two new releases of the Sound Field Synthesis Toolbox for Matlab/Octave have happened. The highlights of the new versions include new online documentation at http://matlab.sfstoolbox.org and new online theory documentation at http://sfstoolbox.org, which is directly linked from the corresponding code sections. Other big changes include a switch to the MIT license, more fractional delay methods, a new linear interpolation method for HRTFs, and an update to the default WFS driving functions.
Download the SFS Toolbox 2.2.1 and have a look at the online documentation to learn how to use it.
- fix delayoffset for FIR fractional delay filter
- add findconvexcone()
- simplify convolution()
- add linear interpolation working in the frequency domain
- fix pm option for delayline()
- fix impulse response interpolation for three points
- add the ability to apply modal weighting window to NFC-HOA in time domain
- change license to MIT
- update delayline() config settings
- add Lagrange and Thiran filters to delayline()
- replace wavread and wavwrite by audioread and savewav
- convolution() now accepts two matrices as input
- allow headphone compensation filter to be a one- or two-channel wav file
- add new online doc at http://matlab.sfstoolbox.org/
- fix greens_function_mono() for plane wave and 3D
- replace conf.ir.useoriglength by conf.ir.hrirpredelay
- update default WFS driving functions
- add links to equations in online theory at http://sfstoolbox.org
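For context on the Lagrange option added to delayline(): a Lagrange fractional delay filter is a short FIR whose coefficients are the Lagrange interpolation weights for the desired delay. A hedged Python sketch of the textbook formula follows; the toolbox's actual Matlab implementation may differ in details.

```python
import numpy as np

def lagrange_fd(order, delay):
    """FIR coefficients h[n] = prod_{k != n} (delay - k) / (n - k)
    of a Lagrange fractional delay filter. Accuracy is best when
    the delay lies near the filter center, around order / 2."""
    n = np.arange(order + 1, dtype=float)
    h = np.ones(order + 1)
    for k in range(order + 1):
        mask = n != k
        h[mask] *= (delay - k) / (n[mask] - k)
    return h
```

The filter is applied by ordinary convolution, e.g. np.convolve(x, lagrange_fd(3, 1.3)); integer delays reproduce a shifted unit impulse exactly, and the coefficients always sum to one.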
Digital signal processing underlies many techniques for the processing of audio signals. The lecture notes for our master's course Digital Signal Processing are available as an Open Educational Resource. The materials are provided in the form of Jupyter notebooks featuring computational examples written in Python 3. The materials can be inspected online.
The sources of the notebooks, as well as installation and usage instructions are available on GitHub. You can give the repository on GitHub a Star if you like the notebooks. You are invited to contribute by reporting errors and suggestions as issues or directly via Sascha.Spors@uni-rostock.de. I am also looking forward to ideas for new examples or topics.
The doctoral thesis
Frank Schultz (2016): “Sound Field Synthesis for Line Source Array Applications in Large-Scale Sound Reinforcement”, University of Rostock, URN: urn:nbn:de:gbv:28-diss2016-0078-1
was finally released.
Abstract: This thesis deals with optimized large-scale sound reinforcement for large audiences in large venues using line source arrays. Homogeneous audience coverage requires flat frequency responses for all listeners and an appropriate sound pressure level distribution. This is treated as a sound field synthesis problem rather than a directivity synthesis problem. The synthesis of a virtual source via the line source array then allows the problem to be interpreted as audience-adapted wavefront shaping. This is achieved either by geometrical array curving, by electronic control of the loudspeakers, or ideally by combining both approaches. The obtained results obviously depend on how accurately an array can emanate the desired wavefront. For practical array designs and setups this is affected by the deployed loudspeakers and their arrangement, their electronic control, and the potential occurrence of spatial aliasing. The influence of these parameters is discussed with the aid of array signal processing, revisiting the so-called wavefront sculpture technology and proposing so-called wave field synthesis as a suitable control method.
We are proud to announce release 0.3.1 of the Sound Field Synthesis Toolbox for Python.
This release features
- Calculation of the sound field scattered by an edge
- Various driving functions for sound field synthesis using an edge-shaped secondary source distribution
- Several refactorings, bugfixes and other improvements
The Python port of the Sound Field Synthesis Toolbox features the calculation of the synthesized sound field for various sound reproduction methods in the monofrequent case. Functionality for the visualization of sound fields, as well as a set of auxiliary functions, is included. The documentation provides installation instructions, usage examples and details on the API.
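As an illustration of the monofrequent case, the free-field Green's function of a point source, which underlies such computations, can be evaluated with plain numpy. The function name and signature below are illustrative only, not the package's API:

```python
import numpy as np

def point_source(omega, x0, grid, c=343):
    """Complex sound pressure of a monochromatic point source at x0,
    evaluated at the positions in grid (shape (..., 3)):
    G(x, w) = exp(-1j * w/c * |x - x0|) / (4 * pi * |x - x0|)."""
    r = np.linalg.norm(np.asarray(grid, dtype=float) - np.asarray(x0, dtype=float), axis=-1)
    return np.exp(-1j * omega / c * r) / (4 * np.pi * r)
```

The amplitude decays as 1/r, i.e. it halves for every doubling of the distance, while the phase advances with the wavenumber omega/c.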