Acoustic Source Separation



Researchers at the Centre for Communications Systems Research, University of Surrey, have developed a novel method for isolating multiple sound sources in a noisy environment. Sound sources can be individually separated, emphasized, suppressed, or modified and then recombined in any 3D spatial configuration. All processing is done in real-time and no prior knowledge of the number or location of the sources is required.



The intensity vector method offers several important advantages over conventional blind source separation (BSS) techniques.



Number of sources

  Conventional BSS techniques: The number of sound sources that can be separated is limited to the number of microphones.

  Intensity vector analysis: An unlimited number of directivity functions can be calculated, so more sources than microphones can be separated, although performance degrades as the number of sources grows. For a large number of sources, it may be more practical to use fixed directivity functions for each window, as calculating them individually would be computationally demanding.

Moving sources

  Conventional BSS techniques: Independent component analysis requires the sound sources to be stationary.

  Intensity vector analysis: Real-time separation is achieved within 25 ms, so the system can lock on to moving sound sources.

Microphone array size

  Conventional BSS techniques: The accuracy of time-delay-of-arrival techniques generally increases with the size of the microphone array.

  Intensity vector analysis: The physical separation of the microphones in the array must be small compared to the acoustic wavelength in air. Source separation performance therefore improves with smaller microphone arrays, such as those manufactured using MEMS technology.
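The wavelength constraint above can be made concrete with a short calculation. The sketch below is illustrative only: the speed of sound, the half-wavelength spacing rule, and the 8 kHz speech bandwidth are standard acoustics assumptions, not figures taken from the patented method.

```python
# Illustrative spatial-aliasing check for a small microphone array.
# Pressure-gradient estimation assumes the mic spacing d is small
# compared to the acoustic wavelength: d << lambda = c / f.

C_AIR = 343.0  # speed of sound in air at 20 degrees C, in m/s

def min_wavelength(f_max_hz: float) -> float:
    """Shortest wavelength present up to the maximum frequency of interest (m)."""
    return C_AIR / f_max_hz

def max_spacing(f_max_hz: float, fraction: float = 0.5) -> float:
    """Largest mic spacing staying below `fraction` of the shortest wavelength (m)."""
    return fraction * min_wavelength(f_max_hz)

# For speech bandwidth up to 8 kHz:
print(round(min_wavelength(8000.0) * 100, 1))  # wavelength in cm -> 4.3
print(round(max_spacing(8000.0) * 100, 1))     # max spacing in cm -> 2.1
```

At 8 kHz the wavelength is only about 4.3 cm, so spacings of roughly 2 cm or less are needed, which is why centimetre-scale MEMS arrays suit this technique well.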



  • Hearing aids: Listening to selected sounds/conversations and improved speech intelligibility for the hearing-impaired.
  • Teleconferencing: Speaker localization, volume equalization or selective enhancement.
  • Mobile phones: Environmental noise and interference suppression.
  • Speech recognition: Pre-processing to improve signal-to-noise ratio.
  • Broadcasting: Real-time audio capture and synthesis for 3D TV productions, ensuring spatial synchronicity of sound and picture, including multi-view rendering.
  • Audio post production: Audio personalization, automated dialogue replacement, volume balancing.
  • Immersive remote collaboration: Selective transmission of multiple speech sounds and their processing for 3D reproduction.
  • Automotive: Noise and acoustic echo cancellation.
  • Surveillance: Automatic detection of sound sources and camera zooming. Automatic keyword / threat detection in noisy, multi-speaker environments such as airports.
  • Biometrics: Pre-processing to improve speaker identification.


Available for licence


IP Status

Patent Pending



Blind source separation (BSS) is performed using acoustic pressure gradients derived from a small array of condenser microphones, or obtained directly from commercial B-format tetrahedral microphones. Time-frequency representations of the pressure and pressure gradient signals are calculated using a modified discrete cosine transform or fast Fourier transform. These are used to derive intensity vector directions. Beamforming is applied using a directivity function defined for each sound source and time-frequency bin. Finally, individual time-domain signals are obtained using an inverse modified discrete cosine transform or inverse fast Fourier transform.
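The pipeline above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented implementation: the FFT frame length, the Gaussian directivity window around a chosen look direction, and the 2D (horizontal-plane) simplification are all assumptions made for clarity.

```python
import numpy as np

def separate(w, x, y, look_azimuth_deg, width_deg=30.0, n_fft=1024, hop=512):
    """Emphasise one direction given 2D B-format-style signals:
    w is the pressure signal, x and y are pressure-gradient signals.
    Returns a time-domain estimate of the source in the look direction."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(w) - n_fft) // hop
    out = np.zeros(len(w))
    target = np.deg2rad(look_azimuth_deg)
    width = np.deg2rad(width_deg)
    for m in range(n_frames):
        s = m * hop
        # Time-frequency representations of pressure and pressure gradients.
        W = np.fft.rfft(win * w[s:s + n_fft])
        X = np.fft.rfft(win * x[s:s + n_fft])
        Y = np.fft.rfft(win * y[s:s + n_fft])
        # Active intensity vector direction for each frequency bin.
        ix = np.real(np.conj(W) * X)
        iy = np.real(np.conj(W) * Y)
        theta = np.arctan2(iy, ix)
        # Directivity function (assumed Gaussian): soft gain per bin,
        # peaking at the look direction, with angle wrapped to [-pi, pi].
        diff = np.angle(np.exp(1j * (theta - target)))
        gain = np.exp(-0.5 * (diff / width) ** 2)
        # Inverse transform and overlap-add back to the time domain.
        out[s:s + n_fft] += win * np.fft.irfft(gain * W, n_fft)
    return out
```

Steering the directivity function at a source's azimuth passes its time-frequency bins largely unchanged, while bins whose intensity vectors point elsewhere are attenuated; running the same loop with several look directions separates several sources from one pass over the intensity data.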


For more information, videos and downloads relating to this technology please go to

Patent Information:
For Information, Contact:
Will Mortimore
University of Surrey
© 2024. All Rights Reserved.