Abstract:
An immersive audio system oriented toward future communication applications is presented. The aim is to build a system in which the acoustic field in one room is recorded with a microphone array and then reconstructed, or rendered, in a different room using loudspeaker-array-based techniques. To reduce the enormous bandwidth that such a setup would otherwise require, our proposal relies on recent robust adaptive beamforming techniques and joint audio-video source localization to effectively estimate the original sources in the emitting room. The estimated source signals and the source localization information drive a Wave Field Synthesis engine that re-renders the acoustic field in the receiving room. The overall system performance is evaluated with a MUSHRA-based subjective test in a real scenario.