A Multiple Input Single Output Model for Rendering Virtual Sound Sources in Real Time
by Panayiotis G. Georgiou, Chris Kyriakakis
Abstract:
Accurate localization of sound in 3-D space is based on variations in the spectrum of sound sources. These variations arise mainly from reflection and diffraction effects caused by the pinnae and are described through a set of Head-Related Transfer Functions (HRTFs) that are unique for each azimuth and elevation angle. A virtual sound source can be rendered at the desired location by filtering with the corresponding HRTF for each ear. Previous work on HRTF modeling has mainly focused on methods that attempt to model each transfer function individually. These methods are generally computationally complex and cannot be used for real-time spatial rendering of multiple moving sources. In this work we provide an alternative approach, which uses a multiple-input single-output state-space system to create a combined model of the HRTFs for all directions. This method exploits the similarities among the different HRTFs to achieve a significant reduction in model size with minimal loss of accuracy.
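The per-ear filtering step described above can be sketched in a few lines. This is a minimal illustration of binaural rendering by convolution with head-related impulse responses (HRIRs, the time-domain counterparts of HRTFs), not the paper's state-space model; the signal and HRIRs below are random placeholders standing in for measured data.

```python
import numpy as np

def render_virtual_source(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual position by convolving it
    with the HRIR measured for that direction, once per ear."""
    left = np.convolve(mono, hrir_left)    # full convolution, len N + M - 1
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)  # shape (2, N + M - 1)

# Placeholder data; real HRIRs are measured per azimuth/elevation.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)   # N = 1024 mono samples
hrir_l = rng.standard_normal(128)    # M = 128-tap impulse responses
hrir_r = rng.standard_normal(128)
binaural = render_virtual_source(signal, hrir_l, hrir_r)
print(binaural.shape)  # (2, 1151)
```

A separate convolution per direction is exactly the per-filter cost the paper's combined multiple-input single-output model is designed to avoid when many moving sources must be rendered at once.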
Reference:
A Multiple Input Single Output Model for Rendering Virtual Sound Sources in Real Time (Panayiotis G. Georgiou, Chris Kyriakakis), In Proceedings of ICME 2000, 2000.
Bibtex Entry:
@inproceedings{georgiou_multiple_2000,
	address = {New York, NY},
	title = {A {Multiple} {Input} {Single} {Output} {Model} for {Rendering} {Virtual} {Sound} {Sources} in {Real} {Time}},
	url = {http://ict.usc.edu/pubs/A%20MULTIPLE%20INPUT%20SINGLE%20OUTPUT%20MODEL%20FOR%20RENDERING%20VIRTUAL%20SOUND%20SOURCES%20IN%20REAL%20TIME.pdf},
	abstract = {Accurate localization of sound in 3-D space is based on variations in the spectrum of sound sources. These variations arise mainly from reflection and diffraction effects caused by the pinnae and are described through a set of Head-Related Transfer Functions (HRTFs) that are unique for each azimuth and elevation angle. A virtual sound source can be rendered at the desired location by filtering with the corresponding HRTF for each ear. Previous work on HRTF modeling has mainly focused on methods that attempt to model each transfer function individually. These methods are generally computationally complex and cannot be used for real-time spatial rendering of multiple moving sources. In this work we provide an alternative approach, which uses a multiple-input single-output state-space system to create a combined model of the HRTFs for all directions. This method exploits the similarities among the different HRTFs to achieve a significant reduction in model size with minimal loss of accuracy.},
	booktitle = {Proceedings of {ICME} 2000},
	author = {Georgiou, Panayiotis G. and Kyriakakis, Chris},
	year = {2000}
}