Despite advances in visualisation and auralisation research over the last few decades, the fusion of the two technologies remains unsatisfactory. What often happens is that a world-class facility in one area (e.g. vision) has less than excellent facilities in the other (e.g. sound). This places severe limits on the realism of VR simulations and the virtual environments they create, and especially on how immersive they are. At Salford we have been working to integrate auralisation and visual simulation for the last few years. We are about to complete a near-million-pound upgrade of both our acoustic and visualisation research facilities, creating an infrastructure that will allow multi-modal virtual reality simulation with world-class acoustics and visualisation alike. This PhD research project will use these upgraded facilities to address interfacing questions, as well as the impact of configurations of differing complexity on realism. The aim of the doctoral research is to develop integration strategies suited to the differing requirements of applications such as teleconferencing, virtual studios, and virtual workspaces.
Some CDs sound great and some don’t: the sound quality of audio programme material is very variable. Both expert and naïve listeners are quite good at picking up these differences in sound quality. However, so far there are no metrics that can quantify whether a given music track is of good quality. This PhD project aims to define and extract quality features from audio signals that enable an automated rating of their acoustic quality. The technical aspects of the project will be underpinned by a substantial study of the human factors that determine perceived quality in sound and audio production. The anticipated outcomes are: 1) a framework that sets the relative importance of various objective acoustic measures of signal content in the context of human listening; 2) a digital tool that automatically rates and improves the audio quality of a given stream. Applications of the knowledge and technology span automated adjustment to different reproduction scenarios (e.g. radio speech in a car vs. live sound) through to archive recovery.
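As a toy illustration of the kind of objective signal measures such a rating framework might weigh, the sketch below extracts three simple features (crest factor, a crude clipping indicator, and spectral centroid). The feature set and thresholds are hypothetical examples, not the project’s actual metrics:

```python
import numpy as np

def quality_features(signal, sample_rate=44100):
    """Extract a few illustrative objective features that could feed an
    automated quality rating. Hypothetical feature set for illustration."""
    signal = np.asarray(signal, dtype=float)
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    # Crest factor: peak-to-RMS ratio; heavily compressed material is low.
    crest_factor_db = 20 * np.log10(peak / rms) if rms > 0 else float("inf")
    # Fraction of samples at or near full scale: a crude clipping indicator.
    clipping_ratio = float(np.mean(np.abs(signal) >= 0.999))
    # Spectral centroid: a rough correlate of perceived brightness.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    centroid_hz = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    return {"crest_factor_db": crest_factor_db,
            "clipping_ratio": clipping_ratio,
            "spectral_centroid_hz": centroid_hz}

# Example: one second of a clean 1 kHz sine at half scale.
t = np.arange(44100) / 44100.0
feats = quality_features(0.5 * np.sin(2 * np.pi * 1000 * t))
```

A real system would map many such features onto perceptual ratings gathered from listening tests, which is where the proposed human-factors study comes in.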
At the mixing stage of audio production, the professional option is to use expensive, specialist facilities. However, an increasing amount of sound production work is done ‘on the move’ using laptops, or in offices, so there is a need for better sound monitoring through headphones. Currently, headphone monitoring has inherent problems in both stereo and multichannel (e.g. 5.1) programme presentation.
This PhD acoustics project proposes to develop a critical monitoring system for headphone reproduction. The research will investigate advanced digital signal processing techniques alongside a study of human listening factors to produce a system that enables sound engineering work outside a controlled studio environment. The outcomes of the doctoral project will include a hardware or software application relevant to current trends in the sound production industry.
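One classic signal-processing building block for headphone monitoring is crossfeed: feeding a delayed, attenuated, low-pass filtered copy of each channel into the other, loosely mimicking the acoustic crosstalk heard when monitoring on loudspeakers. The sketch below is a deliberately naive version with illustrative parameter values; it is not the system proposed by the project:

```python
import numpy as np

def crossfeed(left, right, sample_rate=44100,
              attenuation_db=-6.0, delay_ms=0.3, cutoff_hz=700.0):
    """Naive stereo crossfeed for headphone monitoring (illustrative only):
    mix a delayed, attenuated, low-pass filtered copy of each channel
    into the opposite channel."""
    delay = int(round(delay_ms * 1e-3 * sample_rate))
    gain = 10 ** (attenuation_db / 20.0)
    # One-pole low-pass coefficient (head shadowing dulls high frequencies).
    a = np.exp(-2 * np.pi * cutoff_hz / sample_rate)

    def lowpass(x):
        y = np.empty_like(x)
        state = 0.0
        for i, s in enumerate(x):
            state = (1 - a) * s + a * state
            y[i] = state
        return y

    def delayed(x):
        # Shift right by `delay` samples, zero-padding the start.
        return np.concatenate([np.zeros(delay), x[:len(x) - delay]])

    out_left = left + gain * lowpass(delayed(right))
    out_right = right + gain * lowpass(delayed(left))
    return out_left, out_right

# Example: an impulse in the left channel leaks into the right channel
# only after the crossfeed delay.
left = np.zeros(100); left[0] = 1.0
right = np.zeros(100)
out_left, out_right = crossfeed(left, right)
```

A research-grade system would replace these fixed filters with individualised head-related transfer functions and room modelling, which is precisely the kind of refinement the doctoral work would investigate.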