Telepresence projects include:
Video-based reconstruction offers the potential to faithfully communicate both appearance and attention. Video conferencing can faithfully communicate what someone looks like, but not what they are looking at; immersive collaborative virtual environments do the opposite. We are both building and studying the use of free-viewpoint 3D video reconstruction from multiple 2D video streams, and have implemented both view-dependent volumetric and view-independent polyhedral approaches to reconstructing form.
We have implemented reconstruction algorithms and multi-stream capture that achieve real-time frame rates when reconstructing from more than ten HD colour cameras. We have also built a unique test facility, the Octave, which supports both immersive display and capture for this research.
3D reconstruction using image manipulation and polyhedral-based reconstruction
We create 3D models at real-time frame rates using our parallelised implementation of the EPVH (exact polyhedral visual hull) algorithm. Images are captured and backgrounds segmented, both in real time, by our multi-video capture and manipulation system.
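The silhouette-based reconstruction described above can be illustrated with a simple voxel-carving sketch: every voxel that projects inside the foreground silhouette of every camera is kept, and the rest are carved away. This is a minimal volumetric stand-in for the idea, not our parallelised polyhedral implementation; all function and parameter names here are illustrative.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_res=32, bounds=(-1.0, 1.0)):
    """Voxel visual hull: keep voxels whose projection lands inside every
    camera's foreground silhouette. Inputs are binary silhouette images and
    3x4 camera projection matrices (hypothetical, for illustration)."""
    lo, hi = bounds
    axis = np.linspace(lo, hi, grid_res)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    # Homogeneous world coordinates of every voxel centre, shape (4, N)
    pts = np.stack([xs.ravel(), ys.ravel(), zs.ravel(),
                    np.ones(grid_res ** 3)])
    occupied = np.ones(grid_res ** 3, dtype=bool)
    for sil, P in zip(silhouettes, projections):
        h, w = sil.shape
        uvw = P @ pts                              # project all voxels at once
        u = (uvw[0] / uvw[2]).round().astype(int)  # pixel column
        v = (uvw[1] / uvw[2]).round().astype(int)  # pixel row
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        fg = np.zeros_like(occupied)
        fg[inside] = sil[v[inside], u[inside]]
        occupied &= fg                             # carve voxels outside this silhouette
    return occupied.reshape(grid_res, grid_res, grid_res)
```

Each camera's silhouette defines a cone of possible occupancy; intersecting the cones from many well-placed cameras tightens the hull around the true shape, which is why camera placement matters so much in the study described below.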
We have been studying the impact of camera placement on the ability of vision-based reconstruction to faithfully communicate appearance and attention.
The reconstructions displayed above were created from images captured within our Octave display and capture facility.
Silhouette cones used to reconstruct the shape of what is captured by multiple cameras
Impact of good camera placement on shape reconstruction
Impact of poor camera placement on shape reconstruction
A set of four experiments was carried out to begin to answer these questions. The results show that such spaces are likely to be found both comfortable and useful places in which to be and work.
The use of interactive elements in buildings is on the rise in design circles, and such buildings are proving popular in both private and public projects. This research studied the effects that interactive, or seemingly intelligent, architecture might have on its inhabitants in terms of experience and productivity, through the use of virtual environments. To do so, a series of research questions attempted to identify:
This research concluded that interactive or seemingly intelligent architecture is architecture that can react to its users and change its properties (colour, shape, sound …) in real time, and that virtual environments are a suitable test medium because people tend to behave in them as they would in real life, whether the environments are lab-based or online. The experiments performed in this research all confirmed the hypothesis that seemingly interactive or intelligent architecture has a positive effect on users' performance: over 90% of test subjects performed better when the walls around them moved. The experiments also showed that such architecture is more appealing for people to stay, work and socialise in.
All this suggests that the presence of interactive or intelligent elements within a building is likely to have positive effects on its users, increasing the productivity, comfort and sociability of its inhabitants. The transferability of these results to the real world is yet to be tested, but the experiments reported here, coupled with the increasing popularity of such buildings in real life, suggest that such tests are a very viable option for future experimentation.
Eye-catching was an EPSRC-funded project that sought to communicate eye gaze across a distance through video and computer graphics technology. The partners were the University of Salford, UCL, the University of Reading, the University of Roehampton, Electrosonic, Visual Acuity, Avanti Screen Media and SGI.
Environmental, economic and social pressures push for a reduction in dependence on travel, yet in many situations there is no substitute for a face-to-face meeting, despite advances in CSCW. Eye gaze is important in many interactions, for example in building trust and rapport, and in gauging focus of attention during conversational turn-taking or when discussing shared artefacts. Conventional video conferencing does not attempt to support communicational eye gaze, but approaches are emerging that appear to address this when reproducing a round-the-table experience across a distance. The Eye-catching project freed participants from their seats and allowed them to move around a fully shared space while still supporting communicational eye gaze. This was achieved by integrating eye-tracking into a custom-built immersive collaborative environment (EyeCVE), so that life-sized avatars, which follow the movements and gaze of remote people, are projected into the local space. Our experiments gauged people's ability to discern what and who others were looking at, looked for signs of natural eye-gaze behaviour, and measured task performance. Scenarios included a round-the-table interview, gauging what or who was looked at, and an object-focussed puzzle. To compare our approach to video conferencing, we painstakingly arranged high-definition cameras and large screens, carefully picking and adjusting lenses to provide optimal alignment across the remote spaces.
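Driving an avatar's gaze as described above requires combining two streams: the tracked head pose and the eye-tracker's gaze direction in the head's frame. The sketch below shows the core transform and a crude angular-cone test for deciding which shared-space target is being looked at. It is a minimal illustration under assumed conventions, not EyeCVE's actual API; all names are hypothetical.

```python
import numpy as np

def world_gaze_ray(head_pos, head_rot, gaze_dir_head):
    """Transform an eye-tracker gaze direction (expressed in the head frame)
    into the shared world frame using the tracked head pose.
    head_rot is a 3x3 rotation matrix from the head tracker (assumed input)."""
    d = head_rot @ np.asarray(gaze_dir_head, dtype=float)
    return np.asarray(head_pos, dtype=float), d / np.linalg.norm(d)

def looked_at(origin, direction, targets, cone_deg=5.0):
    """Return the name of the target closest to the gaze ray, if any lies
    within a small angular cone -- a crude stand-in for gaze-target
    classification in experiments that ask 'who is being looked at?'."""
    best, best_angle = None, np.radians(cone_deg)
    for name, pos in targets.items():
        to_target = np.asarray(pos, dtype=float) - origin
        to_target /= np.linalg.norm(to_target)
        angle = np.arccos(np.clip(direction @ to_target, -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best
```

In a real system the eye-in-head direction would also need calibration per user, and the cone threshold traded off against tracker noise; this sketch only shows the geometric core.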
Evaluate the role of eye-gaze in tele-communication so as to better design future communication technologies.
All the original objectives were met as we:
While both the end-to-end latency and the update rate of EyeCVE were high enough that they could have been perceived, there is no evidence that they were. Those of the HD video conference were not perceivable, although unlike EyeCVE it was not tested over a public network. In both systems, display and capture latencies exceeded those introduced by the network and computation.
Eye-catching was a collaborative project. The contribution of the University of Salford was project co-ordination; development of the core software of EyeCVE; initial integration of eye-tracking with stereo glasses; assisting other partners in developing and integrating software and hardware; designing experiments; and leading the experiments that measured the ability to interpret gaze, the comparisons between EyeCVE and video conferencing, and the real-time performance of both approaches.
We wish to thank our sponsors, the EPSRC (Grant EP/E007406/1) and INTUITION.