
Telepresence

Telepresence projects include free viewpoint 3D video reconstruction, adaptive lifelike architecture, and Eye-catching, a project on communicating eye gaze across a distance.

Free viewpoint 3D video reconstruction

Video-based reconstruction offers the potential to faithfully communicate both appearance and attention. Video conferencing can faithfully communicate what someone looks like but not what they are looking at; immersive collaborative virtual environments do the opposite. We are both building and studying the use of free viewpoint 3D video reconstruction from multiple 2D video streams, and have implemented both view-dependent volumetric and view-independent polyhedral approaches to reconstructing form.

Approach

We have implemented both reconstruction algorithms and multi-stream capture that allow real-time frame rates for reconstructions from more than ten HD colour cameras. We have built a unique test facility, the Octave, which allows both immersive display and capture to support this research.
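
One requirement this places on capture is temporal alignment: frames from the many unsynchronised cameras must be matched in time before a reconstruction is attempted, as our synchronisation publications below examine. Purely as an illustrative sketch, assuming hypothetical timestamped frame buffers rather than our actual capture system's interfaces:

    # Illustrative only: choosing a time-aligned set of frames across
    # unsynchronised cameras. The Frame type, buffers and tolerance are
    # hypothetical, not the project's actual capture interfaces.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        timestamp: float  # seconds since capture started
        data: bytes       # encoded image payload

    def aligned_frame_set(buffers, target_time, tolerance=0.010):
        """Pick from each camera's buffer the frame nearest target_time.
        Returns None if any camera's best frame is further than tolerance
        away, since a mismatched set would distort the reconstructed shape."""
        chosen = []
        for frames in buffers:
            best = min(frames, key=lambda f: abs(f.timestamp - target_time))
            if abs(best.timestamp - target_time) > tolerance:
                return None  # no usable frame from this camera at this instant
            chosen.append(best)
        return chosen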

Figure: 3D reconstruction of David Roberts using image manipulation and polyhedral-based reconstruction.

Volumetric reconstruction

We create 3D models at real-time frame rates using our parallelised implementation of an EPVH (exact polyhedral visual hull) algorithm. Images are captured and backgrounds segmented, both in real time, by our multi-video capture/manipulation system.
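
As a rough sketch of the segmentation step, here is a minimal single-camera loop using OpenCV's stock MOG2 background subtractor; our multi-video system is custom-built and handles many HD streams in parallel, so this is illustrative only:

    # Illustrative only: real-time background segmentation producing the
    # per-camera silhouettes that visual-hull reconstruction consumes.
    # Assumes a single webcam; the project's system handles 10+ HD cameras.
    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    capture = cv2.VideoCapture(0)

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # raw foreground mask (shadows = 127)
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        cv2.imshow("silhouette", mask)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break

    capture.release()
    cv2.destroyAllWindows()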

We have been studying the impact of camera placement on the ability of vision-based reconstruction to faithfully communicate appearance and attention.

The reconstructions displayed above were created from images captured within our Octave display and capture facility.

Figure: Silhouette cones used to reconstruct the shape of what is being captured through multiple cameras.

Figure: The impact of good camera placement on shape reconstruction.

Figure: The impact of poor camera placement on shape reconstruction.
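
The silhouette-cone intersection illustrated above can be sketched most simply as voxel space carving: a voxel is kept only if it projects inside every camera's silhouette, which is also why camera placement so strongly affects the recovered shape. A minimal numpy sketch, assuming calibrated 3x4 projection matrices and binary silhouette masks as hypothetical inputs (our production pipelines are the volumetric and polyhedral implementations described above):

    # Illustrative voxel space carving (shape-from-silhouette).
    # proj_mats: list of 3x4 camera projection matrices (assumed calibrated)
    # silhouettes: list of binary (H, W) masks from background segmentation
    import numpy as np

    def carve(proj_mats, silhouettes, bounds, resolution=64):
        """Return a boolean voxel grid; True where all silhouettes agree.
        Assumes every voxel lies in front of every camera."""
        (x0, x1), (y0, y1), (z0, z1) = bounds
        xs = np.linspace(x0, x1, resolution)
        ys = np.linspace(y0, y1, resolution)
        zs = np.linspace(z0, z1, resolution)
        X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
        # Homogeneous world coordinates of every voxel centre, shape (4, N).
        pts = np.stack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(X.size)])
        occupied = np.ones(X.size, dtype=bool)
        for P, sil in zip(proj_mats, silhouettes):
            h, w = sil.shape
            uvw = P @ pts  # project all voxel centres into this camera
            u = (uvw[0] / uvw[2]).round().astype(int)
            v = (uvw[1] / uvw[2]).round().astype(int)
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(X.size, dtype=bool)
            hit[inside] = sil[v[inside], u[inside]] > 0
            occupied &= hit  # carve away voxels outside this silhouette cone
        return occupied.reshape(X.shape)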

Publications

  • Duckworth, T & Roberts, D J 2011, Accelerated Polyhedral Visual Hulls using OpenCL, in: 'IEEE Virtual Reality', IEEE, Singapore, Singapore, pp.203-204.
  • Duckworth, T & Roberts, D J 2011, Camera Image synchronization in Multiple Camera Real-time 3D Reconstruction of Moving Humans, in: 'Proceedings of 15th Int. Symp. On Distributed Simulation and Real Time Applications', IEEE, Salford, UK, pp.138-144.
  • Moore, C & Duckworth, T & Roberts, D J 2011, Investigating the Suitability of a Software Capture Trigger in a 3D Reconstruction System for Telepresence, in: 'IEEE/ACM Proceedings of 15th Int. Symp. On Distributed Simulation and Real Time Applications', IEEE, Salford, UK, pp.134-137.
  • Aspin, R & Roberts, D J 2011, A GPU based, projective multi-texturing approach to reconstructing the 3D human form for application in tele-presence, in: 'ACM Computer Supported Co-operative Working', ACM, New York, USA, pp.105-102
  • Aspin, R & Roberts, D 2010, An Exploration of Non-tessellated 3D Space Carving for Real-Time 3D Reconstruction of a Person through a Simulated Process, in: 'Visual Media Production (CVMP), 2010 Conference on', IEEE, London, UK, pp.151-160. Conference details: Visual Media Production (CVMP), 2010 Conference on, London
  • Moore, C & Duckworth, T & Aspin, R & Roberts, D 2010, Synchronization of Images from Multiple Cameras to Reconstruct a Moving Human, in: 'Distributed Simulation and Real Time Applications (DS-RT), 2010 IEEE/ACM 14th International Symposium on', IEEE/ACM, Fairfax, VA, USA, pp.53-60. Conference details: 2010 IEEE/ACM 14th International Symposium on Distributed Simulation and Real Time Applications (DS-RT), October 2010.

Adaptive lifelike architecture

  • What if the building you were in appeared to be alive or even intelligent?
  • What if its walls moved as if organic skin on an animal?
  • What if it tried to advise you?

A set of four experiments has been carried out to begin to answer these questions. The results suggest that such places are likely to be found both comfortable and useful to be and work in.

The use of interactive elements in buildings is on the rise in design circles, and these buildings are proving popular in both private and public projects. This research studied, through the use of virtual environments, the effects that interactive or seemingly intelligent architecture might have on its inhabitants in terms of experience and productivity. To do so, a series of research questions sought to establish:

  • What is meant by interactive or intelligent architecture?
  • Are virtual environments a suitable test medium?
  • What effects do virtual environments have on users?

This research concluded that interactive or seemingly intelligent architecture is architecture that can react to its users and change its properties (colour, shape, sound and so on) in real time, and that virtual environments are a suitable test medium because people tend to behave in them as they would in real life, whether the environments are lab-based or online. The experiments performed in this research all confirmed the hypothesis that seemingly interactive or intelligent architecture has a positive effect on users' performance: over 90% of test subjects performed better when the walls around them moved. The experiments also showed that such types of architecture are more appealing for people to stay, work and socialise in.

All this suggests that the presence of interactive or intelligent elements within a building is likely to have positive effects on its users, increasing the productivity, comfort and sociability of its inhabitants. The transferability of these results to the real world is yet to be tested; the experiments made here, coupled with the increasing popularity of such types of building in real life, suggest that such tests are a viable option for future experimentation.

Publications

  • Adi, M N & Roberts, D J 2011, The Use of Online Virtual Environments to Assess the Appeal of Interactive Elements within Buildings, in: 'Proceedings of Cyber Worlds CW2011', IEEE, Banff, Canada.
  • Adi, M N & Roberts, D J 2011, Building Interactivity, is It Appealing?, in: 'IEEE Int Symp VR Innovation VRSI', IEEE, Singapore, Singapore, pp.337-338.
  • Adi, M N & Roberts, D J 2011, Using VR to Assess the Impact of Seemingly Life Like and Intelligent Architecture on People's Ability to Follow Instructions from a Teacher, in: 'IEEE Int Symp VR Innovation VRSI', IEEE, Singapore, Singapore, pp.25-31.
  • Adi, M N & Roberts, D J 2010, Can you help me concentrate room?, in: 'IEEE Virtual Reality', IEEE, Waltham, USA, pp.133-144

Eye-catching

Eye-catching was an EPSRC-funded project that sought to communicate eye gaze across a distance through video and computer graphics technology. The partners were the University of Salford, UCL, the University of Reading, the University of Roehampton, Electrosonic, Visual Acuity, Avanti Screen Media and SGI.

Summary

Environmental, economic and social pressures push for a reduction in dependence on travel, yet in many situations there is no substitute for a face-to-face meeting, despite advances in CSCW. Eye gaze is important in many interactions, for example in the building of trust and rapport, and in gauging focus of attention in conversational turn taking or when discussing shared artefacts. Conventional video conferencing does not attempt to support communicational eye gaze, but approaches are emerging that appear to address this when reproducing a round-the-table experience across a distance. The project "Eye-catching" freed participants from their seats and allowed them to move around a fully shared space while still supporting communicational eye-gaze. This was achieved by integrating eye-tracking into a custom-built immersive collaborative virtual environment (EyeCVE), so that life-sized avatars, which follow the movements and gaze of remote people, are projected into the local space. Our experiments gauged people's ability to discern what and who others were looking at, looked for signs of naturalness of eye-gaze behaviour, and measured task performance. Scenarios included a round-the-table interview, gauging what or who was looked at, and an object-focussed puzzle. To compare our approach to video conferencing, we painstakingly arranged high definition cameras and large screens, carefully picking and adjusting lenses to provide optimal alignment across the remote spaces.

Eye gaze experimentation at the University of Salford 

Aim

Evaluate the role of eye-gaze in tele-communication so as to better design future communication technologies.

Objectives and how they were met

All the original objectives were met as we:

  • Built and studied the world's first tele-collaboration system that supports two- and three-way communicational eye-gaze without restricting the movements of participants, by extending three IPTs to support eye-gaze in an immersive collaborative virtual environment (iCVE) that uses 3D computer graphics to represent the activity of remote people within the surrounding workspace.
  • Established what conditions are necessary and sufficient to support communicational eye-gaze in a tele-communication system, through perception tests and eye-gaze recordings. Specifically, we found a correct perspective view of an evenly lit representation of the face to be necessary and often sufficient; in some circumstances representation of eye movement was also necessary.
  • Validated the support of eye-gaze in tele-communication by measuring its impact on collaboration, using collaborative task experimentation and behavioural analysis. We found that typical eye-gaze practices were evident among users of the eye-gaze enabled iCVE, but we were unable to reproduce these in video conferencing.
  • Measured the impact of the technology by comparing eye-gaze in the shared workspace of the extended iCVE to that of AccessGrid and carefully aligned high definition video conferencing configured to best support eye-gaze. We found gaze can only be accurately discerned across video conferencing when all participants are constrained, for example, by sitting upright in carefully positioned seats. In contrast, we showed that the free viewpoint nature of iCVE allows gaze to be discerned between multiple moving people in iCVE.
  • Established, through interactional analysis, that eye-gaze is important in conversations when people stand close enough to see the whites of the eyes and at least one party moves their eyes without moving their head.
  • Established that eye gaze was critical when discerning which of a number of objects placed between two users was being looked at, when the gazer used little head movement.

Key findings

  • Video conferencing faithfully communicates what someone looks like whereas Immersive Collaborative Virtual Environments faithfully represent what they are looking at.
  • Gaze can only be accurately discerned across video conferencing when all participants are constrained, for example, by sitting upright in carefully positioned seats. All people positioned in front of a video wall think they are being looked at when a remote person looks at the image of someone close to the camera and thus none realise that they are being looked at when their image on the remote wall is to one side of the camera.
  • Immersive Collaborative Virtual Environments allow gaze to be accurately interpreted, through avatar representation, while a physically distributed group move around a virtually shared space.
  • The addition of tracked eyes in avatars significantly improves people's ability to discern gaze in some but not all circumstances within Immersive Collaborative Virtual Environments.
  • Tracked eyes are critical in discerning the gaze of a person who does not turn their head to look at an object, but head tracking alone seems sufficient when people do turn their heads to look (a sketch follows this list).
  • Other factors, such as lighting, resolution and contrast, play a large role in accuracy of gaze estimation, both in video conferencing and Immersive Collaborative Virtual Environments.
  • While tracked eyes consistently improved task performance, the difference was not statistically significant.
  • Avatar gaze was found to follow standard gaze practices.
  • Communication of eye-gaze requires both spatial and temporal alignment of people's viewpoint and actions.
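
To make the head-versus-eye-tracking findings concrete, the sketch below shows one simple way a gaze target can be discerned from tracking data: a tracked head pose is combined with an eye-in-head direction into a world-space gaze ray, and the candidate object nearest that ray is selected. All names and frames are hypothetical illustrations, not EyeCVE's actual interfaces; with head tracking alone the eye-in-head direction is fixed, which is exactly why tracked eyes matter when the head stays still.

    # Hypothetical sketch: discerning which object is being looked at by
    # combining a tracked head pose with an eye-in-head direction, then
    # picking the candidate object nearest the resulting gaze ray.
    import numpy as np

    def gaze_target(head_pos, head_rot, eye_dir_local, objects):
        """head_pos: (3,) world position; head_rot: (3,3) rotation matrix;
        eye_dir_local: (3,) gaze direction in the head frame (from the eye
        tracker, or a fixed [0, 0, -1] if only head tracking is available);
        objects: dict name -> (3,) world position. Returns the object whose
        centre lies closest to the gaze ray, with its angular offset."""
        gaze_dir = head_rot @ eye_dir_local  # gaze ray in world frame
        gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
        best, best_angle = None, np.inf
        for name, pos in objects.items():
            to_obj = pos - head_pos
            to_obj = to_obj / np.linalg.norm(to_obj)
            angle = np.degrees(np.arccos(np.clip(gaze_dir @ to_obj, -1.0, 1.0)))
            if angle < best_angle:
                best, best_angle = name, angle
        return best, best_angle

    # With head tracking only, eye_dir_local never changes; adding tracked
    # eyes updates it per frame, which matters when the head stays still.
    objects = {"cube": np.array([0.5, 0.0, -1.0]),
               "ball": np.array([-0.5, 0.0, -1.0])}
    print(gaze_target(np.zeros(3), np.eye(3),
                      np.array([0.35, 0.0, -0.94]), objects))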

While both the end-to-end latency and update rate of EyeCVE could have been perceived, there is no evidence that they were. Those of the HD video conference were not perceivable, although unlike EyeCVE it was not tested on a public network. In both systems, display and capture latencies exceeded those induced by the network and computation.

Figure: Eye tracking equipment.

Outputs

Keynote

  • Face to Face, David Roberts, ACM Multimedia, Montreal, Canada, 2009

Invited Presentations

  • EPSRC, David Roberts, Poster, People in Systems Day, London, 2009
  • Eye-catching, David Roberts, Reality Centre Special Interest Group, 2007

Publications

  • D. Roberts, R. Wolff, J. Rae, A. Steed, R. Aspin, M. McIntyre, A. Pena, O. Oyekoya and W. Steptoe, Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together, in IEEE Virtual Reality 2009, pp. 135-142, Lafayette, USA, 2009.
  • W. Steptoe, O. Oyekoya, A. Murgia, R. Wolff, J. Rae, E. Guimaraes, D. Roberts and A. Steed, Eye-Tracking for Avatar Eye-Gaze Control during Object-Focused Multiparty Interaction in Immersive Collaborative Virtual Environments, in IEEE Virtual Reality 2009, Lafayette, USA, 2009.
  • R. Wolff, D. Roberts, A. Murgia, N. Murray, J. Rae, W. Steptoe, A. Steed and P. Sharkey, Communicating Eye Gaze across a Distance without Rooting Participants to the Spot, in Proc. 11th IEEE/ACM Int. Symp. on Distributed Simulation and Real Time Applications (DS-RT), pp. 111-118, Vancouver, Canada, October 2008. Runner-up for the best paper award and shortlisted for journal publication.
  • A. Murgia, R. Wolff, W. Steptoe, P. Sharkey, D. Roberts and E. Guimaraes, A Tool for Analyzing and Replaying Gaze-Enhanced Collaborative Sessions in CAVE-like Environments, in Proc. 11th IEEE/ACM Int. Symp. on Distributed Simulation and Real Time Applications (DS-RT), pp. 252-258, Vancouver, Canada, October 2008.
  • W. Steptoe, R. Wolff, A. Murgia, E. Guimaraes, J. Rae, P. Sharkey, D. Roberts and A. Steed, Eye-Tracking for Avatar Eye-Gaze and Interactional Analysis in Immersive Collaborative Virtual Environments, in Proc. ACM Conference on Computer Supported Cooperative Work, San Diego, USA, 2008, pp. 197-200.
  • N. Murray, D. Roberts, A. Steed, P. Sharkey, P. Dickerson, J. Rae and R. Wolff, Eye Gaze in Virtual Environments: Evaluating the Need and Initial Work on Implementation, Concurrency and Computation: Practice and Experience, Wiley, ISSN 1532-0626, 2008.
  • N. Murray, D. Roberts, A. Steed, P. Sharkey, J. Rae and P. Dickerson, An Assessment of Eye Gaze Potential within Immersive Virtual Environments, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), Volume 3, Issue 4, December 2007, pp. 1-17.
  • N. Murray and D. Roberts, Comparison of Head Gaze and Head and Eye Gaze within an Immersive Environment, in Proc. 10th IEEE/ACM Int. Symp. on Distributed Simulation and Real Time Applications (DS-RT), 2006. Winner of the best paper prize.

Partners

  • University of Salford
  • University of Reading
  • UCL
  • University of Roehampton

Investigators

  • David Roberts, Project Leader and Principal Investigator at the University of Salford
  • Anthony Steed, Principal Investigator at University College London
  • Paul Sharkey, Principal Investigator at the University of Reading
  • John Rae, Principal Investigator at the University of Roehampton
  • Paul Dickerson, Co-Investigator at the University of Roehampton
  • Norman Murray, Co-Investigator at the University of Salford

Researchers

  • Robin Wolff and Adriana Pena (visiting INTUITION placement), University of Salford
  • Alessio Murgia, University of Reading
  • Will Steptoe and Wole Oyekoya, UCL
  • Estefania Guimaraes, University of Roehampton

Commercial Contributors

  • Electrosonic, Visual Acuity, Avanti Screen Media, SGI

Contribution of University of Salford

Eye-catching was a collaborative project. The University of Salford contributed project co-ordination; development of the core EyeCVE software; initial integration of eye-tracking with stereo glasses; assistance to other partners in developing and integrating software and hardware and in designing experiments; and leadership of the experiments that measured the ability to interpret gaze, compared EyeCVE with video conferencing, and assessed the real-time performance of both approaches.

Sponsors

We wish to thank the sponsors: the EPSRC (Grant EP/E007406/1) and INTUITION.