
Autonomous Systems and Robotics Research Centre

Supporting Communicational Eye Gaze Across a Distance

Eye-catching was an EPSRC-funded project that sought to communicate eye gaze across a distance through video and computer graphics technology. The partners were the University of Salford, UCL, the University of Reading, the University of Roehampton, Electrosonic, Visual Acuity, Avanti Screen Media and SGI.

Summary

Environmental, economic and social pressures push for a reduction in dependence on travel, yet in many situations there is no substitute for a face-to-face meeting, despite advances in CSCW. Eye gaze is important in many interactions, for example in building trust and rapport, and in gauging focus of attention during conversational turn-taking or when discussing shared artefacts. Conventional video conferencing does not attempt to support communicational eye gaze, but approaches are emerging that appear to address this when reproducing a round-the-table experience across a distance. The project "Eye-catching" freed participants from their seats and allowed them to move around a fully shared space while still supporting communicational eye-gaze. This was achieved by integrating eye-tracking into a custom-built immersive collaborative virtual environment (EyeCVE), so that life-sized avatars, which follow the movements and gaze of remote people, are projected into the local space. Our experiments gauged people's ability to discern what and who others were looking at, looked for signs of natural eye-gaze behaviour, and measured task performance. Scenarios included a round-the-table interview, gauging what or who was looked at, and an object-focussed puzzle. To compare our approach to video conferencing, we painstakingly arranged high-definition cameras and large screens, carefully picking and adjusting lenses to provide optimal alignment across the remote spaces.
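To illustrate the general idea behind driving a remote avatar from tracking data (not the actual EyeCVE code), the minimal Python sketch below shows how a head-tracker sample and an eye-tracker sample might be composed into a world-space gaze ray that a remote site could apply to an avatar's eyes. All type names, fields and the simple yaw/pitch composition are illustrative assumptions rather than the project's implementation.

```python
# Illustrative sketch only: hypothetical types and a simplified yaw/pitch
# composition, not the actual EyeCVE implementation.
import math
from dataclasses import dataclass

@dataclass
class HeadSample:
    position: tuple   # head position in the shared space (x, y, z), metres
    yaw: float        # head orientation in world space, radians
    pitch: float

@dataclass
class EyeSample:
    yaw: float        # gaze direction relative to the head, radians
    pitch: float

def gaze_ray(head: HeadSample, eye: EyeSample):
    """Compose head pose and eye rotation into an approximate world-space gaze ray."""
    yaw = head.yaw + eye.yaw        # simplification: a full solution would compose rotations properly
    pitch = head.pitch + eye.pitch
    direction = (math.cos(pitch) * math.sin(yaw),
                 math.sin(pitch),
                 math.cos(pitch) * math.cos(yaw))
    return head.position, direction  # ray origin and unit direction

# Each site streams its (head, eye) samples to the others; a receiving site
# poses the sender's avatar from the head sample and orients the avatar's
# eyes along the reconstructed ray, so observers can judge what is looked at.
head = HeadSample(position=(0.0, 1.7, 0.0), yaw=math.radians(20), pitch=0.0)
eye = EyeSample(yaw=math.radians(-5), pitch=math.radians(-10))
print(gaze_ray(head, eye))
```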

Eye gaze experimentation at the University of Salford 

Aim

Evaluate the role of eye-gaze in tele-communication so as to better design future communication technologies.

Objectives and how they were met

All the original objectives were met as we:

  • Built and studied the world's first tele-collaboration system to support two- and three-way communicational eye-gaze without restricting the movements of participants, by extending three Immersive Projection Technology (IPT) facilities to support eye-gaze in an immersive collaborative virtual environment (iCVE) that uses 3D computer graphics to represent the activity of remote people within the surrounding workspace.
  • Established what conditions are necessary and sufficient to support communicational eye-gaze in a tele-communication system, through perception tests and eye-gaze recordings. Specifically, we found a correct perspective view of an evenly lit representation of the face to be necessary and often sufficient; in some circumstances, representation of eye movement was also necessary.
  • Validated the support of eye-gaze in tele-communication by measuring its impact on collaboration, using collaborative task experimentation and behavioural analysis. We found that typical eye-gaze practices were evident in users of the eye-gaze-enabled iCVE, but we were unable to reproduce these in video conferencing.
  • Measured the impact of the technology by comparing eye-gaze in the shared workspace of the extended iCVE to that of AccessGrid and to carefully aligned high-definition video conferencing configured to best support eye-gaze. We found that gaze can only be accurately discerned across video conferencing when all participants are constrained, for example by sitting upright in carefully positioned seats. In contrast, we showed that the free-viewpoint nature of the iCVE allows gaze to be discerned between multiple moving people.
  • Established, through interactional analysis, that eye-gaze is important in conversations when people stand close enough to see the whites of the eyes and at least one party moves their eyes without moving their head.
  • Established that eye-gaze was critical in discerning which of a number of objects placed between two users was being looked at, when the gazer used little head movement.

Key findings

  • Video conferencing faithfully communicates what someone looks like whereas Immersive Collaborative Virtual Environments faithfully represent what they are looking at.
  • Gaze can only be accurately discerned across video conferencing when all participants are constrained, for example by sitting upright in carefully positioned seats. Everyone positioned in front of a video wall thinks they are being looked at when a remote person looks at the image of someone close to the camera, and thus no one realises they are being looked at when their image on the remote wall lies to one side of the camera.
  • Immersive Collaborative Virtual Environments allow gaze to be accurately interpreted, through avatar representation, while a physically distributed group move around a virtually shared space.
  • The addition of tracked eyes in avatars significantly improves people's ability to discern gaze in some but not all circumstances within Immersive Collaborative Virtual Environments.
  • Tracked eyes are critical in discerning the gaze of a person who is not turning their head to look at an object, but head tracking seems sufficient when people turn their heads to look.
  • Other factors, such as lighting, resolution and contrast, play a large role in the accuracy of gaze estimation, both in video conferencing and in Immersive Collaborative Virtual Environments.
  • While tracked eyes consistently improved task performance, the improvement was not statistically significant.
  • Avatar gaze was found to follow standard gaze practices.
  • Communication of eye-gaze requires both spatial and temporal alignment of people's viewpoint and actions.

Although both the end-to-end latency and the update rate of EyeCVE were potentially perceivable, there is no evidence that participants perceived them. Those of the HD video conference were not perceivable, although, unlike EyeCVE, it was not tested over a public network. In both systems, display and capture latencies exceeded those introduced by the network and computation.

Eye tracking equipment 

Outputs

Keynote

  • Face to Face, David Roberts, ACM Multimedia, Montreal, Canada, 2009

Invited Presentations

  • EPSRC, David Roberts, Poster, People in Systems Day, London, 2009
  • Eye-catching, David Roberts, Reality Centre Special Interest Group, 2007

Publications

  • D. Roberts, R. Wolff, J. Rae, A. Steed, R. Aspin, M. McIntyre, A. Pena, O. Oyekoya, and W. Steptoe, Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze Enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together, in IEEE Virtual Reality 2009, pp. 135-142, Lafayette, USA, 2009.
  • W. Steptoe, O. Oyekoya, A. Murgia, R. Wolff, J. Rae, E. Guimaraes, D. J. Roberts, and A. Steed, Eye-Tracking for Avatar Eye-Gaze Control during Object-Focused Multiparty Interaction in Immersive Collaborative Virtual Environments, in IEEE Virtual Reality 2009, Lafayette, USA, 2009.
  • R. Wolff, D. Roberts, A. Murgia, N. Murray, J. Rae, W. Steptoe, A. Steed, and P. Sharkey, Communicating Eye Gaze across a Distance without Rooting Participants to the Spot, in Proc. 11th IEEE/ACM Int. Symp. on Distributed Simulation and Real Time Applications (DSRT), pp. 111-118, Vancouver, Canada, October 2008. Second in the best paper award and shortlisted for journal publication.
  • A. Murgia, R. Wolff, W. Steptoe, P. Sharkey, D. Roberts, and E. Guimaraes, A Tool for Analyzing and Replaying Gaze-Enhanced Collaborative Sessions in CAVE-like Environments, in Proc. 11th IEEE/ACM Int. Symp. on Distributed Simulation and Real Time Applications (DSRT), pp. 252-258, Vancouver, Canada, October 2008.
  • W. Steptoe, R. Wolff, A. Murgia, E. Guimaraes, J. Rae, P. Sharkey, D. Roberts, and A. Steed, Eye-Tracking for Avatar Eye-Gaze and Interactional Analysis in Immersive Collaborative Virtual Environments, in Proc. ACM Conference on Computer Supported Cooperative Work (CSCW), pp. 197-200, San Diego, USA, 2008.
  • N. Murray, D. Roberts, A. Steed, P. Sharkey, P. Dickerson, J. Rae, and R. Wolff, Eye Gaze in Virtual Environments: Evaluating the Need and Initial Work on Implementation, in Concurrency and Computation: Practice and Experience, Wiley, ISSN 1532-0626, 2008.
  • N. Murray, D. Roberts, A. Steed, P. Sharkey, J. Rae, and P. Dickerson, An Assessment of Eye Gaze Potential within Immersive Virtual Environments, in ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), Volume 3, Issue 4, pp. 1-17, December 2007.
  • N. Murray and D. Roberts, Comparison of Head Gaze and Head and Eye Gaze within an Immersive Environment, in Proc. 10th IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications (DSRT), 2006. Winner of the best paper prize.

Partners

  • University of Salford
  • University of Reading
  • UCL
  • University of Roehampton

Investigators

  • David Roberts, Project Leader and Principal Investigator at University of Salford
  • Anthony Steed, Principal Investigator at University College London
  • Paul Sharkey, Principal Investigator at University of Reading
  • John Rae, Principal Investigator at University of Roehampton
  • Paul Dickerson, Co-Investigator at University of Roehampton
  • Norman Murray, Co-Investigator at University of Salford

Researchers

  • Robin Wolff, Adriana Pena (visiting INTUITION placement), University of Salford
  • Alessio Murgia, University of Reading
  • Will Steptoe & Wole Oyekoya, UCL
  • Estefania Guimaraes, Roehampton

Commercial Contributors

  • Electrosonic, Visual Acuity, Avanti Screen Media, SGI

Contribution of University of Salford

Eye-catching was a collaborative project. The contribution of the University of Salford comprised project co-ordination; development of the core EyeCVE software; initial integration of eye-tracking with stereo glasses; assisting other partners in developing and integrating software and hardware and in designing experiments; and leading the experiments that measured the ability to interpret gaze, the comparisons between EyeCVE and video conferencing, and the real-time performance of both approaches.

Sponsors

We wish to thank the sponsors: EPSRC (Grant EP/E007406/1) and INTUITION.