Saturday, March 17, 2007

Kent Presentation

Quick plug.

I’ve been invited to give a presentation to the Centre for Cognitive Neuroscience and Cognitive Systems at the University of Kent next Wednesday (21st March, 2007). I’ll be presenting the results from my eye movement and film study that I discussed at SCMS and Madison, but with the focus shifted to the science. If you happen to be a University of Kent student or in the neighbourhood you should come and check it out (details).

The following morning I’m giving a guest lecture as part of Murray Smith’s Cognition and Emotion in Film course. This sounds like a fantastic course; I wish there had been a similar one when I was an undergraduate. Murray Smith is a very active member of the cognitive film theory community, and particularly of the Society for Cognitive Studies of the Moving Image. His work on emotion and empathy in film viewing is very influential.

I’m looking forward to bouncing ideas around with his students.

Seeing Spots

Thanks to David Bordwell for his very flattering comments about my presentation on his blog. David and the rest of the Madison Communication Arts department were incredibly welcoming when I presented there last Monday (12th March) and the session was an absolute pleasure.

Now that David has alluded to my findings and given a brief description of my presentation you’re probably interested in finding out more. Sadly, I’m going to have to ask you to watch this space just a while longer. As is the way in academia, publication takes a long time and the paper that describes my findings is not yet ready for public distribution… I know, I know: I’m a big tease. I promise you it will be worth the wait, and I’ll publish the paper on my blog as soon as it is available.

In the meantime I can give you a glimpse of the “little yellow dots” David refers to.

This image is a screengrab of a software tool I created, called (rather cheesily) Gazeatron. The tool allows me to plot the gaze positions of multiple viewers onto the video they were viewing. The image above illustrates where 17 people were looking during this frame of the film (the yellow spots are viewers who could hear the audio and the pink are viewers who could not). Gazeatron allows you to see the same data in real-time as the video plays. By observing the swarming behaviour of the gaze positions whilst multiple viewers watch a film you gain an incredibly detailed insight into the viewing experience. Gazeatron also provides automated analysis of eye movement features, giving objective measures to supplement the subjective observations.
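To make the idea concrete, here is a minimal sketch (in Python) of the kind of per-frame lookup a tool like Gazeatron has to do. This is my own hypothetical reconstruction, not Gazeatron’s actual code: all the names and the data layout are assumptions. For each viewer, find their most recent gaze sample at the frame’s timestamp and colour it by viewing condition.

```python
# Hypothetical sketch of a Gazeatron-style per-frame overlay.
# Each viewer's recording is a list of (timestamp, x, y) gaze samples
# sorted by timestamp; viewers are colour-coded by condition.
from bisect import bisect_right

def gaze_at_time(samples, t):
    """Return the (x, y) gaze position at time t, i.e. the most
    recent sample at or before t, or None if t precedes the data."""
    times = [s[0] for s in samples]
    i = bisect_right(times, t) - 1
    if i < 0:
        return None
    _, x, y = samples[i]
    return (x, y)

def frame_overlay(viewers, t):
    """viewers: {viewer_id: (condition, samples)} where condition is
    'audio' or 'no_audio'. Returns the dots to draw at frame time t."""
    palette = {"audio": "yellow", "no_audio": "pink"}
    dots = []
    for vid, (condition, samples) in viewers.items():
        pos = gaze_at_time(samples, t)
        if pos is not None:
            dots.append((vid, pos, palette[condition]))
    return dots
```

Actually drawing the returned dots onto the video frame is then a job for whatever video library you prefer.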

Existing eye tracking tools do not allow you to analyse film viewing in this way and, I would argue, reducing viewer attention to a film to static screenshots or data points does not give you a feel for the dynamics of the viewing experience. I’ll work on posting a video of Gazeatron so you can all see what I mean.

A bit of background on eye tracking. Each spot in the image above represents the point where a single viewer is looking. This is important as it tells us, roughly, the part of the visual field they are attending to and, therefore, processing at any moment in time. You may think you are aware of the whole visual field but in fact you are only able to process a very small portion to a high degree of accuracy at any one time. When you want to process a new part of the visual field you shift your eyes (perform a saccadic eye movement) so that the light from the new target is projected onto the region of highest sensitivity in the eye, referred to as the fovea. These saccadic eye movements are very quick and we are not aware of them, as our brains “stitch” together the images on either side to create the impression of a stable visual world. By recording these eye movements we can infer the moment-by-moment experience of a viewer.
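Automated analyses of this kind usually start by segmenting the gaze record into fixations and saccades. The simplest approach is a velocity threshold: any sample-to-sample movement faster than some cut-off counts as part of a saccade. Here is a minimal sketch, assuming evenly sampled pixel coordinates; real analyses work in degrees of visual angle and use more careful thresholds, so treat the numbers (and names) here as illustrative only.

```python
def detect_saccades(xs, ys, sample_rate_hz, velocity_threshold=500.0):
    """Flag samples whose point-to-point velocity (pixels/second here;
    real analyses use degrees of visual angle) exceeds a threshold,
    then group consecutive fast samples into saccade episodes.
    Returns a list of (start_index, end_index) pairs."""
    dt = 1.0 / sample_rate_hz
    fast = []
    for i in range(1, len(xs)):
        v = ((xs[i] - xs[i-1])**2 + (ys[i] - ys[i-1])**2) ** 0.5 / dt
        fast.append(v > velocity_threshold)
    saccades, start = [], None
    for i, f in enumerate(fast):
        if f and start is None:
            start = i            # saccade begins
        elif not f and start is not None:
            saccades.append((start, i))  # saccade ends
            start = None
    if start is not None:
        saccades.append((start, len(fast)))
    return saccades
```

At a 250 Hz sampling rate a jump of a few hundred pixels between adjacent samples is flagged immediately, while the slow drift within a fixation is not.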

Eye movements can be recorded using a technique known as eye tracking. There are a variety of ways to track somebody’s eyes, such as the scleral coil and the dual-Purkinje tracker (some clearly more scary than others). The most common technique used today, and the one I use, is corneal reflection. These trackers shine infrared light onto the eye and film the reflected image using an infrared camera. By locating the pupil and the infrared light reflected off the cornea, the gaze of the viewer can be calculated. The gaze is simply a vector pointing out from the viewer’s eye into space. Therefore, eye trackers can be used to tell us where people are looking on a computer screen, a table top, in a real-world interaction, or… whilst watching a film.
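How does a pupil and a corneal reflection become a point on the screen? During calibration the viewer fixates known screen positions, and the system fits a mapping from the pupil-to-corneal-reflection offset in the camera image to screen coordinates. The sketch below assumes the simplest possible model, an independent linear fit per axis via least squares; commercial trackers use richer models, and every name here is my own invention.

```python
def fit_axis(pc_vals, screen_vals):
    """Least-squares fit of screen = a * pc + b for one axis."""
    n = len(pc_vals)
    mean_pc = sum(pc_vals) / n
    mean_sc = sum(screen_vals) / n
    cov = sum((p - mean_pc) * (s - mean_sc)
              for p, s in zip(pc_vals, screen_vals))
    var = sum((p - mean_pc) ** 2 for p in pc_vals)
    a = cov / var
    b = mean_sc - a * mean_pc
    return a, b

def calibrate(pc_points, screen_points):
    """Fit x and y axes independently from calibration data:
    pc_points are pupil-to-corneal-reflection offsets recorded while
    the viewer fixated the known screen_points. Returns a function
    mapping a new offset to an estimated screen position."""
    ax, bx = fit_axis([p[0] for p in pc_points],
                      [s[0] for s in screen_points])
    ay, by = fit_axis([p[1] for p in pc_points],
                      [s[1] for s in screen_points])
    def gaze(pc):
        return (ax * pc[0] + bx, ay * pc[1] + by)
    return gaze
```

With calibration points at the four screen corners, the fitted function recovers intermediate gaze positions by interpolation.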

The eye trackers I use are the Eyelink II head-mounted tracker and the Eyelink 1000 tower-mounted tracker, both from SR Research. These trackers are located around the University of Edinburgh, mostly in John Henderson’s lab, of which I am part.

Tracking a viewer’s eyes whilst they watch a film is not as simple as you might think. The Eyelink trackers all come with software that allows you to present videos, but they do not currently have accompanying tools for analysing the eye movement data in the way I’ve described above. Most other trackers do not provide assistance in presenting films, and a lot of previous researchers have resorted to tracking viewers using a head-mounted real-world tracker and recording a video to see what they are looking at (a similar technique is used in driving studies). The only other tracker I have used that is suitable for presenting films is Tobii. This system is incredibly easy to use as it is aimed at usability studies and at providing a hands-free interface for disabled users. The Tobii eye trackers are incredibly well designed, but their price (~£17,000) puts them out of the reach of most users (the price issue is the same with all eye trackers). Their accuracy is also not as good as that of the Eyelink systems, which is why most vision researchers don’t use them.

If you’re looking for a cheaper option, you can build your own eye tracker. Derrick Parkhurst has developed open-source software and instructions for constructing the necessary hardware. The openEyes project is a great idea, although I’ve yet to have a go myself. If you do, best of luck, and tell me how it goes.

If anybody has any further questions about eye tracking and film please either post a comment below or e-mail me.

As for what I have found by eye tracking film viewers, well… that’ll still have to wait. Sorry. For the time being I hope you enjoy the picture of little yellow and pink spots. Who’d have thought seeing spots could be so useful!

Tuesday, March 13, 2007

SCMS and State Street Brats

I am writing this blog post from a hotel room in Madison, Wisconsin (note: the picture above is not the hotel I’m staying in. That’s the Chicago Hilton; more later). Why am I in Madison? Other than the fact that they have great frozen custard, Afghan food, Mexican food, and hotdogs, I am here to present my eye tracking research to the Department of Communication Arts at the University of Wisconsin-Madison.

If you are at all interested in Cognitive Film Theory you will be aware of Madison’s Comm. Arts department. They began teaching cognitive film theory before most of us were even aware that such an approach existed. It is home to people like Ben Singer, Jonathan Frome (now of the University of Central Florida) and, of course, the hugely influential David Bordwell (yes, that guy whose books you are always being told to read by your film teacher/lecturer/film geek friend/me). The department is so infused with cognitivist ideas that it was an absolute pleasure to present my research there. The intense discussion that my presentation provoked was incredibly invigorating. It’s great to get such an intrigued and welcoming reception for what is essentially cognitive science research (although research focussed on film). I look forward to many future exchanges with this group.

I’m probably not supposed to mention this as I’m not sure if the details are finalised, but David Bordwell and the Department of Communication Arts will be hosting the Society for Cognitive Studies of the Moving Image conference next summer (2008), so I shall be eagerly returning to Madison next year. I also want to encourage as many other people interested in issues related to cognition and film, whether from the arts, humanities, social sciences, sciences, or anywhere else, to attend this conference. It is going to be a hoot :)

Thanks to the Comm. Arts department for being so incredibly hospitable, with special thanks to Jeff Smith for being my guide and David Bordwell for being so receptive to my imposition.

Now some background: this visit to Madison was possible because I was presenting at the Society for Cinema and Media Studies (SCMS) conference in Chicago last week. The conference is one of the foremost gatherings of cinema, television, and media theorists from around the world. I attended two years ago when it was held in London and my appetite was whetted. This year I presented a paper about my recent eye tracking examinations of different editing techniques across a range of films. The paper was really well received and a lot of people were very intrigued by the potential for eye tracking as a tool in their analysis of film. The technology is getting close to the point where most researchers would be able to perform the kind of analysis I do on films. However, a couple of key components, such as analysis and visualisation tools, are currently missing from most commercial eye tracking systems and would be required for the technique to really take off. I have developed my own tools that fill these gaps, but most researchers would not be able to do this. And, of course, the cost of most easy-to-use eye tracking systems is still prohibitive.

Who knows, maybe the technology will suddenly take both a cost and a technological leap forward and it’ll become accessible to all. Watch this space…

Returning to SCMS, the conference was held in the Chicago Hilton (very swish… well, the lobby is anyway; the conference presentation rooms/bedrooms are a tad odd). I was a complete conference geek, attending almost every session. Considering that the days ran 8:15am-8pm this is quite an achievement. The reason I attended so many sessions was the incredible range of interesting presentations. Everything from a bit of Cognitive Film Theory (Jonathan Frome, Joe Kickasola, Mark Minett), masses on New Media, Interactive Media, and Videogames, emotions and film, including discussion of automatic facial expression recognition (Kelly Gates), and even a presentation on the Queering of Kevin Smith (it doesn’t take much ;)… Carter Soles). This year there seemed to be a lot of presentations on the impact that on-line distribution, web video, and interactive TV and media such as videogames are having on our classical theories of film and television. Fascinating stuff. One of the panels I found most satisfying debated the implications of interfaces for interactive TV content, e.g. TiVo and PVRs, and their effect on our relationship to the film/TV content. Does the interface, which is meant to empower the viewer by allowing them access to the content, actually compete with the content itself?

So, all-in-all a great conference and trip to Madison. I look forward to coming back in 2008.