Wednesday, December 14, 2011

SCSMI 2012: where Kuleshov is King!

This is rather late notice for anybody interested in Cognitive Film Theory/Film Cognition. The annual meeting of the Society for Cognitive Studies of the Moving Image (SCSMI) will be hosted by Sarah Lawrence College and New York University next June 13th-16th in New York, and the proposal deadline is TOMORROW, December 15th 2011.

This is my favourite conference for a variety of reasons:

1) It is the only place where film makers, theorists, psychologists, cognitive scientists, neuroscientists and anybody else interested in film come together to present their work, exchange ideas and contribute to the international community of film cognition.

2) All of the big names in the field attend and the small scale of the conference means that you will get the opportunity to meet them in person and exchange ideas.

3) The presentation format is incredibly generous: 25-minute presentations + 20 minutes of questions! Traditionally there have never been more than 3 parallel strands. This is incredibly conducive to open, sprawling and exciting discussions.

4) The society is full of generally lovely people and we always have a great time.

If you are interested in attending the deadline is TOMORROW, December 15th 2011. Details of how to submit are available here: http://scsmi-online.org/conference

Hope to see lots of the old faces and many, many new faces in New York next June.



Wednesday, October 12, 2011

Smooth Pursuit on BBC Breakfast

BBC Breakfast contacted me yesterday about my eye tracking and film research. They were putting together a piece about how we can use eye tracking (and other methods such as EEG) to monitor people's experience of entertainment such as film and literature. I didn't have much input to the piece but they did start with a clip of my There Will Be Blood study. The original report of that study is available here: http://www.davidbordwell.net/blog/2011/02/14/watching-you-watch-there-will-be-blood/

The BBC Breakfast piece can be seen here (it starts at 15:22):

http://www.bbc.co.uk/news/12550706

The clip they chose to feature was not included in my earlier discussion. It is very interesting from the eye movement perspective because it shows how our attention picks out motion and tracks it even when it is momentarily out of view. For instance, when the shot begins the viewers do not know what they are looking at so their gaze defaults to screen centre and the vanishing point of the train tracks.



The sound of a car's engine and the slight motion in the distance capture viewer attention and everybody looks in the same place.


As the camera pans right the viewers continue tracking the car. Because the car and the camera are both moving, the viewer's eyes may actually be stationary on the screen even though they perceive the car as moving. In the real world we would either pursue the car by rotating our eyes with it or rotate our head to keep the car at the centre of our gaze without moving our eyes. The pan in this example serves the same purpose as a head rotation.


As the car disappears behind the building, the fact that the camera continues to move with it and that we can still hear its engine implies that it will reappear. Viewers try to find the car by saccading to the screen edge in anticipation of its reappearance.


Whenever a door or window appears in the building, all viewers zoom in on it, trying to catch a glimpse of the car. This high degree of attentional synchrony is expressed by the heatmap that appears periodically during the clip. If you want to know more about how this heatmap is calculated look at our paper:

Mital, P.K., Smith, T. J., Hill, R. and Henderson, J. M. (2011) Clustering of gaze during dynamic scene viewing is predicted by motion. Cognitive Computation, 3(1), 5-24 http://www.bbk.ac.uk/psychology/our-staff/academic/tim-smith/documents/Clustering_of_Gaze_During_Dynamic_Scene_Viewing_is_Predicted.pdf
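For readers curious about the mechanics, a gaze heatmap of this general kind is typically built by placing a small Gaussian blob at each viewer's gaze point and summing across viewers; where more viewers look, the map gets "hotter". Here is a minimal sketch of that idea in Python (the frame size, blur width and gaze coordinates are illustrative choices of mine, not values from the paper):

```python
import numpy as np

def gaze_heatmap(gaze_points, frame_w, frame_h, sigma=30.0):
    """Sum a 2D Gaussian centred on each viewer's gaze point.

    gaze_points: list of (x, y) screen coordinates, one per viewer,
    for a single video frame. sigma is the blur width in pixels
    (an illustrative, roughly fovea-sized choice).
    """
    ys, xs = np.mgrid[0:frame_h, 0:frame_w]
    heat = np.zeros((frame_h, frame_w))
    for gx, gy in gaze_points:
        heat += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    if heat.max() > 0:
        heat /= heat.max()  # normalise to [0, 1] for colour mapping
    return heat

# Two viewers fixating almost the same spot produce a single hot peak:
h = gaze_heatmap([(100, 80), (102, 81)], frame_w=320, frame_h=240)
print(round(float(h.max()), 2))  # 1.0 at the shared fixation
```

In practice a map like this is computed for every video frame and rendered as a colour overlay, which is essentially what you see pulsing over the clip.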


The building ends and all viewers saccade to the edge at which they believe the car will appear. The continued engine sound reinforces our belief that the car continues to exist, and as the camera pans left and stops tracking we wait patiently. When the car finally appears our diligence as viewers is rewarded.

That one graceful shot demonstrates how a combination of slow, deliberate camera movements and choreography of action can lead to incredibly focussed viewing and induce viewers to perceive actions and details, such as the car's existence and motion behind the building, without actually showing them. Compare this shot to the long-take in my previous analysis to see how two completely different shots can lead to similar coordination of viewer gaze. Masterful!

So, that was what was going on in that brief clip from BBC Breakfast. Now all I need is to get on the couch with Sian and Bill!

Monday, September 05, 2011

ECEM 2011

August 21st-25th 2011 I had the great fortune of attending the European Conference on Eye Movements (ECEM) in Marseille, France. This conference occurs every two years and, having attended it since 2003, I can say that this year was a roaring success. The quality of the presentations was second to none, the atmosphere was fun and sociable, and Marseille is a fantastically charismatic city in which to hold a conference. I presented two pieces of work. On the Monday I presented a poster on using eyetracking to inform film theory (see below or pdf here) which was very well received and gave me plenty of opportunity to ramble on about Von Trier, Welles, Intensified Continuity and the Society for Cognitive Studies of the Moving Image (SCSMI). The poster was presented outside in a beautifully tree-lined courtyard at the University of Provence, Marseille. Unfortunately, on the day of presentation the wind had whipped up and I spent most of my presentation holding on to my poster as it attempted to fly away. The result was rather comical for the audience but, the trooper that I am, I did not let it faze me and continued as enthusiastically as always.

If you are interested in reading more about the work presented in the poster you can see details of the cognitive readings of film in my Edit Blindness paper (Smith & Henderson, 2008; JEMR; here) and my modelling paper with Parag Mital (Mital, Smith, Hill & Henderson, 2011; Cog. Comp; here). The cinematic feature analysis will appear as a chapter in the book Psychocinematics published next year.

The second presentation I gave at ECEM was in a symposium on The Perception of Dynamic Scenes which I organised. This was a very successful symposium which turned out even better than I had hoped. Michael Dorr (Schepens Eye Hospital), Halszka Jarodzka (Heerlen), Sam Wass (Birkbeck), Daniel Richardson (UCL), and Ben Tatler (Dundee) all contributed to the symposium and presented a wonderful variety of perspectives on how we attend to and perceive complex, dynamic scenes. There was a real buzz about the potential for investigating dynamic scene perception at ECEM which culminated with the symposium. Over the last few ECEMs there has been a growing interest in studying eye movements on video, but technical, methodological, and theoretical limitations meant that very few people attempted it. Since the last ECEM in Southampton two years ago many labs have overcome these issues through significant innovations such as those provided by the DIEM project. We are now at an exciting tipping point which I am certain will lead to a sudden upsurge of empirical investigations of visual cognition with more naturalistic, dynamic stimuli. The potential insights into human behaviour and cognition this will provide are unlimited and I am looking forward to seeing the area develop and assisting it in any way I can.

Thanks again to all the organisers of ECEM 2011 in Marseille, including Francoise Vitu and her helpers, and I am looking forward to ECEM 2013 in Lund, Sweden!


Sunday, July 24, 2011

Hollywood calling?

A couple of months ago, I got an unexpected email with the subject “Would you like to speak at DreamWorks Animation?”. Naturally, I initially considered it a scam and replied cautiously. Their response left no doubt that it was indeed Hollywood calling.

The idea that a cognitive psychologist from London would be of interest to a Hollywood animation studio may seem odd. What the filmmakers were interested in was my recent work on how we attend to and perceive the real world.

My research usually involves showing volunteers simple patterns or photographs on a computer screen and recording how they move their eyes when trying to make sense of the image. This provides me with insights into how someone uses their eyes to sample the bits of the visual world they are interested in, stitch the details together and store them in memory. In the last few years I began applying the same methods to investigate how we watch film after the realisation that we use the same cognitive processes to watch film as we use to look at the real-world.

Films present an artificial world across a series of camera shots edited together so that only the bits important for the narrative are presented. When Shrek leaves his swamp to find Princess Fiona, for example, we only need to see him leave his house and then arrive at the Fairytale castle to work out the journey that must have happened in between. Films can create fantastical events and spaces that we can comprehend without any effort, as long as the film is edited correctly. But what distinguishes a good from a bad edit? In my research I have been trying to understand the Psychology of film viewing to answer this question.

Like all Hollywood film studios, DreamWorks Animation want to make their films as enjoyable and as effortless to watch as possible. Confusing films don't lead to big box office receipts, especially not when the film is intended for children.

At every moment during a film, the director needs to know exactly where the viewer is looking, how they are understanding the story and what they are feeling; this is no trivial task. If an edit occurs at the “wrong” time during an action sequence or cuts to the “wrong” camera position, the viewer can become disorientated and confused. In film terms, the cut is said to create a “discontinuity”. Over the 116 years cinema has existed, a suite of heuristics (rules-of-thumb) has evolved that filmmakers can use to help them avoid bad edits. These rules of continuity editing suggest, for example, that when filming a scene with two actors in a conversation, all shots of the action should be filmed from the same side of an imaginary line connecting the two actors. This 180 degree rule (named because the cameras will map out a 180 degree arc around the actors) ensures that the actors don't suddenly reverse direction on the screen across a cut and appear to be facing away from each other. Virtually all film and TV is constructed according to these rules. Watch a scene from any TV show and you will see how the cameras always stay on one side of the action.
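As a toy illustration (my own sketch, not anything filmmakers actually compute on set), "which side of the line" a camera occupies can be expressed as the sign of a 2D cross product between the actor-to-actor axis and the actor-to-camera vector; two shots cut together safely under the 180 degree rule when the signs match:

```python
def camera_side(actor_a, actor_b, camera):
    """Which side of the 180-degree line (the axis joining the two
    actors) a camera sits on: the sign of the 2D cross product of
    (B - A) and (camera - A). +1 and -1 are opposite sides of the line."""
    (ax, ay), (bx, by), (cx, cy) = actor_a, actor_b, camera
    cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (cross > 0) - (cross < 0)

# Actors face each other along the x-axis; two cameras on the same
# side of the line cut together safely, one that crosses it does not.
a, b = (0, 0), (10, 0)
cam1, cam2, cam3 = (2, -5), (8, -3), (5, 4)
print(camera_side(a, b, cam1) == camera_side(a, b, cam2))  # True: same side
print(camera_side(a, b, cam1) == camera_side(a, b, cam3))  # False: line crossed
```

Cutting from cam1 to cam3 would flip the actors' screen directions, which is exactly the reversal the rule exists to prevent.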

While the continuity rules are believed by filmmakers the world over to work, nobody fully understands why they work. DreamWorks Animation wanted to know if my experiments in film viewing could shed any light on this question.

By recording the eye movements of viewers as they watch film sequences I have been able to see which cinematic techniques succeed in guiding the viewers to the point of interest in a scene and whether a cut leads to disorientation. For example, if the action of a scene is easy to follow all viewers will watch the scene in the same way, leading to a clustering in the location of their gaze on the screen.
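One simple way to quantify this clustering (a sketch of the general idea, not the specific measure used in my papers) is to compute, frame by frame, how dispersed the viewers' gaze points are around their centroid; low dispersion means high attentional synchrony:

```python
import math

def gaze_dispersion(gaze_points):
    """Mean Euclidean distance of each viewer's gaze point from the
    group centroid for one frame. Smaller values mean tighter
    clustering, i.e. higher attentional synchrony."""
    n = len(gaze_points)
    cx = sum(x for x, _ in gaze_points) / n
    cy = sum(y for _, y in gaze_points) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in gaze_points) / n

# A tight cluster of viewers versus scattered, idiosyncratic gaze:
tight = gaze_dispersion([(100, 100), (104, 102), (98, 99)])
loose = gaze_dispersion([(50, 40), (300, 200), (120, 230)])
print(tight < loose)  # True
```

Tracking a measure like this over the course of a clip shows exactly when a film succeeds in herding everyone's eyes to the same spot and when viewers are left to explore on their own.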


Puss In Boots teaser trailer with gaze location of 16 viewers from Tim J. Smith on Vimeo.

To get a sense of this gaze clustering, watch this trailer for DreamWorks Animation's upcoming Puss In Boots (http://vimeo.com/25033301). The gaze locations of 16 viewers are each represented as a dot and a hotspot overlaid on to the video. As the gaze of multiple people clusters together the colours become hotter. Notice how gaze is clustered on Puss throughout the clip without taking in much of the background. This clip also uses the continuity editing rules to ensure that viewers shift their attention seamlessly across a cut. When Puss tosses his hat off the screen the cut is made right after the hat starts flying. The next shot continues the hat’s motion until it is caught by an enamoured admirer. Such a cut is referred to as a “match on action”. By using the sudden onset of motion to capture viewer attention and lead the eyes across the cut, the director ensures that the viewer perceives the two shots as being continuous. You can see this in the smooth shift in eye movements from one shot to the next.

At DreamWorks, I was struck by how intimately they engaged with the issues I was presenting. Their day-to-day concerns are with the minutiae of film: the nuanced animation of a facial expression, the placement of characters across a cut, the correct lighting to pick out the main character. To make these decisions, though, they have to imagine themselves as their eventual viewer and, until now, they have had no way of knowing what was going on in the minds of those viewers. I hope that by combining some of the methods and theories from cognitive psychology with their own insights about film, they will get a clearer picture of their viewers' experience. I believe that studying the psychology of film will help filmmakers continue to improve upon the kinds of unique, exciting, and moving experiences that enraptured me when I was a kid and continue to fascinate us all today.

Monday, July 18, 2011

PostDoc in Neurocinematics

http://www.aalto.fi/en/current/jobs/teaching_and_research/postdoctoral_researcher_and_doctoral_student-school_of_art_and_design-neurocine/

Aalto University School of Art and Design, Department of Motion Picture, Television and Production Design is looking for two team members to conduct research in neurocinematics:

A post-doctoral researcher (Cognitive neuroscience) AND a doctoral student (Cinema studies)

1) POST DOC EXPERIENCED IN COGNITIVE NEUROSCIENCE AND fMRI. The applicant will work together with the PI, a doctoral student, and associated neuroscientists at aivoAALTO. She or he is expected to independently collect and analyze functional magnetic resonance imaging (fMRI) data. Experience in fMRI is required, and experience in magnetoencephalography (MEG), electroencephalography (EEG) and physiological measures (e.g. eye tracking) is appreciated. A degree in neuroscience, medical sciences, engineering, mathematics, physics, or equivalent is required. Emphasis is placed on scientific writing skills, and applicants are expected to present a selection of first-author publications.

2) DOCTORAL STUDENT IN FILM STUDIES: The doctoral student must have a Master’s degree in film or media studies, or equivalent, with an explicit interest in psychology and cognitive neuroscience, and must hold a postgraduate study place at a university. The PI, Doctor of Arts Pia Tikka, will supervise the thesis work in neurocinematics in collaboration with neuroscientists at aivoAALTO.

For both positions, applicants' research potential and co-operation skills will be given particular emphasis during the selection process. Application materials should include a cover letter, a curriculum vitae, a complete list of publications, and a 1-page description of future research interests related to neurocinematics. In addition, reprints of publications (max 2) and reference letters (max 2) are appreciated.

Both positions are available for two years starting on 1st September, 2011. The salary is determined by the salary system of Aalto University.

The applications are to be submitted to the Registry of Aalto University, preferably as a single pdf file by email, no later than August 16, 2011. The right to extend the search or not to fill a position is reserved.

Note: all single pdf files should contain the applicant's last name: “NeuroCine_LASTNAME_otherinfo”. In addition, the email should be titled “Application: NeuroCine”.

The email address of the registry is rekry-taik@aalto.fi. Applications can also be sent via mail to: Aalto University School of Art and Design, Registry, P.O. Box 31000, FI‐00076 Aalto, Finland (visiting address Hämeentie 135 C, 00560 Helsinki). The registry closes at 3.00 p.m. The application documents will not be returned.

Please feel free to direct further questions to:

Pia Tikka, Ph.D.
aivoAALTO research project
Aalto University, Helsinki, Finland
- Dept. of Motion Picture, Television and Production Design
US online phone + 1 213 785 7048
Finland mobile phone +358 50 347 7432
Skype: piatikka
e-mail: pia.tikka@aalto.fi

Due to the summer holidays in July and August, phone or Skype inquiries are only taken on Fridays between 12-4 pm Finnish time (GMT +2).

Neural signature of the Uncanny Valley?

An fMRI study by a group from UC San Diego led by Ayse Pinar Saygin has investigated the phenomenon known as "The Uncanny Valley": that eerie feeling you get when watching a robot or CG animation that is attempting to be photorealistic but fails. We have all experienced this phenomenon when watching films such as Final Fantasy, or Robert Zemeckis' performance capture pieces, The Polar Express, Beowulf, A Christmas Carol and the recent film that sounded the death-knell for Zemeckis' studio, Mars Needs Moms. Given the popularity of using performance capture and even facial expression capture in films like Avatar and in the upcoming Tintin, there is a lot of interest in how to present naturalistic motion capture animation without wandering into the Uncanny Valley. The results of this new study seem to suggest that the discomfort comes from the perceptual mismatch between the authentic human motion and the inadequate appearance. We are highly sensitive to human appearance and motion and have more brain areas dedicated to the processing of these features than any other visual category. Motion capture gives us the ability to trick the brain into seeing human motion even in the absence of the corresponding appearance. This is clearly displayed in point-light walkers (check out this fun interactive demo http://www.biomotionlab.ca/Demos/BMLwalker.html). However, when the authentic biological motion is combined with an appearance that doesn't match the authenticity of the motion, it results in cognitive dissonance. This new study shows the brain's response in such a situation.

I am very intrigued to see how Steven Spielberg and Peter Jackson deal with this issue in the upcoming Tintin movies. In the first teaser trailer Spielberg avoided showing faces, perhaps to avoid the audience's negative response to seeing an uncanny Tintin. In the first full trailer (below) the faces walk an interesting line between naturalism and cartoon, very accurately capturing the character of the original Herge cartoon. Perhaps Spielberg has dodged the bullet by mismatching the motion and appearance enough to avoid cognitive dissonance. We'll have to wait until Christmas to find out.




From the press release:

"[Image: Brain response as measured by fMRI to videos of a robot, android and human]

Your Brain on Androids

July 14, 2011
By Inga Kiderra

Ever get the heebie-jeebies at a wax museum? Feel uneasy with an anthropomorphic robot? What about playing a video game or watching an animated movie, where the human characters are pretty realistic but just not quite right and maybe a bit creepy? If yes, then you’ve probably been a visitor to what’s called the “uncanny valley.”

The phenomenon has been described anecdotally for years, but how and why this happens is still a subject of debate in robotics, computer graphics and neuroscience. Now an international team of researchers, led by Ayse Pinar Saygin of the University of California, San Diego, has taken a peek inside the brains of people viewing videos of an uncanny android (compared to videos of a human and a robot-looking robot).

Published in the Oxford University Press journal Social Cognitive and Affective Neuroscience, the functional MRI study suggests that what may be going on is due to a perceptual mismatch between appearance and motion.

The term “uncanny valley” refers to an artificial agent’s drop in likeability when it becomes too humanlike. People respond positively to an agent that shares some characteristics with humans – think dolls, cartoon animals, R2D2. As the agent becomes more human-like, it becomes more likeable. But at some point that upward trajectory stops and instead the agent is perceived as strange and disconcerting. Many viewers, for example, find the characters in the animated film “Polar Express” to be off-putting. And most modern androids, including the Japanese Repliee Q2 used in the study here, are also thought to fall into the uncanny valley.

Saygin and her colleagues set out to discover if what they call the “action perception system” in the human brain is tuned more to human appearance or human motion, with the general goal, they write, “of identifying the functional properties of brain systems that allow us to understand others’ body movements and actions.”

They tested 20 subjects aged 20 to 36 who had no experience working with robots and hadn’t spent time in Japan, where there’s potentially more cultural exposure to and acceptance of androids, or even had friends or family from Japan.

The subjects were shown 12 videos of Repliee Q2 performing such ordinary actions as waving, nodding, taking a drink of water and picking up a piece of paper from a table. They were also shown videos of the same actions performed by the human on whom the android was modeled and by a stripped version of the android – skinned to its underlying metal joints and wiring, revealing its mechanics until it could no longer be mistaken for a human. That is, they set up three conditions: a human with biological appearance and movement; a robot with mechanical appearance and mechanical motion; and a human-seeming agent with the exact same mechanical movement as the robot.

At the start of the experiment, the subjects were shown each of the videos outside the fMRI scanner and were informed about which was a robot and which human.

The biggest difference in brain response the researchers noticed was during the android condition – in the parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain’s visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons (neurons also known as “monkey-see, monkey-do neurons” or “empathy neurons”).

According to their interpretation of the fMRI results, the researchers say they saw, in essence, evidence of mismatch. The brain “lit up” when the human-like appearance of the android and its robotic motion “didn’t compute.”

“The brain doesn’t seem tuned to care about either biological appearance or biological motion per se,” said Saygin, an assistant professor of cognitive science at UC San Diego and alumna of the same department. “What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent.”

In other words, if it looks human and moves likes a human, we are OK with that. If it looks like a robot and acts like a robot, we are OK with that, too; our brains have no difficulty processing the information. The trouble arises when – contrary to a lifetime of expectations – appearance and motion are at odds.

“As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners,” the researchers write. “Or perhaps, we will decide it is not a good idea to make them so closely in our image after all.”

Saygin thinks it’s “not so crazy to suggest we brain-test-drive robots or animated characters before spending millions of dollars on their development.”

It’s not too practical, though, to do these test-drives in expensive and hard-to-come-by fMRI scanners. So Saygin and her students are currently on the hunt for an analogous EEG signal. EEG technology is cheap enough that the electrode caps are being developed for home use.

The research was funded by the Kavli Institute for Brain and Mind at UC San Diego. Saygin was additionally supported by the California Institute of Telecommunication and Information Technology (Calit2) at UCSD.

Saygin’s coauthors are Thierry Chaminade of Mediterranean Institute for Cognitive Neuroscience, France; Hiroshi Ishiguro of Osaka University and ATR, Japan; Jon Driver of University College London; and Chris Frith of University of Aarhus, Denmark.

Media Contact: Inga Kiderra, 858-822-0661, or ikiderra@ucsd.edu"

Wednesday, July 06, 2011

What is the point of Art? A view from neuroscience

In conjunction with Jericho House (http://www.jerichohouse.org.uk/) there will be a talk by Prof Colin Blakemore, FRS:

What's the Point of Art? A view from neuroscience.

Time: 7-8pm

Date: 20th July 2011

Location: Lecture theatre, 33 Queen Square

There will be a discussion after the talk, led by Prof Geraint Rees.

Prof Sophie Scott will chair the event.

The talk is open to all. Please register by sending an email to sophie.scott@ucl.ac.uk

What's The Point of Art?

In the aftermath of the recent cuts, arguments about arts funding are becoming increasingly heated, yet crucial discussions as to the value and place of art in our world are distinguished by their absence. As the wider society experiences the kind of structural economic changes unseen in the UK for sixty years, the time has never been riper for a serious investigation of the role of art in our lives, and of its relationship with the individual, the state and the market.

For 2011, in collaboration with University College London, we are developing a sequence of six events on the theme ‘what is the point of art?’, each featuring a single speaker, to take place in London on dates throughout the year. The aim is to have accumulated by the end a compelling portfolio of perspectives on the value of art in our society.

Tuesday, July 05, 2011

The Illusion of Continuity, Berliner and Cohen (2011)

How we perceive spatiotemporal continuity across edited sequences of film is clearly a major interest of mine (hence the blog name!). Several theorists have written about this topic going all the way back to Munsterberg (1916), Hochberg & Brooks in the 70s and early 80s, and the pioneering Change Blindness studies of Levin & Simons in the 90s. In my thesis I discussed in depth the paradox of perceiving continuous visual scenes across visually discontinuous shots. For instance, why is a series of shots all filmed from the same side of an action such as a conversation perceived as being spatially continuous, whereas a shot that crosses to the other side of the action leads to spatial confusion? This filming and editing convention is known as the 180 Degree Rule. You can see clear demonstrations of it in this video:





Now a recent paper on the perception of continuity in film by Berliner and Cohen (2011) wonderfully brings together psychological evidence and film theory to provide an accessible and insightful overview of the topic. I highly recommend this article if you are looking for a quick primer on continuity perception.

Berliner and Cohen (2011) clearly outline the psychological motivation for the continuity editing rules but also acknowledge the role of intuition and creativity of filmmakers in choosing the right techniques to convey their particular story.

"Continuity conventions have remained relatively stable for about ninety years. The primary reason for their stability is not, as some scholars think, Hollywood’s marketing dominance or other externalities but rather that the early filmmakers who first developed the conventions were guided by their intuitive understanding of space perception and the reactions of cinema spectators. Just as expert pool players learn— not through direct study but intuitively, through trial and error—the principles of Newtonian physics that govern pool playing, as well as matter and energy generally, the filmmakers in the early twentieth century who first developed the conventions of the classical editing system, without directly studying psychology, discovered the structure of human perception."

Berliner, T. & Cohen, D. J. (2011). The Illusion of Continuity: Active Perception and the Classical Editing System. Journal of Film and Video, 63(1), 44-63.

Monday, June 13, 2011

Interview on CBC radio Canada

I had a great conversation with Michael Bhardwaj, the science correspondent for CBC radio Canada last week about eyetracking and film, the There Will Be Blood research, and my visit to Dreamworks Animation. Michael did a great job of describing my work on his radio show last Friday (10/06/11) and was kind enough to send me an mp3.

Enjoy



Monday, June 06, 2011

Guardian Science podcast

I was very fortunate to have the opportunity to discuss my research with Alok Jha on the Guardian Science Weekly podcast. If you would like to hear my views on visual attention, eye movements, the film experience and how it relates to magic check out the podcast:


If you are interested in learning more about my research you can find my publications and links to my research projects on my website.

Thursday, February 24, 2011

DIEM and thanks for all the hits

My guest post on David Bordwell's blog last week was a roaring success. I could never have imagined that it would capture the interest of so many people across so many disciplines. You can get a sense of the interest by looking at the comments and statistics for the main eye movement video. In the week the post has been up it has been viewed 145,000 times and pages embedding the video have been read 728,000 times! The video cropped up on twitter (thanks Roger Ebert and others), facebook, numerous blogs, websites and newspapers. I am incredibly happy that my research reached out to film makers, theorists, and eager consumers to inform their appreciation of film. Hopefully, you can all now get a sense of how miraculous and complex our perception of film is and how we can inform our understanding by applying methods from empirical psychology.

I plan to build on the momentum created by the blog post by posting similar cognitive readings of films here on my own blog. In the meantime, I can point you to my existing publications on the topic:

For information on the Dynamic Images and Eye Movement project (DIEM) and its analysis of the influence of visual and cinematic features on how we watch movies as presented in my analysis of There Will Be Blood, check out:
Mital, P.K., Smith, T. J., Hill, R. and Henderson, J. M. (in press) Clustering of gaze during dynamic scene viewing is predicted by motion. Cognitive Computation

On how we perceive film and the issues related to continuity, read: Smith, T.J. (2010) Film (Cinema) Perception. In E.B. Goldstein (ed.)The Sage Encyclopedia of Perception.

On the illusion of the "invisible edit" and how it relates to natural attentional shifts when watching film, see: Smith, T.J. and Henderson, J.M. (2008). Edit Blindness: The relationship between attention and global change blindness in dynamic scenes. Journal of Eye Movement Research, 2(2):6, 1-17.

Finally, if you want to see more of the DIEM eye movement videos, new videos as they are created and download the analysis software (i.e. CARPE) go to the DIEM project page and subscribe to our Vimeo channel. As a taster, here is a showreel from the DIEM videos. Enjoy!