Sunday, July 24, 2011

Hollywood calling?

A couple of months ago, I got an unexpected email with the subject “Would you like to speak at DreamWorks Animation?”. Naturally, I initially considered it a scam and replied cautiously. Their response left no doubt that it was indeed Hollywood calling.

The idea that a cognitive psychologist from London might be of interest to a Hollywood animation studio may seem odd. What the filmmakers were interested in was my recent work on how we attend to and perceive the real world.

My research usually involves showing volunteers simple patterns or photographs on a computer screen and recording how they move their eyes when trying to make sense of the image. This gives me insight into how someone uses their eyes to sample the bits of the visual world they are interested in, stitch the details together and store them in memory. In the last few years I have begun applying the same methods to investigate how we watch film, after realising that we use the same cognitive processes to watch film as we use to look at the real world.

Films present an artificial world across a series of camera shots edited together so that only the bits important for the narrative are presented. When Shrek leaves his swamp to find Princess Fiona, for example, we only need to see him leave his house and then arrive at the fairytale castle to work out the journey that must have happened in between. Films can create fantastical events and spaces that we can comprehend without any effort, as long as the film is edited correctly. But what distinguishes a good edit from a bad one? In my research I have been trying to understand the psychology of film viewing to answer this question.

Like all Hollywood film studios, DreamWorks Animation wants to make its films as enjoyable and as effortless to watch as possible. Confusing films don't lead to big box office receipts, especially not when the film is intended for children.

At every moment during a film, the director needs to know exactly where the viewer is looking, how they are understanding the story and what they are feeling. This is no trivial task. If an edit occurs at the “wrong” time during an action sequence or cuts to the “wrong” camera position, the viewer can become disorientated and confused. In film terms, the cut is said to create a “discontinuity”. Over the 116 years cinema has existed, a suite of heuristics (rules of thumb) has evolved that filmmakers can use to help them avoid bad edits. These rules of continuity editing suggest, for example, that when filming a scene with two actors in conversation, all shots of the action should be filmed from the same side of an imaginary line connecting the two actors. This 180 degree rule (so named because the cameras map out a 180 degree arc around the actors) ensures that the actors don't suddenly reverse direction on the screen across a cut and appear to be facing away from each other. Virtually all film and TV is constructed according to these rules. Watch a scene from any TV show and you will see how the cameras always stay on one side of the action.

While filmmakers the world over believe the continuity rules work, nobody understands why they work. DreamWorks Animation wanted to know whether my experiments in film viewing could shed any light on this question.

By recording the eye movements of viewers as they watch film sequences, I have been able to see which cinematic techniques succeed in guiding viewers to the point of interest in a scene and whether a cut leads to disorientation. For example, if the action of a scene is easy to follow, all viewers will watch the scene in the same way, leading to a clustering of their gaze locations on the screen.
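To make the idea of gaze clustering concrete, here is a minimal sketch of how the spread of gaze across viewers could be quantified for a single video frame. It assumes gaze samples have already been exported as (x, y) pixel coordinates, one per viewer per frame; the function and variable names are illustrative, not the actual analysis pipeline used in my experiments.

import numpy as np

def gaze_dispersion(gaze_xy):
    """Mean pairwise distance (in pixels) between viewers' gaze points
    for one frame. Lower values mean the viewers' gaze is more tightly
    clustered on the same part of the screen."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)      # shape: (n_viewers, 2)
    diffs = gaze_xy[:, None, :] - gaze_xy[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))      # (n_viewers, n_viewers)
    n = len(gaze_xy)
    return dists.sum() / (n * (n - 1))              # average over off-diagonal pairs

# Illustrative use: 16 viewers watching a 1280x720 clip.
# Gaze tightly clustered around a character near the centre of the frame...
clustered = np.random.normal(loc=[640, 360], scale=30, size=(16, 2))
# ...versus gaze scattered across the whole frame.
scattered = np.random.rand(16, 2) * [1280, 720]
print(gaze_dispersion(clustered))   # small value
print(gaze_dispersion(scattered))   # much larger value

Tracking this number frame by frame gives a simple index of how strongly a scene is steering everyone's attention to the same place.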


Puss In Boots teaser trailer with the gaze locations of 16 viewers, from Tim J. Smith on Vimeo.

To get a sense of this gaze clustering, watch this trailer for DreamWorks Animation's upcoming Puss In Boots (http://vimeo.com/25033301). The gaze location of each of the 16 viewers is represented as a dot, with a hotspot overlaid onto the video. As the gaze of multiple people clusters together, the colours become hotter. Notice how gaze is clustered on Puss throughout the clip without taking in much of the background. This clip also uses the continuity editing rules to ensure that viewers shift their attention seamlessly across a cut. When Puss tosses his hat off the screen, the cut is made right after the hat starts flying. The next shot continues the hat's motion until it is caught by an enamoured admirer. Such a cut is referred to as a “match on action”. By using the sudden onset of motion to capture viewer attention and lead the eyes across the cut, the director ensures that the viewer perceives the two shots as being continuous. You can see this in the smooth shift in eye movements from one shot to the next.
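For readers curious how this kind of overlay is typically produced, here is a rough sketch of one common approach: accumulate each viewer's gaze point into a density map for the frame and smooth it with a Gaussian kernel, so that regions where many viewers look become "hot". The frame size, kernel width and function name are assumptions for illustration, not the exact tool used to render the trailer above.

import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(gaze_xy, frame_hw=(720, 1280), sigma=40):
    """Smoothed gaze-density map for one video frame.

    gaze_xy:  iterable of (x, y) gaze coordinates in pixels, one per viewer.
    frame_hw: (height, width) of the video frame.
    sigma:    Gaussian kernel width in pixels (roughly the extent of the
              fovea projected onto the screen).
    """
    heat = np.zeros(frame_hw)
    for x, y in gaze_xy:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < frame_hw[0] and 0 <= xi < frame_hw[1]:
            heat[yi, xi] += 1.0          # one "unit" of attention per viewer
    heat = gaussian_filter(heat, sigma=sigma)
    return heat / heat.max() if heat.max() > 0 else heat

Alpha-blending this map over the video frame with a hot colourmap (for example matplotlib's imshow with cmap="hot" and alpha around 0.5) reproduces the hotter-where-gaze-clusters effect seen in the trailer.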

At DreamWorks, I was struck by how intimately they engaged with the issues I was presenting. Their day-to-day concerns are with the minutiae of film: the nuanced animation of a facial expression, the placement of characters across a cut, the correct lighting to pick out the main character. To make these decisions, though, they have to imagine themselves as their eventual viewer and, until now, they have had no way of knowing what was going on in the minds of those viewers. I hope that by combining some of the methods and theories of cognitive psychology with their own insights about film, they will get a clearer view into the viewer's mind. I believe that studying the psychology of film will help filmmakers continue to improve upon the kinds of unique, exciting, and moving experiences that enraptured me when I was a kid and continue to fascinate us all today.

Monday, July 18, 2011

PostDoc in Neurocinematics

http://www.aalto.fi/en/current/jobs/teaching_and_research/postdoctoral_researcher_and_doctoral_student-school_of_art_and_design-neurocine/

Aalto University School of Art and Design, Department of Motion Picture, Television and Production Design is looking for two team members to conduct research in neurocinematics:

A post-doctoral researcher (Cognitive neuroscience) AND a doctoral student (Cinema studies)

1) POST DOC EXPERIENCED IN COGNITIVE NEUROSCIENCE AND fMRI: The applicant will work together with the PI, the doctoral student, and associated neuroscientists at aivoAALTO. She or he is expected to independently collect and analyze functional magnetic resonance imaging (fMRI) data. Experience in fMRI is required, and experience in magnetoencephalography (MEG), electroencephalography (EEG) and physiological measures (e.g. eye tracking) is appreciated. A degree in neuroscience, medical sciences, engineering, mathematics, physics, or equivalent is requested. Emphasis is placed on scientific writing skills, and applicants are expected to present a selection of first-author publications.

2) DOCTORAL STUDENT IN FILM STUDIES: The doctoral student must have a Master’s degree in film or media studies, or equivalent, with an explicit interest in psychology and cognitive neuroscience, and must possess a postgraduate study place at a university. The PI, Doctor of Arts Pia Tikka, will supervise the thesis work in neurocinematics in collaboration with neuroscientists at aivoAALTO.

For both positions, applicants' research potential and co-operation skills will be given particular emphasis during the selection process. Application materials should include a cover letter, a curriculum vitae, a complete list of publications, and a one-page description of future research interests related to neurocinematics. In addition, reprints of publications (max 2) and reference letters (max 2) are appreciated.

Both positions are available for two years, starting on 1 September 2011. The salary is determined by the salary system of Aalto University.

Applications are to be submitted to the Registry of Aalto University, preferably as a single PDF file by email, no later than August 16, 2011. The right to extend the search or not to fill a position is reserved.

Note: the PDF file name should contain the applicant's last name, in the form “NeuroCine_LASTNAME_otherinfo”. In addition, the email should be titled “Application: NeuroCine”.

The email address of the registry is rekry-taik@aalto.fi. Applications can also be sent by post to: Aalto University School of Art and Design, Registry, P.O. Box 31000, FI‐00076 Aalto, Finland (visiting address Hämeentie 135 C, 00560 Helsinki). The registry closes at 3.00 p.m. The application documents will not be returned.

Please feel free to direct further questions to:

Pia Tikka, Ph.D.
aivoAALTO research project
Aalto University, Helsinki, Finland
- Dept. of Motion Picture, Television and Production Design
US online phone + 1 213 785 7048
Finland mobile phone +358 50 347 7432
Skype: piatikka
e-mail: pia.tikka@aalto.fi

Due to the summer holidays in July and August, phone or Skype inquiries are only taken on Fridays between 12 and 4 pm Finnish time (GMT+2).

Neural signature of the Uncanny Valley?

An fMRI study by a group from UC San Diego led by Ayse Pinar Saygin has investigated the phenomenon known as "The Uncanny Valley": that eerie feeling you get when watching a robot or CG animation that is attempting to be photorealistic but fails. We have all experienced this phenomenon when watching films such as Final Fantasy, or Robert Zemeckis' performance-capture pieces The Polar Express, Beowulf, A Christmas Carol and the recent film that sounded the death knell for Zemeckis' studio, Mars Needs Moms. Given the popularity of performance capture and even facial expression capture in films like Avatar and the upcoming Tintin, there is a lot of interest in how to present naturalistic motion-capture animation without wandering into the Uncanny Valley. The results of this new study suggest that the discomfort comes from the perceptual mismatch between authentic human motion and an inadequate appearance. We are highly sensitive to human appearance and motion and have more brain areas dedicated to processing these features than to any other visual category. Motion capture gives us the ability to trick the brain into seeing human motion even in the absence of the corresponding appearance. This is clearly demonstrated by point-light walkers (check out this fun interactive demo: http://www.biomotionlab.ca/Demos/BMLwalker.html). However, when authentic biological motion is combined with an appearance that doesn't match the authenticity of the motion, the result is cognitive dissonance. This new study shows the brain's response in such a situation.

I am very intrigued to see how Steven Spielberg and Peter Jackson deal with this issue in the upcoming Tintin movies. In the first teaser trailer the filmmakers avoided showing faces, perhaps to spare the audience a negative response to seeing an Uncanny Tintin. In the first full trailer (below) the faces walk an interesting line between naturalism and cartoon, very accurately capturing the character of the original Hergé cartoons. Perhaps Spielberg has dodged the bullet by keeping motion and appearance far enough from photorealism to avoid cognitive dissonance. We'll have to wait until Christmas to find out.




From the press release:

"[Image: Brain response as measured by fMRI to videos of a robot, android and human]

Your Brain on Androids

July 14, 2011
By Inga Kiderra

Ever get the heebie-jeebies at a wax museum? Feel uneasy with an anthropomorphic robot? What about playing a video game or watching an animated movie, where the human characters are pretty realistic but just not quite right and maybe a bit creepy? If yes, then you’ve probably been a visitor to what’s called the “uncanny valley.”

The phenomenon has been described anecdotally for years, but how and why this happens is still a subject of debate in robotics, computer graphics and neuroscience. Now an international team of researchers, led by Ayse Pinar Saygin of the University of California, San Diego, has taken a peek inside the brains of people viewing videos of an uncanny android (compared to videos of a human and a robot-looking robot).

Published in the Oxford University Press journal Social Cognitive and Affective Neuroscience, the functional MRI study suggests that what may be going on is due to a perceptual mismatch between appearance and motion.

The term “uncanny valley” refers to an artificial agent’s drop in likeability when it becomes too humanlike. People respond positively to an agent that shares some characteristics with humans – think dolls, cartoon animals, R2D2. As the agent becomes more human-like, it becomes more likeable. But at some point that upward trajectory stops and instead the agent is perceived as strange and disconcerting. Many viewers, for example, find the characters in the animated film “Polar Express” to be off-putting. And most modern androids, including the Japanese Repliee Q2 used in the study here, are also thought to fall into the uncanny valley.

Saygin and her colleagues set out to discover if what they call the “action perception system” in the human brain is tuned more to human appearance or human motion, with the general goal, they write, “of identifying the functional properties of brain systems that allow us to understand others’ body movements and actions.”

They tested 20 subjects aged 20 to 36 who had no experience working with robots and hadn’t spent time in Japan, where there’s potentially more cultural exposure to and acceptance of androids, or even had friends or family from Japan.

The subjects were shown 12 videos of Repliee Q2 performing such ordinary actions as waving, nodding, taking a drink of water and picking up a piece of paper from a table. They were also shown videos of the same actions performed by the human on whom the android was modeled and by a stripped version of the android – skinned to its underlying metal joints and wiring, revealing its mechanics until it could no longer be mistaken for a human. That is, they set up three conditions: a human with biological appearance and movement; a robot with mechanical appearance and mechanical motion; and a human-seeming agent with the exact same mechanical movement as the robot.

At the start of the experiment, the subjects were shown each of the videos outside the fMRI scanner and were informed about which was a robot and which human.

The biggest difference in brain response the researchers noticed was during the android condition – in the parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain's visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons (neurons also known as “monkey-see, monkey-do neurons” or “empathy neurons”).

According to their interpretation of the fMRI results, the researchers say they saw, in essence, evidence of mismatch. The brain “lit up” when the human-like appearance of the android and its robotic motion “didn’t compute.”

“The brain doesn’t seem tuned to care about either biological appearance or biological motion per se,” said Saygin, an assistant professor of cognitive science at UC San Diego and alumna of the same department. “What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent.”

In other words, if it looks human and moves like a human, we are OK with that. If it looks like a robot and acts like a robot, we are OK with that, too; our brains have no difficulty processing the information. The trouble arises when – contrary to a lifetime of expectations – appearance and motion are at odds.

“As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners,” the researchers write. “Or perhaps, we will decide it is not a good idea to make them so closely in our image after all.”

Saygin thinks it’s “not so crazy to suggest we brain-test-drive robots or animated characters before spending millions of dollars on their development.”

It’s not too practical, though, to do these test-drives in expensive and hard-to-come-by fMRI scanners. So Saygin and her students are currently on the hunt for an analogous EEG signal. EEG technology is cheap enough that the electrode caps are being developed for home use.

The research was funded by the Kavli Institute for Brain and Mind at UC San Diego. Saygin was additionally supported by the California Institute of Telecommunication and Information Technology (Calit2) at UCSD.

Saygin’s coauthors are Thierry Chaminade of Mediterranean Institute for Cognitive Neuroscience, France; Hiroshi Ishiguro of Osaka University and ATR, Japan; Jon Driver of University College London; and Chris Frith of University of Aarhus, Denmark.

Media Contact: Inga Kiderra, 858-822-0661, or ikiderra@ucsd.edu"

Wednesday, July 06, 2011

What is the point of Art? A view from neuroscience

In conjunction with Jericho House (http://www.jerichohouse.org.uk/), there will be a talk by Prof Colin Blakemore, FRS:

What's the Point of Art? A view from neuroscience.

Time: 7-8pm

Date: 20th July 2011

Location: Lecture theatre, 33 Queen Square

There will be a discussion after the talk, led by Prof Geraint Rees.

Prof Sophie Scott will chair the event.

The talk is open to all. Please register by sending an email to sophie.scott@ucl.ac.uk

What's The Point of Art?

In the aftermath of the recent cuts, arguments about arts funding are becoming increasingly heated, yet crucial discussions as to the value and place of art in our world are conspicuous by their absence. As wider society experiences the kind of structural economic change unseen in the UK for sixty years, the time has never been riper for a serious investigation of the role of art in our lives, and of its relationship with the individual, the state and the market.

For 2011, in collaboration with University College London, we are developing a sequence of six events on the theme ‘what is the point of art?’, each featuring a single speaker, to take place in London on dates throughout the year. The aim is to have accumulated by the end a compelling portfolio of perspectives on the value of art in our society.

Tuesday, July 05, 2011

The Illusion of Continuity, Berliner and Cohen (2011)

How we perceive spatiotemporal continuity across edited sequences of film is clearly a major interest of mine (hence the blog name!). Several theorists have written about this topic, going all the way back to Munsterberg (1916), through Hochberg & Brooks in the 70s and early 80s, to the pioneering change blindness studies of Levin & Simons in the 90s. In my thesis I discussed in depth the paradox of perceiving continuous visual scenes across visually discontinuous shots. For instance, why is a series of shots all filmed from the same side of an action, such as a conversation, perceived as spatially continuous, whereas a shot that crosses to the other side of the action leads to spatial confusion? This filming and editing convention is known as the 180 Degree Rule. You can see clear demonstrations of it in this video:





Now a recent paper on the perception of continuity in film by Berliner and Cohen (2011) wonderfully brings together psychological evidence and film theory to provide an accessible and insightful overview of the topic. I highly recommend this article if you are looking for a quick primer on continuity perception.

Berliner and Cohen (2011) clearly outline the psychological motivation for the continuity editing rules but also acknowledge the role of filmmakers' intuition and creativity in choosing the right techniques to convey their particular story.

"Continuity conventions have remained relatively stable for about ninety years. The primary reason for their stability is not, as some scholars think, Hollywood’s marketing dominance or other externalities but rather that the early filmmakers who first developed the conventions were guided by their intuitive understanding of space perception and the reactions of cinema spectators. Just as expert pool players learn— not through direct study but intuitively, through trial and error—the principles of Newtonian physics that govern pool playing, as well as matter and energy generally, the filmmakers in the early twentieth century who first developed the conventions of the classical editing system, without directly studying psychology, discovered the structure of human perception."

Berliner, T. & Cohen, D. J. (2011). The Illusion of Continuity: Active Perception and the Classical Editing System. Journal of Film and Video, 63(1), 44-63.