
The difficulty those with prosopagnosia have in identifying ordinary, upright faces is comparable to that of trying to recognize upside-down faces.
The Four Eyes Lab conducts similar studies of facial recognition and of tracking individuals' motion and mannerisms. Matthew Turk, a professor in the Computer Science Department, is involved in a number of Four Eyes projects on human-computer interaction and gesture recognition. The first project, titled “4EyesFace – Detecting, Tracking and Aligning Faces in Real-Time,” aims to track faces in video and align the facial expressions to models. The method employs Active Wavelet Networks (AWN), an improvement on Active Appearance Models (AAM), which combine shape and texture models. AWN uses wavelets to account for occlusions and variations in illumination, both of which were problems for the earlier AAM system.
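A toy illustration, not the lab's actual method, of one intuition behind wavelet-based features: the detail coefficients of a Haar wavelet transform respond to local contrast rather than absolute brightness, so a uniform illumination shift leaves them unchanged. The intensity values below are made up for demonstration.

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / 2.0
    detail = (s[0::2] - s[1::2]) / 2.0
    return approx, detail

# A made-up intensity profile across one row of a face image,
# and the same row under a uniform illumination offset.
row = np.array([10.0, 12.0, 50.0, 52.0, 30.0, 28.0, 15.0, 17.0])
brighter = row + 40.0  # global brightness change

_, detail_original = haar_step(row)
_, detail_brighter = haar_step(brighter)

# The additive offset cancels in the pairwise differences, so the
# detail coefficients are identical for both rows.
assert np.allclose(detail_original, detail_brighter)
```

This is only the crudest version of the idea; AWN uses wavelet networks fitted to the face region, but the same insensitivity to global brightness is part of why wavelets help under varying illumination.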

Since this project has already been discussed by another student during our Computational Photography assignment, it will not be my primary focus, but I include it here to round out the survey of facial recognition work.
Another project by Turk and Ya Chang, titled “Probabilistic Expression Recognition on Manifolds,” builds a probabilistic video-to-video facial expression recognition system based on the manifold of facial expressions. The system learns to group together images of different facial expressions from its collection of varied expressions. The manifold approach relies on the idea that images of an individual's face transitioning from one expression to the next vary smoothly over a high-dimensional space: from frame to frame, the face retains its primary features while making subtle adjustments that can be tracked. Active Wavelet Networks are again applied to account for variation in scale, illumination conditions, and face pose.
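The smoothness assumption can be illustrated with synthetic data (this is my own sketch, not the paper's experiment): if each "frame" is a point in high-dimensional pixel space that drifts only slightly from the previous one, consecutive frames stay far closer together than arbitrary frame pairs, which is exactly the structure manifold methods exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a video of a face changing expression: each frame
# is a high-dimensional point that drifts slightly from the last one.
n_frames, dim = 50, 1024
frames = np.cumsum(rng.normal(scale=0.05, size=(n_frames, dim)), axis=0)

# Mean distance between consecutive frames...
consecutive = np.linalg.norm(np.diff(frames, axis=0), axis=1).mean()

# ...versus mean distance between randomly chosen frame pairs.
i, j = rng.integers(0, n_frames, size=(2, 200))
random_pairs = np.linalg.norm(frames[i] - frames[j], axis=1).mean()

# Neighboring frames are much closer than arbitrary pairs, so the
# sequence traces a smooth path through the high-dimensional space.
assert consecutive < random_pairs
```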

Locally linear embedding (LLE), a method for visualizing the structure of facial manifolds, is used to create a continuous spectrum of images on specific charts (called manifold visualizations) and to represent image sequences as “paths” through the embedding. This ultimately allows the researchers to build a probabilistic model with which to estimate the likelihood of one expression following another, as demonstrated by the six basic paths. For example, the system can estimate the likelihood of an expression of anger transitioning either to disgust (high likelihood) or to happiness (low likelihood).
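The transition idea can be sketched as a simple Markov model over the six basic expressions. The probabilities below are illustrative values I invented to match the anger-to-disgust versus anger-to-happiness example; they are not taken from the paper.

```python
import numpy as np

# Six basic expressions and a hypothetical transition matrix:
# T[a, b] = probability that expression a is followed by expression b.
# All values are made up for illustration; each row sums to 1.
expressions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
T = np.array([
    # anger  disgust fear  happy  sad   surprise
    [0.40,   0.35,  0.10,  0.02,  0.08, 0.05],  # from anger
    [0.30,   0.45,  0.10,  0.02,  0.08, 0.05],  # from disgust
    [0.10,   0.10,  0.40,  0.05,  0.15, 0.20],  # from fear
    [0.02,   0.03,  0.05,  0.60,  0.10, 0.20],  # from happiness
    [0.10,   0.10,  0.10,  0.10,  0.50, 0.10],  # from sadness
    [0.05,   0.05,  0.20,  0.25,  0.10, 0.35],  # from surprise
])

def transition_prob(src, dst):
    """Look up the probability of moving from expression src to dst."""
    return T[expressions.index(src), expressions.index(dst)]

# Anger is far more likely to lead to disgust than to happiness,
# mirroring the example from the paper's six basic paths.
assert transition_prob("anger", "disgust") > transition_prob("anger", "happiness")
```

In the actual system these probabilities would be learned from the paths that expression sequences trace on the manifold, rather than written down by hand.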

By combining the work done by the psychology lab with the technological advances of the Four Eyes Lab, I hope to propose a more comprehensive and relatable art installation that addresses both the cognitive and visual aspects of how humans interact, possibly delving further into the problem of face blindness and suggesting a “solution” through my final visual representation.
Sources:
Face Blindness by UCSB Alum Brad Duchaine:
http://www.cbsnews.com/8301-18560_162-5 ... -stranger/
Four Eyes Lab:
http://ilab.cs.ucsb.edu/index.php/compo ... icle/12/34
http://ilab.cs.ucsb.edu/index.php/compo ... icle/12/45