Psychology Labs and Resources


Department of Psychology

Human Face Databases


Attribution

This page has been adapted from the Princeton University Library guide Face Image Databases by Meghan Testerman, which is licensed under a Creative Commons Attribution 4.0 International License.

Use of Face Stimulus Databases

The following is an alphabetical directory of databases containing face stimulus sets available for use in behavioural research studies. Please read the rights, permissions, and licensing information on each database's webpage before proceeding with use. Make sure to obtain any required permissions and to credit/cite the creators as requested.

 
 

10k US Adult Faces Database

Link

Description

This database contains 10,168 natural face photographs and several measures for 2,222 of the faces, including memorability scores, computer vision and psychology attributes, and landmark point annotations. The face photographs are JPEGs with 72 pixels/in resolution and 256-pixel height.

Reference

Bainbridge, W. A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face photographs. Journal of Experimental Psychology: General, 142(4), 1323-1334.

Contact

Wilma Bainbridge: brainbridgelab@gmail.com

Attribution

Permission is required for access via an online form. Always use the citation above.

Fig 1. Exemplar face images from the 10k US Adult Faces Database
 

American Multiracial Face Database

Link

Description

This database contains 110 faces of individuals with mixed-race heritage (smiling and neutral expression poses), together with ratings of those faces by naive observers; the materials are freely available to academic researchers. The faces were rated on attractiveness, emotional expression, racial ambiguity, masculinity, racial group membership(s), gender group membership(s), warmth, competence, dominance, and trustworthiness.

Reference

Chen, J. M., Norman, J. B., & Nam, Y. (2020). Broadening the stimulus set: Introducing the American Multiracial Faces Database. Behavior Research Methods. https://doi.org/10.3758/s13428-020-01447-8

Contact

https://jacquelinemchen.wixsite.com/sciplab/face-database

Attribution

The AMFD is free to use for academic research. It is subject to a Creative Commons Attribution 4.0 International Public License.

Fig 2. Exemplar face images from the American Multiracial Face Database
 

Amsterdam Dynamic Facial Expression Set (ADFES)

Link

Description

ADFES contains filmed emotional expressions from 22 Northern-European and Mediterranean models (10 female/12 male). The set features displays of nine emotions: the six basic emotions (anger, disgust, fear, joy, sadness, and surprise), as well as contempt, pride and embarrassment.

Reference

Van der Schalk, J., Hawk, S. T., Fischer, A. H., & Doosje, B. J. (in press). Moving faces, looking places: The Amsterdam Dynamic Facial Expressions Set (ADFES). Emotion.

Contact

Agneta Fischer, a.h.fischer@uva.nl

Attribution

CC BY Attribution 4.0 International. Request permission.

Fig 3. Exemplar faces from the ADFES
 

AT&T Databases of Faces

Link

Description

This database contains a set of face images taken between April 1992 and April 1994. There are ten different images of each of 40 distinct individuals. For some individuals, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).

Reference

Samaria, F. S. (1994). Face recognition using hidden Markov models (Doctoral dissertation, University of Cambridge).

Contact

AT&T Laboratories Cambridge

Attribution

Fig 4. Exemplars from the AT&T Database
 

Basel Face Database (BFD)

Link

Description

The Basel Face Database is built upon portrait photographs of forty different individuals. All these photographs have been manipulated to appear more or less agentic and communal (Big Two personality dimensions) as well as open to experience, conscientious, extraverted, agreeable, and neurotic (Big Five personality dimensions). Thus, the database consists of forty photographs of different individuals and 14 variations of each, signalling different personalities. The database therefore allows researchers to investigate the impact of personality on different outcome variables in a systematic way.

Reference

Walker, M., Schönborn, S., Greifeneder, R., & Vetter, T. (2018). The Basel Face Database: A validated set of photographs reflecting systematic differences in Big Two and Big Five personality dimensions. PloS one, 13(3). doi: https://doi.org/10.1371/journal.pone.0193190

Contact

Mirella Walker

Attribution

Request permission for scientific use.

Fig 5. Exemplars from the Basel Face Database
 

Bogazici Face Database

Link

Description

The Bogazici Face Database contains images of Turkish undergraduate student targets. High-resolution standardized photographs were taken and supported by the following materials: (a) basic demographic and appearance-related information, (b) two types of landmark configurations (for Webmorph and geometric morphometrics (GM)), (c) facial width-to-height ratio (fWHR) measurement, (d) information on photography parameters, (e) perceptual norms provided by raters.
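For readers unfamiliar with the measure, facial width-to-height ratio (fWHR) is commonly computed as bizygomatic (cheekbone-to-cheekbone) width divided by upper-face height (roughly, the distance from the upper lip to the mid-brow). The snippet below is only an illustrative sketch of that common convention; the variable names and example values are assumptions and do not reproduce the exact landmark definitions distributed with this database.

# Illustrative only: fWHR following the common convention
# (bizygomatic width / upper-face height), not the database's own template.
def fwhr(bizygomatic_width: float, upper_face_height: float) -> float:
    """Facial width-to-height ratio from two landmark-based distances."""
    return bizygomatic_width / upper_face_height

# Example with made-up pixel distances:
# fwhr(140.0, 78.0)  # ~1.79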

Reference

Saribay SA, Biten AF, Meral EO, Aldan P, Třebický V, Kleisner K (2018) The Bogazici face database: Standardized photographs of Turkish faces with supporting materials. PLoS ONE 13(2): e0192018. https://doi.org/10.1371/journal.pone.0192018

Contact

Attribution

Fig 6. Exemplars from the Bogazici Face Database
 

CalTech 10k Web Faces

Link

Description

The Caltech database contains images of people collected from the web by typing common given names into Google Image Search. The coordinates of the eyes, the nose and the center of the mouth for each frontal face are provided in a ground truth file. This information can be used to align and crop the human faces or as a ground truth for a face detection algorithm. The dataset has 10,524 human faces of various resolutions and in different settings, e.g. portrait images, groups of people, etc. Profile faces or very low-resolution faces are not labeled.
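As a rough illustration of how such eye coordinates can be used to align and crop a face, here is a minimal Python sketch (using Pillow). The file name, coordinate values, crop margin, and output size are illustrative assumptions; the actual ground-truth file format is documented on the database page.

import math
from PIL import Image  # pip install pillow

def align_and_crop(image_path, left_eye, right_eye, out_size=256, eye_frac=0.35):
    """Rotate the image so the eye line is horizontal, then crop a square
    centred between the eyes. left_eye/right_eye are (x, y) pixel tuples."""
    img = Image.open(image_path)

    # Angle of the line joining the eyes (y grows downwards in image coords).
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))

    # Mid-point between the eyes; rotate about it so it stays in place.
    cx = (left_eye[0] + right_eye[0]) / 2.0
    cy = (left_eye[1] + right_eye[1]) / 2.0
    img = img.rotate(angle, resample=Image.BILINEAR, center=(cx, cy))

    # Crop a square of side eye_dist / eye_frac, so the inter-eye distance
    # spans roughly eye_frac of the crop width.
    eye_dist = math.hypot(dx, dy)
    half = (eye_dist / eye_frac) / 2.0
    box = tuple(int(round(v)) for v in (cx - half, cy - half, cx + half, cy + half))
    return img.crop(box).resize((out_size, out_size), Image.BILINEAR)

# Example call with made-up coordinates (not from the real ground-truth file):
# face = align_and_crop("some_face.jpg", left_eye=(120, 150), right_eye=(180, 148))
# face.save("some_face_aligned.jpg")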

Reference

Angelova, A., Abu-Mostafa, Y., & Perona, P. (2005). Pruning training sets for learning of object categories. Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Contact

Anelia Angelova, anelia@caltech.edu

Attribution

Fig 7. Exemplars from the CalTech 10k Web Faces Database
 

Chicago Face Database

Link

Description

The CFD is intended for use in scientific research. It provides high-resolution, standardised photographs of male and female faces of varying ethnicity, aged between 17 and 65. Extensive norming data are available for each individual model. These data include both physical attributes (e.g., face size) and subjective ratings by independent judges (e.g., attractiveness). The database consists of a main image set and several extension sets.

The main CFD set consists of images of 597 unique individuals. They include self-identified Asian, Black, Latino, and White female and male models, recruited in the United States. All models are represented with neutral facial expressions. A subset of the models is also available with happy (open mouth), happy (closed mouth), angry, and fearful expressions.

CFD-MR

The CFD-MR extension set includes images of 88 unique individuals, who self-reported multiracial ancestry. All models were recruited in the United States. The images depict models with neutral facial expressions. Additional facial expression images with happy (open mouth), happy (closed mouth), angry, and fearful expressions are in production and will become available with a future update of the database.

CFD-INDIA

The CFD-INDIA extension set includes images of 142 unique individuals, recruited in Delhi, India. The images depict models with neutral facial expressions. Additional facial expression images with happy (open mouth), happy (closed mouth), angry, and fearful expressions are in production and will become available with a future update of the database.

References

Ma, D. S., Correll, J., & Wittenbrink, B. (2015). The Chicago face database: A free stimulus set of faces and norming data. Behavior research methods, 47(4), 1122-1135.

Wittenbrink, B. (2020). Chicago Face Database: Multiracial expansion. Behavior Research Methods. https://doi.org/10.3758/s13428-020-01482-5

Lakshmi, A., Wittenbrink, B., Correll, J., & Ma, D. S. (2021). The India Face Set: International and cultural boundaries impact face impressions and perceptions of category membership. Frontiers in Psychology, 12, 627678. https://doi.org/10.3389/fpsyg.2021.627678

Contact

Bernd Wittenbrink, bernd.wittenbrink@chicagobooth.edu

Attribution

The CFD and its expansion sets are a free resource for the scientific community. The database photographs and their accompanying information may be used free of charge for non-commercial scientific research purposes only. The database materials cannot be re-distributed or published without written consent from the copyright holder, the University of Chicago, Center for Decision Research.

Fig 8. Exemplars from the Chicago Database
 

Child Affective Facial Expression Set (CAFE)

Link

Description

The Child Affective Facial Expressions Set (CAFE) is the first large and representative set of children posing a variety of affective facial expressions that can be used for scientific research. The set is made up of nearly 1200 photographs of over 100 children (ages 2-8) making 7 different facial expressions - happy, angry, sad, fearful, surprise, neutral, and disgust.

Reference

LoBue, V. & Thrasher, C. (2015). The Child Affective Facial Expression (CAFE) Set: Validity and reliability from untrained adults. Frontiers in Emotion Science, 5.

Contact

Attribution

Apply for use https://nyu.databrary.org/volume/30

Fig 9. Exemplars from the CAFE Database
 

Children Spontaneous Facial Expression Video Database (LIRIS-CSE)

Link

Description

A novel emotional database that contains movie clips/dynamic images of 12 ethnically diverse children. This unique database contains spontaneous/natural facial expressions of children in diverse settings and recording scenarios, showing six universal or prototypic emotional expressions (happiness, sadness, anger, surprise, disgust and fear). Children were recorded in a constraint-free environment (no restrictions on head or hand movement, free sitting, no restrictions of any sort) while they watched specially built or selected stimuli. This constraint-free environment allowed spontaneous/natural expressions to be recorded as they occurred.

Reference

Khan, R. A., Crenn, A., Meyer, A., & Bouakaz, S. (2019). A novel database of Children's Spontaneous Facial Expressions (LIRIS-CSE). Image and Vision Computing, 83-84. arXiv preprint (2018), arXiv:1812.01555.

Contact

Request Form

Attribution

Fig 10. Exemplars from the LIRIS-CSE Video Database
 

City Infant Faces Database

Link

Description

This database contains 60 photographs of positive infant faces, 54 photographs of negative infant faces, and 40 photographs of neutral infant faces. The images have high criterion validity and good test–retest reliability.

Reference

Webb, R., Ayers, S. & Endress, A. The City Infant Faces Database: A validated set of infant facial expressions. Behav Res 50, 151–159 (2018). https://doi.org/10.3758/s13428-017-0859-9

Contact

Rebecca Webb

Attribution

Fig 11. Exemplars from the City Infant Faces Database
 

CMU Multi-PIE face database

Link

Description

CMU Multi-PIE face database contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months. Subjects were imaged under 15 viewpoints and 19 illumination conditions while displaying a range of facial expressions.

Reference

Sim, T., Baker, S., & Bsat, M. (2001). The CMU pose, illumination and expression database of human faces. Carnegie Mellon University Technical Report CMU-RI-TR-01-02.

Contact

Ralph Gross, ralph@multiple.org

Attribution

Fig 12. Exemplars from the CMU Database
 

Cohn-Kanade AU-Coded Facial Expression Database

Link

Description

The Cohn-Kanade AU-Coded Facial Expression Database affords a test bed for research in automatic facial image analysis and is available for use by the research community. Image data consist of approximately 500 image sequences from 100 subjects. Accompanying meta-data include annotation of FACS action units and emotion-specified expressions. Subjects range in age from 18 to 30 years. Sixty-five percent were female; 15 percent were African-American and three percent Asian or Latino.

Subjects were instructed by an experimenter to perform a series of 23 facial displays that included single action units (e.g., AU 12, or lip corners pulled obliquely) and action unit combinations (e.g., AU 1+2, or inner and outer brows raised). Each begins from a neutral or nearly neutral face. For each, an experimenter described and modeled the target display. Six were based on descriptions of prototypic emotions (i.e., joy, surprise, anger, fear, disgust, and sadness).

Reference

Kanade, T., Cohn, J. F., & Tian, Y. (2000, March). Comprehensive database for facial expression analysis. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580) (pp. 46-53). IEEE.

Contact

Takeo Kanade, kanade@andrew.cmu.edu

Attribution

Fig 13. Exemplars from the CK Database
 

Complex Emotion Expression Database (CEED)

Link

Description

The Complex Emotion Expression Database (CEED) is a digital stimulus set of 243 basic and 237 complex emotional facial expressions. The stimuli represent six basic expressions (angry, disgusted, fearful, happy, sad, and surprised) and nine complex expressions (affectionate, attracted, betrayed, brokenhearted, contemptuous, desirous, flirtatious, jealous, and lovesick) that were posed by Black and White formally trained, young adult actors.

Reference

Benda MS, Scherf KS (2020) The Complex Emotion Expression Database: A validated stimulus set of trained actors. PLoS ONE 15(2): e0228248. https://doi.org/10.1371/journal.pone.0228248

Contact

Attribution

Fig 14. Exemplars from the CEED Database
 

CVL Database

Link

Description

The Computer Vision Laboratory Face Database contains photographs of 114 persons approximately 18 years of age, 7 images per person.

Reference

Mirage 2003, Conference on Computer Vision / Computer Graphics Collaboration for Model-based Imaging, Rendering, Image Analysis and Graphical Special Effects, March 10-11, 2003, INRIA Rocquencourt, France, Wilfried Philips (Ed.), INRIA, 2003, pp. 38-47.

Contact

Peter Peer, peter.peer@fri.uni-lj.si (you will receive a licence agreement to sign)

Attribution

Fig 15. Exemplars from the CVL Database
 

Dartmouth Database of Children's Faces

Link

Description

The Dartmouth Database of Children's Faces contains images of 40 male and 40 female models between the ages of 6 and 16. Models are photographed on a black background and are wearing black bibs and black hats to cover hair and ears. They are photographed from 5 different camera angles and pose 8 different facial expressions. Models were rated by independent raters and are ranked for the overall believability of their poses.

Reference

Dalrymple, K. A., Gomez, J., & Duchaine, B. (2013). The Dartmouth Database of Children’s Faces: Acquisition and validation of a new face stimulus set. PloS one, 8(11), e79131.

Contact

Kristen Dalrymple, kad@umn.edu

Attribution

Fig 16. Exemplars from the Dartmouth Database
 

Face Database

Link

Description

The Face Database consists of 575 individual faces ranging from ages 18 to 93. Our database was developed to be more representative of age groups across the lifespan, with a special emphasis on recruiting older adults. The resulting database has faces of 218 adults age 18-29, 76 adults age 30-49, 123 adults age 50-69, and 158 adults age 70 and older.

Reference

Minear, M., & Park, D. C. (2004). A lifespan database of adult facial stimuli. Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc, 36(4), 630–633. https://doi.org/10.3758/bf03206543

Contact

parklab@utdallas.edu

Attribution

Faces may not be used for media events under any circumstances. If you publish a manuscript in a scientific journal that used the faces, please use the citation above.

 
 

Face Image Meta-Database (fIMDb) & ChatLab Facial Anomaly Database

Link

Description

fIMDb

Until now, no index of face databases, their features, and how to access them has been available. The "Face Image Meta-Database" (fIMDb) provides researchers with the tools to find the face images best suited to their research. The fIMDb is available from: https://cliffordworkman.com/resources/

ChatLab Facial Anomaly Database

The CFAD was developed to facilitate research on biases towards individuals with facial anomalies. The database allows searching by age, sex, ethnicity, pose, and type/etiology of anomaly. Results include the original stimuli, as well as images at various stages of pre-processing, e.g., normalized to interpupillary distance.

Reference

Workman, C. I., & Chatterjee, A. (2020, June 24). The Face Image Meta-Database (fIMDb) & ChatLab Facial Anomaly Database (CFAD): Tools for research on face perception and social stigma. https://doi.org/10.1016/j.metip.2021.100063

Contact

Attribution

If you are planning to publish research that used the CFAD stimuli, please cite the reference above.

Fig 17. Exemplar from the CFAD Database
 

Face Place(s)

Link

Description

This dataset includes multiple photographs for over 200 individuals of many different races, with consistent lighting, multiple views, real emotions, and disguises (some participants returned for a second session several weeks later with a haircut, a new beard, etc.). The images are in JPEG format, 250x250 pixels, 72 dpi, 24-bit color.

Reference

Righi, G, Peissig, JJ, & Tarr, MJ (2012) Recognizing disguised faces. Visual Cognition, 20(2), 143-169. doi:10.1080/13506285.2012.654624

Contact

Tarr Lab, Carnegie Mellon University, tarrlab@gmail.com

Attribution

If you use any of these images in publicly available work - talks, papers, etc. - you must acknowledge their source and adhere to the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. You must also include the following: Face images courtesy of Michael J. Tarr, Carnegie Mellon University, http://www.tarrlab.org/. Funding provided by NSF award 0339122.

Fig 18. Sample faces from the Face Place dataset
 

Face Recognition Technology (FERET)

Link

Description

FERET

The FERET database was collected in 15 sessions between August 1993 and July 1996. The database contains 1564 sets of images for a total of 14,126 images that includes 1199 individuals and 365 duplicate sets of images. A duplicate set is a second set of images of a person already in the database and was usually taken on a different day.

Color FERET

As part of the FERET program, a database of facial imagery was collected between December 1993 and August 1996. The database is used to develop, test, and evaluate face recognition algorithms.

Reference

Phillips, P. J., Martin, A., Wilson, C. L., & Przybocki, M. (2000). An introduction to evaluating biometric systems. Computer, 33(2), 56-63.

Contact

P. Jonathon Phillips, jonathon.phillips@nist.gov

Attribution

Fig 19. Exemplars from the colour FERET Database
 

Face Research Lab - London Set

Link

Description

The London Set contains images of 102 adult faces, 1350x1350 pixels, in full color.

Reference

DeBruine, Lisa; Jones, Benedict (2017): Face Research Lab London Set. figshare. Dataset. https://doi.org/10.6084/m9.figshare.5047666.v5

Contact

Attribution

Fig 20. Exemplars from the London Set Database
 

Face Research Toolkit (FaReT)

Link

Description

A free and open-source toolkit of three-dimensional models and software to study face perception. Contains 8 manipulatable facial expression models.

Reference

Hays, J. S., Wong, C., & Soto, F. (2020). FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behavior Research Methods, 52(6), 2604-2622.

Contact

Fabian Soto, Florida International University

Attribution

Fig 21. Exemplar from the FaReT Database
 

FACES

Link

FACES

FACES is a set of images of naturalistic faces of 171 young (n = 58), middle-aged (n = 56), and older (n = 57) women and men displaying each of six facial expressions: neutrality, sadness, disgust, fear, anger, and happiness. The database comprises two sets of pictures per person and per facial expression (a vs. b set), resulting in a total of 2,052 images.

Dynamic FACES

Dynamic FACES is an extension of the original FACES database. It is a database of morphed videos (n = 1,026) of young, middle-aged, and older adults displaying six naturalistic emotional facial expressions including neutrality, sadness, disgust, fear, anger, and happiness. Static images used for morphing came from the original FACES database. Videos were created by transitioning from a static neutral image to a target emotion. Videos are available in 384 x 480 pixels as .mp4 files or in the original size of 1280 x 1600 as .mov files.

Scrambled FACES

All 2,052 images from the original FACES database were scrambled using MATLAB. With the randblock function, original FACES files were treated as 800x1000x3 matrices – the third dimension denoting specific RGB values – and partitioned into non-overlapping 2x2x3 blocks. The matrices were then randomly shuffled by these smaller blocks, providing final images that matched the dimensions of the original image and were composed of the same individual pixels, although arranged differently. All scrambled images are 800x1000 jpeg files (96 dpi).
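For readers who want to approximate this procedure without MATLAB's randblock, the following is a minimal NumPy sketch of the same idea (shuffling non-overlapping blocks of an H x W x C image). It is an illustrative re-implementation under the assumption that the image height and width divide evenly by the block size, not the authors' original code.

# Illustrative NumPy approximation of the block scrambling described above
# (not the original MATLAB randblock code).
import numpy as np

def scramble_blocks(img, block=2, seed=None):
    """Shuffle non-overlapping block x block x channels tiles of an H x W x C array."""
    rng = np.random.default_rng(seed)
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0, "image size must be divisible by block"

    # Cut the image into (h//block * w//block) tiles of shape (block, block, c).
    tiles = (img.reshape(h // block, block, w // block, block, c)
                .transpose(0, 2, 1, 3, 4)
                .reshape(-1, block, block, c))

    rng.shuffle(tiles)  # shuffle the tiles along the first axis, in place

    # Reassemble the shuffled tiles into an image of the original shape.
    return (tiles.reshape(h // block, w // block, block, block, c)
                 .transpose(0, 2, 1, 3, 4)
                 .reshape(h, w, c))

# Example: scramble an 800 x 1000 RGB image into 2 x 2 x 3 blocks.
# scrambled = scramble_blocks(np.asarray(some_image), block=2, seed=0)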

References

Ebner, N., Riediger, M., & Lindenberger, U. (2010). FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior Research Methods, 42, 351-362. doi:10.3758/BRM.42.1.351.

Holland, C. A. C., Ebner, N. C., Lin, T., & Samanez-Larkin, G. R. (2019). Emotion identification across adulthood using the Dynamic FACES database of emotional expressions in younger, middle aged, and older adults. Cognition and Emotion, 33, 245-257. doi:10.1080/02699931.2018.1445981.

Fig 22. Exemplar from the FACES Database
 

FaceScrub

Link

Description

A dataset with a total of 106,863 face images of 530 male and female celebrities, with about 200 images per person. As such, it is one of the largest public face databases.

Reference

H.-W. Ng, S. Winkler. A data-driven approach to cleaning large face datasets. Proc. IEEE International Conference on Image Processing (ICIP), Paris, France, Oct. 27-30, 2014.

Contact

Request Form

 
 

FAMED Face Database (Video)

Link

Description

The Faces and Motion Exeter Database (FAMED) is a video database of 32 male actors for use in psychological research. Each actor was filmed from two viewpoints (full-face and three-quarter) whilst they performed a series of facial motions including the telling of three jokes, a short conversation, six facial expressions (smiling, anger, fear, disgust, surprise and sadness) and rigid motion such as head rotation from left to right and up and down. The actors performed all actions three times; once with no headgear, once wearing a swimming cap to hide hair cues and once whilst wearing a wig.

Reference

Longmore, C. A., & Tree, J. J. (2013). Motion as a cue to face recognition: Evidence from congenital prosopagnosia. Neuropsychologia, 51, 864-875

Contact

Chris Longmore, chris.longmore@plymouth.ac.uk

Fig 23. Exemplar from the FAMED Database
 

FEI Face Database

Link

Description

The FEI Face Database is a Brazilian face database that contains a set of face images taken between June 2005 and March 2006 at the Artificial Intelligence Laboratory of FEI in São Bernardo do Campo, São Paulo, Brazil. There are 14 images for each of 200 individuals, a total of 2,800 images. All images are in colour and taken against a white homogeneous background in an upright frontal position, with profile rotation of up to about 180 degrees. Scale may vary by about 10%, and the original size of each image is 640x480 pixels. The faces are mainly those of students and staff at FEI, between 19 and 40 years old, with distinct appearance, hairstyle, and adornments. The numbers of male and female subjects are equal, at 100 each.

Reference

Contact

Carlos Eduardo Thomaz, cet@fei.edu.br

Fig 24. Exemplar from the FEI Face Database
 

FG-NET Database with Facial Expressions and Emotions

Link

Description

An image database containing face images showing a number of subjects performing the six different basic emotions defined by Ekman & Friesen. The database has been developed in an attempt to assist researchers who investigate the effects of different facial expressions.

Reference

Wallhoff, F., Schuller, B., Hawellek, M., & Rigoll, G. (2006). Efficient recognition of authentic dynamic facial expressions on the Feedtum database. In Proc. IEEE ICME (pp. 493-496). IEEE Computer Society.

Contact

Frank Wallhoff, frank.wallhoff@jade-hs.de.

Fig 25. Exemplar from the FEED Database
 

Glasgow Unfamiliar Face Database (GUFD)

Link

Description

This database contains three images of 303 identities (each taken using separate cameras), similarity data quantifying perceived similarity between any two identities and 20 images per identity that have been extracted from a video clip for the purpose of familiarisation.

Contact

Mike Burton, mike.burton@york.ac.uk

 
 

Japanese and Caucasian Faces (Emotion and Neutral)

Link

Description

Japanese and Caucasian Facial Expressions of Emotion - JACFEE

Consists of 56 color photographs of 56 different individuals who each illustrate one of the seven basic facial expressions of emotion.

Fee: $95

Fig 26. Exemplar from the JACFEE Database

JACNeuf

Consists of 56 color photographs of the subjects found in the JACFEE collection showing neutral facial expressions.

Fee: $95

Fig 27. Exemplar from the JACNeuf Database
 

Japanese Female Facial Expression (JAFFE) Dataset

Link

Description

Japanese Female Facial Expression (JAFFE) Dataset contains 213 images of 10 Japanese female expressers.

Reference

Lyons, Michael, Kamachi, Miyuki, & Gyoba, Jiro. (1998). The Japanese Female Facial Expression (JAFFE) Dataset [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3451524

Contact

Michael Lyons

Attribution

The JAFFE images may be used for non-commercial scientific research under certain terms of use, which must be accepted to access the data. JAFFE cannot be provided for the following:

 
 

Karolinska Directed Emotional Faces (KDEF)

Link

Description

The Karolinska Directed Emotional Faces (KDEF) is a set of 4,900 pictures of human facial expressions of emotion. The set contains 70 individuals, each displaying 7 different emotional expressions, each expression being photographed (twice) from 5 different angles.

Reference

Goeleven, E., De Raedt, R., Leyman, L., & Verschuere, B. (2008). The Karolinska directed emotional faces: a validation study. Cognition and emotion, 22(6), 1094-1118.

Contact

Emotion Lab at Karolinska Institutet

Attribution

The KDEF stimuli may be used without charge for non-commercial research purposes only. All and any (re-)distribution and publishing without the written consent of the copyright holders is forbidden. Copyright holder is Karolinska Institutet, Department of Clinical Neuroscience, Section of Psychology, Stockholm, Sweden.

Fig 28. Exemplars from the KDEF Database
   
 

Labeled Faces in the Wild

Link

Description

The Labeled Faces in the Wild is a database of face photographs designed for studying the problem of unconstrained face recognition. The data set contains more than 13,000 images of faces collected from the web.

Reference

Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, Technical Report 07-49, October, 2007.

Contact

Gary Huang, gbhuang@cs.umass.edu

Fig 29. Exemplars from the LFW Database
 

Libor Spacek's Facial Images Databases

Link

Description

This database contains

Reference

Hond, D., & Spacek, L. (1997). Distinctive descriptions for face processing. Proceedings of the 8th British Machine Vision Conference (BMVC97), Colchester, England, September 1997, pp. 320-329.

Contact

Attribution

Conditions of use: You may freely download this data for your own research purposes. You should publish any computer recognition results achieved on this data with due acknowledgement (the author's name and a URL to the source page). There is also a related publication (with his PhD student D. Hond).

Fig 30. Exemplars from the Spacek Database
 

Makeup Datasets

Link

Description

This dataset comprises four datasets of female face images assembled for studying the impact of makeup on face recognition.

YouTube Makeup (YMU)

151 subjects, specifically Caucasian females, from YouTube makeup tutorials, before and after the application of makeup. There are four shots per subject: two shots before the application of makeup and two shots after the application of makeup.

Virtual Makeup (VMU)

Face images of 51 Caucasian female subjects in the FRGC repository (http://www.nist.gov/itl/iad/ig/frgc.cfm) were synthetically modified to simulate the application of makeup.

Makeup Induced Face Spoofing (MIFS)

Dataset consisting of 107 makeup transformations taken from random YouTube makeup video tutorials. Each subject is attempting to spoof a target identity (a celebrity).

References


C. Chen, A. Dantcheva, A. Ross, "Automatic Facial Makeup Detection with Application in Face Recognition," Proc. of 6th IAPR International Conference on Biometrics (ICB), (Madrid, Spain), June 2013.

A. Dantcheva, C. Chen, A. Ross, "Can Facial Cosmetics Affect the Matching Accuracy of Face Recognition Systems?," Proc. of 5th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Washington DC, USA), September 2012.

C. Chen, A. Dantcheva, T. Swearingen, A. Ross, "Spoofing Faces Using Makeup: An Investigative Study," Proc. of 3rd IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), (New Delhi, India), February 2017.

Fig 31. Exemplars from the YMU Database
Fig 32. Exemplars from the VMU Database
Fig 33. Exemplars from the MIFS Database
 

Meissner African American and Caucasian Male Sets

Link

Description

These sets contain stimuli for use in our studies on cross-racial face recognition and identification. The sets are available by email request to Dr. Meissner for those seeking to conduct research on face identification. Our stimuli currently include African American and Caucasian male faces in two poses (smiling w/ casual clothing and non-smiling with burgundy sweatshirt).

Reference

Meissner, C. A., Brigham, J. C., & Butz, D. A. (2005). Memory for own and other race faces: A dual process approach. Applied Cognitive Psychology, 19(5), 545-567.

Contact

Christian Meissner, cmeissner@utep.edu

Attribution

To request access to these materials, please email Dr. Meissner at cmeissner@utep.edu with the subject line "Face Stimuli Request".

Fig 34. Exemplars from the Meissner Database
 

Montreal Set of Facial Displays of Emotion (MSFDE)

Link

Description

MSFDE consists of emotional facial expressions by men and women of European, Asian, and African descent. Each expression was created using a directed facial action task, and all expressions were FACS coded to ensure identical expressions across actors.

The set contains expressions of happiness, sadness, anger, fear, disgust, and embarrassment as well as a neutral expression for each actor.

Contact

Social Psychophysiology Laboratory, Université du Québec à Montréal

Attribution

Fig 35. Exemplars from the MSFDE Database
 

MMI Facial Expression Database

Link

Description

The MMI Facial Expression Database is an ongoing project that aims to deliver large volumes of visual data of facial expressions to the facial expression analysis community. The database consists of over 2,900 videos and high-resolution still images of 75 subjects.

Reference

Valstar, M., & Pantic, M. (2010, May). Induced disgust, happiness and surprise: an addition to the mmi facial expression database. In Proc. 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect (p. 65).

Contact

Request a user account

Attribution

Fig 36. Exemplars from the MMI Database
 

MR2 Face Database

Link

Description

The MR2 is a multi-racial, mega-resolution database of facial stimuli, created in collaboration with the psychologist Kurt Gray and the photographer Titus Brooks Heagins. It contains 74 full-color images of men and women of European, African, and East Asian descent.

Reference

Strohminger, N., Gray, K., Chituc, V., Heffner, J., Schein, C., and Heagins, T.B. (in press). The MR2: A multi-racial mega-resolution database of facial stimuli. Behavior Research Methods.

Contact

Nina Strohminger, humean@wharton.upenn.edu

Attribution

The database is free to access, with the proviso that any publication or presentation using the database give proper attribution. The MR2 face database is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Fig 37. Exemplar from the MR2 Database
 

MUCT Face Database

Link

Description

The MUCT Face Database consists of 3755 faces with 76 manual landmarks. The database was created to provide more diversity of lighting, age, and ethnicity than currently available landmarked 2D face databases.

Reference

Milborrow, S., Morkel, J., & Nicolls, F. (2010). The MUCT landmarked face database. Proceedings of the Pattern Recognition Association of South Africa, 2010.

Contact

Stephen Milborrow, milbo@sonic.net

Fig 38. Exemplars from the MUCT Database
 

NimStim Set of Facial Expressions

Link

Description

The NimStim Set of Facial Expressions is a broad dataset comprising 672 images of naturally posed photographs by 43 professional actors (18 female, 25 male) ranging from 21 to 30 years old. Actors from a diverse sample were chosen to portray emotional expressions within this dataset: African-American (N = 10), Asian-American (N = 6), European-American (N = 25), and Latino-American (N = 2). The images in this dataset include eight emotional expressions: neutral, angry, disgust, surprise, sad, calm, happy, and afraid. Both open- and closed-mouth versions were provided for all emotional expressions, with the exception of surprise (only open mouth provided) and happy (high-arousal open mouth/exuberant provided).

Reference

Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., ... & Nelson, C. (2009). The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Research, 168(3), 242-249.

Contact

Please direct questions and comments to admin@macbrain.org

 
 

Oslo Face Database

Link

Description

The Oslo Face Database consists of ~200 male and female faces of neutral expression with three gaze directions: left, center and right. The photos were taken in 2012 of students from the University of Oslo.

 
 

Oulu-CASIA NIR&VIS Facial Expression Database

Link

Description

This set contains videos with the six typical expressions (happiness, sadness, surprise, anger, fear, disgust) from 80 subjects captured with two imaging systems, NIR (Near Infrared) and VIS (Visible light), under three different illumination conditions: normal indoor illumination, weak illumination (only computer display is on) and dark illumination (all lights are off).

Reference

Zhao, G., Huang, X., Taini, M., Li, S. Z., & PietikäInen, M. (2011). Facial expression recognition from near-infrared videos. Image and Vision Computing, 29(9), 607-619.

Contact

Guoying Zhao, guoying.zhao@oulu.fi

 
 

Psychological Image Collection at Stirling (PICS)

Description

The Psychological Image Collection at Stirling (PICS) contains two databases of face images.

Link   Link  

Reference: varies

Contact

Peter Hancock, pjbh1@stir.ac.uk

Fig 39. Exemplars from the PICS Database
 

Radboud Faces Database

Link

Description

The Radboud Faces Database (RaFD) is a set of pictures of 67 models (including Caucasian males and females, Caucasian children, both boys and girls, and Moroccan Dutch males) displaying 8 emotional expressions. The RaFD is an initiative of the Behavioural Science Institute of Radboud University Nijmegen, located in Nijmegen (the Netherlands), and can be used freely for non-commercial scientific research by researchers who work for an officially accredited university.

Reference

Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.H.J., Hawk, S.T., & van Knippenberg, A. (2010). Presentation and validation of the Radboud Faces Database. Cognition & Emotion, 24(8), 1377—1388. DOI: 10.1080/02699930903485076

Contact

info@rafd.nl

Fig 40. Exemplars from the RaFD Database
 

RADIATE Emotional Face Stimulus Set

Link

Description

RADIATE is an open-access face stimulus set of 1,721 racially diverse expressions. Sixteen different emotions are included, in both color and black-and-white versions.

Reference

Conley, M. I., Dellarco, D. V., Rubien-Thomas, E., Cohen, A. O., Cervera, A., Tottenham, N., & Casey, B. J. (2018). The racially diverse affective expression (RADIATE) face stimulus set. Psychiatry Research.

 
 

Sheffield Face Database

Link

Description

The Sheffield Database (previously UMIST) consists of 564 images of 20 individuals (mixed race/gender/appearance). Each individual is shown in a range of poses from profile to frontal views, each in a separate directory labelled 1a, 1b, ... 1t, with images numbered consecutively as they were taken. The files are all in PGM format, approximately 220 x 220 pixels, with 256 grey levels (8-bit).

Reference

Wechsler, H., Phillips, J. P., Bruce, V., Soulie, F. F., & Huang, T. S. (Eds.). (2012). Face recognition: From theory to applications (Vol. 163). Springer Science & Business Media.

Contact

Laboratory of Vision Engineering (LoVE), University of Lincoln

Attribution

The authors grant the right to use the face database with the following restrictions:

Fig 41. Exemplars from the Sheffield Database
 

Todorov Synthetic Faces Databases

Link

Description

A compendium of computer-generated synthetic faces.

References: varies

Contact

Alexander Todorov, University of Chicago

Database 1

300 randomly generated faces parametrically manipulated to vary on their perceived value on social dimensions such as trustworthiness and dominance. These faces were generated by data-driven computational models.

Database 2

525 faces manipulated on face shape: 25 (face identities) x 3 (trait dimensions: perceived dominance, threat, and trustworthiness) x 7 (parametric face manipulations, ranging from -3 to +3SD with a step of 1SD).

Database 3

490 faces manipulated on face shape and orthogonally on perceived trustworthiness and dominance: 10 (face identities) x 7 (parametric face manipulations on perceived dominance, ranging from -3 to +3SD with a step of 1SD) x 7 (parametric face manipulations on perceived trustworthiness, ranging from -3 to +3SD with a step of 1SD).

Database 4

3,675 faces manipulated on face shape and reflectance: 25 (face identities) x 7 (trait dimensions: perceived attractiveness, competence, dominance, extroversion, likability, threat, and trustworthiness) x 7 (parametric face manipulations, ranging from -3 to +3SD with a step of 1SD) x 3 (face race: Asian, Black, White).

Database 5

13,125 faces manipulated on face shape and reflectance: 25 (face identities) x 7 (trait dimensions: perceived attractiveness, competence, dominance, extroversion, likability, threat, and trustworthiness) x 25 (parametric face manipulations, ranging from -3 to +3SD with a step of 0.25SD) x 3 (face race: Asian, Black, White).

Database 6

4,000 faces used to build a model of attractiveness. Text files, data files, and Python and MATLAB scripts are also included.

Database 7

1,400 faces manipulated on face shape and reflectance by gender-specific models built by Oh, Dotsch, Porter, & Todorov (2020): 25 (face identities) x 2 (gender models: for males and females) x 2 (trait dimensions: perceived dominance and trustworthiness) x 7 (parametric face manipulations, ranging from -3 to +3SD with a step of 1SD) x 2 (face gender: male and female).

Database 8

350 faces manipulated on perceived competence controlling for attractiveness: 25 (face identities) x 7 (parametric face manipulations, ranging from -3 to +3SD with a step of 1SD) x 2 (models: attractiveness-subtracted and attractiveness-orthogonal).

Attribution

The databases listed above are freely available to researchers who intend to conduct non-profit, academic research. Researchers who download the databases should use the stimuli for non-profit research only and should acknowledge the proper sources of the stimuli and any references relevant to the data set.

 
 

UB KinFace

Link

Description

The UB KinFace database is used to develop, test, and evaluate kinship verification and recognition algorithms. It comprises 600 images of 400 people, which can be separated into 200 groups. Each group is composed of child, young-parent, and old-parent images. Most of the images in the database are real-world collections of public figures (celebrities and politicians) from the Internet. To the best of the authors' knowledge, it is the first database that contains children, young parents, and old parents for the purpose of kinship verification.

Reference

Shao, M., Xia, S., & Fu, Y. (2011). Genealogical face recognition based on UB KinFace database. IEEE CVPR Workshop on Biometrics (BIOM).

Contact

Yun Raymond Fu, yunfu@ece.neu.edu

Attribution

This dataset is for non-commercial research purposes only. The image copyright belongs to the original author or the media as listed in the following URL file. If you find this collection useful for your research, please cite the paper above.

 
 

US Politicians

Link

Description

This database contains 550 photos of US politicians who competed either in a gubernatorial race (248) or in a House race (302). The database also contains the politicians' perceived competence as judged from their photos, measured with forced-choice competence judgements by participants unfamiliar with the politicians. As such, these judgments simply indicate perceptions and are in no way indicative of the actual competence of the politicians.

Contact

Alexander Todorov, University of Chicago

Attribution

The database listed is freely available to researchers who intend to conduct non-profit, academic research. Researchers who download the databases should use the stimuli for non-profit research only and should acknowledge the proper sources of the stimuli and any references relevant to the data set.

 
 

Yale Face Database A

Link

Description

Yale Face Database A

The Yale Face Database contains 165 grayscale images in GIF format of 15 individuals. There are 11 images per subject, one per different facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink.

Yale Face Database B

Link

Description

Yale Face Database B

The Yale Face Database B (1GB) contains 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions).

Reference

Contact

UCSD Computer Vision

Attribution

It is free to use the data for research purposes. If experimental results are obtained that use images from within the database, all publications of these results should acknowledge the use of the "Yale Face Database".

 
 

Yonsei Face Database (YFace DB)

Link

Description

The Yonsei Face Database (YFace DB) consists of both static and dynamic face stimuli for six basic emotions (happiness, sadness, anger, surprise, fear, and disgust), and has been validated for research use. The database includes selected pictures (static stimuli) and film clips (dynamic stimuli) of 74 models (50% female) aged between 19 and 40.

Reference

Chung, K. M., Kim, S. J., Jung, W. H., & Kim, V. Y. (2019). Development and validation of the Yonsei Face Database (YFace DB). Frontiers in Psychology, 10, 2626. https://doi.org/10.3389/fpsyg.2019.02626

Attribution

Only PhD-holding faculty at a non-profit, degree-granting academic institution, or a representative of such an affiliation, may request use of the YFace DB.

 
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.