This page has been adapted from the Princeton University Library Face Image Databases by Meghan Testerman which is licensed under a Creative Commons Attribution 4.0 International License.
The following is a directory of alphabetically listed databases containing face stimulus sets available for use in behavioural research studies. Please read the rights, permissions, and licensing information on each database's webpage before proceeding with use. Make sure to obtain any required permissions and credit/cite as requested by the creators.
This database contains 10,168 natural face photographs and several measures for 2,222 of the faces, including memorability scores, computer vision and psychology attributes, and landmark point annotations. The face photographs are JPEGs with 72 pixels/in resolution and 256-pixel height.
Bainbridge, W.A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face images. Journal of Experimental Psychology: General, 142(4), 1323-1334.
Wilma Bainbridge: brainbridgelab@gmail.com
Permission is required for access via an online form. Always cite the database when using it.
This database contains 110 faces (smiling and neutral expression poses) with mixed-race heritage and accompanying ratings of those faces by naive observers that are freely available to academic researchers. The faces were rated on attractiveness, emotional expression, racial ambiguity, masculinity, racial group membership(s), gender group membership(s), warmth, competence, dominance, and trustworthiness.
Chen, J.M., Norman, J.B. & Nam, Y. Broadening the stimulus set: Introducing the American Multiracial Faces Database. Behav Res (2020). doi.org/10.3758/s13428-020-01447-8
https://jacquelinemchen.wixsite.com/sciplab/face-database
The AMFD is free to use for academic research. It is subject to a Creative Commons Attribution 4.0 International Public License.
ADFES contains filmed emotional expressions from 22 Northern-European and Mediterranean models (10 female/12 male). The set features displays of nine emotions: the six basic emotions (anger, disgust, fear, joy, sadness, and surprise), as well as contempt, pride and embarrassment.
Van der Schalk, J., Hawk, S. T., Fischer, A. H., & Doosje, B. J. (in press). Moving faces, looking places: The Amsterdam Dynamic Facial Expressions Set (ADFES). Emotion.
Agneta Fischer, a.h.fischer@uva.nl
Creative Commons Attribution 4.0 International (CC BY). Request permission.
This database contains a set of face images taken between April 1992 and April 1994. There are ten different images of each of 40 distinct individuals. For some individuals, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).
Samaria, F. S. (1994). Face recognition using hidden Markov models (Doctoral dissertation, University of Cambridge).
AT&T Laboratories Cambridge
The Basel Face Database is built upon portrait photographs of forty different individuals. All these photographs have been manipulated to appear more or less agentic and communal (Big Two personality dimensions) as well as open to experience, conscientious, extraverted, agreeable, and neurotic (Big Five personality dimensions). Thus, the database consists of forty photographs of different individuals and 14 variations of each of them signaling different personalities. The database therefore allows researchers to investigate the impact of personality on different outcome variables in a systematic way.
Walker, M., Schönborn, S., Greifeneder, R., & Vetter, T. (2018). The Basel Face Database: A validated set of photographs reflecting systematic differences in Big Two and Big Five personality dimensions. PloS one, 13(3). doi: https://doi.org/10.1371/journal.pone.0193190
Mirella Walker
Request permission for scientific use.
The Bogazici Face Database contains images of Turkish undergraduate student targets. High-resolution standardized photographs were taken and supported by the following materials: (a) basic demographic and appearance-related information, (b) two types of landmark configurations (for Webmorph and geometric morphometrics (GM)), (c) facial width-to-height ratio (fWHR) measurement, (d) information on photography parameters, (e) perceptual norms provided by raters.
Saribay SA, Biten AF, Meral EO, Aldan P, Třebický V, Kleisner K (2018) The Bogazici face database: Standardized photographs of Turkish faces with supporting materials. PLoS ONE 13(2): e0192018. https://doi.org/10.1371/journal.pone.0192018
The Caltech database contains images of people collected from the web by typing common given names into Google Image Search. The coordinates of the eyes, the nose and the center of the mouth for each frontal face are provided in a ground truth file. This information can be used to align and crop the human faces or as a ground truth for a face detection algorithm. The dataset has 10,524 human faces of various resolutions and in different settings, e.g. portrait images, groups of people, etc. Profile faces or very low-resolution faces are not labeled.
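As a brief illustration of how the annotated eye coordinates can support alignment, the sketch below rotates an image so that the line between the eyes is horizontal. This is only a minimal example: the format of the ground-truth file is not specified here, so the eye coordinates are assumed to have been parsed already, and the file name and coordinate values are hypothetical.

```python
# Minimal sketch: rotational alignment of a face using annotated eye coordinates.
# The ground-truth file format is not specified above, so coordinates are assumed
# to be available as (x, y) tuples; the file name and values below are hypothetical.
import math
from PIL import Image

def align_face(image_path, left_eye, right_eye):
    """Rotate the image so that the inter-ocular line becomes horizontal."""
    img = Image.open(image_path)
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.degrees(math.atan2(dy, dx))  # tilt of the line between the eyes
    # Rotate about the midpoint between the eyes so the face stays centred
    center = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    return img.rotate(angle, center=center, resample=Image.BILINEAR)

# Example usage (hypothetical coordinates):
# aligned = align_face("image_0001.jpg", left_eye=(210, 305), right_eye=(298, 312))
# aligned.crop((150, 200, 400, 450)).save("image_0001_aligned.jpg")
```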
Angelova, A., Abu-Mostafa, Y., & Perona, P. (2005). Pruning training sets for learning of object categories. Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Anelia Angelova, anelia@caltech.edu
The CFD is intended for use in scientific research. It provides high-resolution, standardised photographs of male and female faces of varying ethnicity between the ages of 17 and 65. Extensive norming data are available for each individual model. These data include both physical attributes (e.g., face size) and subjective ratings by independent judges (e.g., attractiveness). The database consists of a main image set and several extension sets.
The main CFD set consists of images of 597 unique individuals. They include self-identified Asian, Black, Latino, and White female and male models, recruited in the United States. All models are represented with neutral facial expressions. A subset of the models is also available with happy (open mouth), happy (closed mouth), angry, and fearful expressions.
The CFD-MR extension set includes images of 88 unique individuals, who self-reported multiracial ancestry. All models were recruited in the United States. The images depict models with neutral facial expressions. Additional facial expression images with happy (open mouth), happy (closed mouth), angry, and fearful expressions are in production and will become available with a future update of the database.
The CFD-INDIA extension set includes images of 142 unique individuals, recruited in Delhi, India. The images depict models with neutral facial expressions. Additional facial expression images with happy (open mouth), happy (closed mouth), angry, and fearful expressions are in production and will become available with a future update of the database.
Ma, D. S., Correll, J., & Wittenbrink, B. (2015). The Chicago face database: A free stimulus set of faces and norming data. Behavior research methods, 47(4), 1122-1135.
Ma, Kantner, & Wittenbrink (2021). Chicago Face Database: Multiracial Expansion. Behavior Research Methods. https://doi.org/10.3758/s13428-020-01482-5.
Lakshmi, Wittenbrink, Correll, & Ma (2021). The India Face Set: International and Cultural Boundaries Impact Face Impressions and Perceptions of Category Membership. Frontiers in Psychology, 12, 627678. https://doi.org/10.3389/fpsyg.2021.627678.
Bernd Wittenbrink, bernd.wittenbrink@chicagobooth.edu
The CFD and its expansion sets are a free resource for the scientific community. The database photographs and their accompanying information may be used free of charge for non-commercial scientific research purposes only. The database materials cannot be re-distributed or published without written consent from the copyright holder, the University of Chicago, Center for Decision Research.
The Child Affective Facial Expressions Set (CAFE) is the first large and representative set of children posing a variety of affective facial expressions that can be used for scientific research. The set is made up of nearly 1200 photographs of over 100 children (ages 2-8) making 7 different facial expressions - happy, angry, sad, fearful, surprise, neutral, and disgust.
LoBue, V. & Thrasher, C. (2015). The Child Affective Facial Expression (CAFE) Set: Validity and reliability from untrained adults. Frontiers in Emotion Science, 5.
Apply for use at https://nyu.databrary.org/volume/30
A novel emotional database that contains movie clips/dynamic images of 12 ethnically diverse children. This unique database contains spontaneous/natural facial expressions of children in diverse settings and recording scenarios, showing six universal or prototypic emotional expressions (happiness, sadness, anger, surprise, disgust, and fear). Children were recorded in a constraint-free environment (no restrictions on head movement, hand movement, or sitting position) while they watched specially built/selected stimuli, which allowed spontaneous/natural expressions to be recorded as they occurred.
Khan, R. A., Crenn, A., Meyer, A., & Bouakaz, S. (2019). A novel database of children's spontaneous facial expressions (LIRIS-CSE). Image and Vision Computing, 83-84 (March-April 2019). arXiv preprint (2018): arXiv:1812.01555.
This database contains 60 photographs of positive infant faces, 54 photographs of negative infant faces, and 40 photographs of neutral infant faces. The images have high criterion validity and good test–retest reliability.
Webb, R., Ayers, S. & Endress, A. The City Infant Faces Database: A validated set of infant facial expressions. Behav Res 50, 151–159 (2018). https://doi.org/10.3758/s13428-017-0859-9
CMU Multi-PIE face database contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months. Subjects were imaged under 15 viewpoints and 19 illumination conditions while displaying a range of facial expressions.
Sim, T., Baker, S., & Bsat, M. (2001). The CMU pose, illumination and expression database of human faces. Carnegie Mellon University Technical Report CMU-RI-TR-01-02.
Ralph Gross, ralph@multiple.org
The Cohn-Kanade AU-Coded Facial Expression Database affords a test bed for research in automatic facial image analysis and is available for use by the research community. Image data consist of approximately 500 image sequences from 100 subjects. Accompanying meta-data include annotation of FACS action units and emotion-specified expressions. Subjects range in age from 18 to 30 years. Sixty-five percent were female; 15 percent were African-American and three percent Asian or Latino.
Subjects were instructed by an experimenter to perform a series of 23 facial displays that included single action units (e.g., AU 12, or lip corners pulled obliquely) and action unit combinations (e.g., AU 1+2, or inner and outer brows raised). Each begins from a neutral or nearly neutral face. For each, an experimenter described and modeled the target display. Six were based on descriptions of prototypic emotions (i.e., joy, surprise, anger, fear, disgust, and sadness).
Kanade, T., Cohn, J. F., & Tian, Y. (2000, March). Comprehensive database for facial expression analysis. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580) (pp. 46-53). IEEE.
Takeo Kanade, kanade@andrew.cmu.edu
The Complex Emotion Expression Database (CEED) is a digital stimulus set of 243 basic and 237 complex emotional facial expressions. The stimuli represent six basic expressions (angry, disgusted, fearful, happy, sad, and surprised) and nine complex expressions (affectionate, attracted, betrayed, brokenhearted, contemptuous, desirous, flirtatious, jealous, and lovesick) that were posed by Black and White formally trained, young adult actors.
Benda MS, Scherf KS (2020) The Complex Emotion Expression Database: A validated stimulus set of trained actors. PLoS ONE 15(2): e0228248. https://doi.org/10.1371/journal.pone.0228248
The Computer Vision Laboratory Face Database contains photographs of 114 persons, approximately 18 years of age, with 7 images per person.
Mirage 2003, Conference on Computer Vision / Computer Graphics Collaboration for Model-based Imaging, Rendering, Image Analysis and Graphical Special Effects, March 10-11, 2003, INRIA Rocquencourt, France (W. Philips, Ed.), INRIA, 2003, pp. 38-47.
Peter Peer, peter.peer@fri.uni-lj.si; you will receive a licence agreement to sign.
The Dartmouth Database of Children's Faces contains images of 40 male and 40 female models between the ages of 6 and 16. Models are photographed on a black background and are wearing black bibs and black hats to cover hair and ears. They are photographed from 5 different camera angles and pose 8 different facial expressions. Models were rated by independent raters and are ranked for the overall believability of their poses.
Dalrymple, K. A., Gomez, J., & Duchaine, B. (2013). The Dartmouth Database of Children’s Faces: Acquisition and validation of a new face stimulus set. PloS one, 8(11), e79131.
Kristen Dalrymple, kad@umn.edu
The Face Database consists of 575 individual faces ranging from ages 18 to 93. The database was developed to be more representative of age groups across the lifespan, with a special emphasis on recruiting older adults. It contains faces of 218 adults age 18-29, 76 adults age 30-49, 123 adults age 50-69, and 158 adults age 70 and older.
Minear, M., & Park, D. C. (2004). A lifespan database of adult facial stimuli. Behavior research methods, instruments, & computers : a journal of the Psychonomic Society, Inc, 36(4), 630–633. https://doi.org/10.3758/bf03206543
parklab@utdallas.edu
Faces may not be used for media events under any circumstances. If you publish a manuscript in a scientific journal that used the faces, please use the citation provided.
An index of face databases, their features, and how to access them has been unavailable. The “Face Image Meta-Database” (fIMDb) provides researchers with the tools to find the face images best suited to their research. The fIMDb is available from: https://cliffordworkman.com/resources/
The CFAD was developed to facilitate research on biases towards individuals with facial anomalies. The database allows searching by age, sex, ethnicity, pose, and type/etiology of anomaly. Results include the original stimuli, as well as images at various stages of pre-processing, e.g., normalized to interpupillary distance.
Workman, C. I., & Chatterjee, A. (2021). The Face Image Meta-Database (fIMDb) & ChatLab Facial Anomaly Database (CFAD): Tools for research on face perception and social stigma. Methods in Psychology, 5, 100063. https://doi.org/10.1016/j.metip.2021.100063
If you are planning to publish research that used the CFAD stimuli, please cite us
This dataset includes multiple photographs for over 200 individuals of many different races with consistent lighting, multiple views, real emotions, and disguises (and some participants returned for a second session several weeks later with a haircut, or a new beard, etc.). The images are in JPEG format, 250x250 pixels, 72 dpi, 24-bit color.
Righi, G, Peissig, JJ, & Tarr, MJ (2012) Recognizing disguised faces. Visual Cognition, 20(2), 143-169. doi:10.1080/13506285.2012.654624
Tarr Lab, Carnegie Mellon University, tarrlab@gmail.com
If you use any of these images in publicly available work - talks, papers, etc. - you must acknowledge their source and adhere to the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. You must also include the following: Face images courtesy of Michael J. Tarr, Carnegie Mellon University, http://www.tarrlab.org/. Funding provided by NSF award 0339122.
The FERET database was collected in 15 sessions between August 1993 and July 1996. The database contains 1564 sets of images for a total of 14,126 images that includes 1199 individuals and 365 duplicate sets of images. A duplicate set is a second set of images of a person already in the database and was usually taken on a different day.
As part of the FERET program, a database of facial imagery was collected between December 1993 and August 1996. The database is used to develop, test, and evaluate face recognition algorithms.
Phillips, P. J., Martin, A., Wilson, C. L., & Przybocki, M. (2000). An introduction to evaluating biometric systems. Computer, 33(2), 56-63.
P. Jonathon Phillips, jonathon.phillips@nist.gov
The London Set contains images of 102 adult faces, 1350x1350 pixels, in full color.
DeBruine, Lisa; Jones, Benedict (2017): Face Research Lab London Set. figshare. Dataset. https://doi.org/10.6084/m9.figshare.5047666.v5
A free and open-source toolkit of three-dimensional models and software to study face perception. Contains 8 manipulatable facial expression models.
Hays, J. S., Wong, C., & Soto, F. (2020). FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behavior Research Methods, 52(6), 2604-2622.
Fabian Soto, Florida International University
FACES is a set of images of naturalistic faces of 171 young (n = 58), middle-aged (n = 56), and older (n = 57) women and men displaying each of six facial expressions: neutrality, sadness, disgust, fear, anger, and happiness. The database comprises two sets of pictures per person and per facial expression (a vs. b set), resulting in a total of 2,052 images.
Dynamic FACES is an extension of the original FACES database. It is a database of morphed videos (n = 1,026) of young, middle-aged, and older adults displaying six naturalistic emotional facial expressions including neutrality, sadness, disgust, fear, anger, and happiness. Static images used for morphing came from the original FACES database. Videos were created by transitioning from a static neutral image to a target emotion. Videos are available in 384 x 480 pixels as .mp4 files or in the original size of 1280 x 1600 as .mov files.
All 2,052 images from the original FACES database were scrambled using MATLAB. With the randblock function, original FACES files were treated as 800x1000x3 matrices – the third dimension denoting specific RGB values – and partitioned into non-overlapping 2x2x3 blocks. The matrices were then randomly shuffled by these smaller blocks, providing final images that matched the dimensions of the original image and were composed of the same individual pixels, although arranged differently. All scrambled images are 800x1000 jpeg files (96 dpi).
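The scrambling procedure described above is specific enough to reproduce. The following is a minimal Python sketch of the same block-shuffling idea; the original work used MATLAB's randblock function, so this is only an illustrative equivalent. The 2x2 block size and 800x1000x3 image dimensions follow the description, while the file name in the usage example is hypothetical.

```python
# Illustrative Python equivalent of the block-scrambling described above
# (MATLAB randblock with non-overlapping 2x2x3 blocks of an RGB image).
import numpy as np
from PIL import Image

def scramble_blocks(img_array, block_h=2, block_w=2, seed=None):
    """Randomly shuffle non-overlapping block_h x block_w x 3 blocks of an RGB image."""
    h, w, c = img_array.shape
    assert h % block_h == 0 and w % block_w == 0, "image must tile evenly into blocks"
    # Split the image into a grid of blocks: (rows, block_h, cols, block_w, channels)
    blocks = img_array.reshape(h // block_h, block_h, w // block_w, block_w, c)
    blocks = blocks.transpose(0, 2, 1, 3, 4).reshape(-1, block_h, block_w, c)
    rng = np.random.default_rng(seed)
    rng.shuffle(blocks)  # shuffle blocks along the first axis
    # Reassemble the shuffled blocks into an image with the original dimensions
    blocks = blocks.reshape(h // block_h, w // block_w, block_h, block_w, c)
    return blocks.transpose(0, 2, 1, 3, 4).reshape(h, w, c)

# Example usage (hypothetical file name; 800x1000x3 per the description above):
# img = np.asarray(Image.open("faces_001.jpg"))
# Image.fromarray(scramble_blocks(img)).save("faces_001_scrambled.jpg")
```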
Ebner, N., Riediger, M., & Lindenberger, U. (2010). FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behavior research Methods, 42, 351-362. doi:10.3758/BRM.42.1.351.
Holland, C. A. C., Ebner, N. C., Lin, T., & Samanez-Larkin, G. R. (2019). Emotion identification across adulthood using the Dynamic FACES database of emotional expressions in younger, middle aged, and older adults. Cognition and Emotion, 33, 245-257. doi:10.1080/02699931.2018.1445981.
A dataset with a total of 106,863 face images of 530 male and female celebrities, with about 200 images per person. As such, it is one of the largest public face databases.
H.-W. Ng, S. Winkler. A data-driven approach to cleaning large face datasets. Proc. IEEE International Conference on Image Processing (ICIP), Paris, France, Oct. 27-30, 2014.
The Faces and Motion Exeter Database (FAMED) is a video database of 32 male actors for use in psychological research. Each actor was filmed from two viewpoints (full-face and three-quarter) whilst they performed a series of facial motions including the telling of three jokes, a short conversation, six facial expressions (smiling, anger, fear, disgust, surprise and sadness) and rigid motion such as head rotation from left to right and up and down. The actors performed all actions three times; once with no headgear, once wearing a swimming cap to hide hair cues and once whilst wearing a wig.
Longmore, C. A., & Tree, J. J. (2013). Motion as a cue to face recognition: Evidence from congenital prosopagnosia. Neuropsychologia, 51, 864-875
Chris Longmore, chris.longmore@plymouth.ac.uk
The FEI Face Database is a Brazilian face database that contains a set of face images taken between June 2005 and March 2006 at the Artificial Intelligence Laboratory of FEI in São Bernardo do Campo, São Paulo, Brazil. There are 14 images for each of 200 individuals, a total of 2,800 images. All images are in colour and taken against a white homogeneous background in an upright frontal position with profile rotation of up to about 180 degrees. Scale may vary by about 10%, and the original size of each image is 640x480 pixels. The faces are mainly those of students and staff at FEI, between 19 and 40 years old, with distinct appearance, hairstyles, and adornments. The number of male and female subjects is exactly the same: 100 each.
Carlos Eduardo Thomaz, cet@fei.edu.br
An image database containing face images showing a number of subjects performing the six different basic emotions defined by Ekman & Friesen. The database has been developed in an attempt to assist researchers who investigate the effects of different facial expressions.
Wallhoff, F., Schuller, B., Hawellek, M., & Rigoll, G. (2006). Efficient recognition of authentic dynamic facial expressions on the Feedtum database. IEEE ICME, pp. 493-496. IEEE Computer Society.
Frank Wallhoff, frank.wallhoff@jade-hs.de.
This database contains three images of 303 identities (each taken using separate cameras), similarity data quantifying perceived similarity between any two identities and 20 images per identity that have been extracted from a video clip for the purpose of familiarisation.
Mike Burton, mike.burton@york.ac.uk
Consists of 56 color photographs of 56 different individuals who each illustrate one of the seven basic facial expressions of emotion.
Fee: $95
Consists of 56 color photographs of the subjects found in the JACFEE collection showing neutral facial expressions.
Fee: $95
Japanese Female Facial Expression (JAFFE) Dataset contains 213 images of 10 Japanese female expressers.
Lyons, Michael, Kamachi, Miyuki, & Gyoba, Jiro. (1998). The Japanese Female Facial Expression (JAFFE) Dataset [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3451524
Michael Lyons
The JAFFE images may be used for non-commercial scientific research under certain terms of use, which must be accepted to access the data; the terms specify uses for which JAFFE cannot be provided.
The Karolinska Directed Emotional Faces (KDEF) is a set of 4,900 pictures of human facial expressions of emotion in total. The set contains 70 individuals, each displaying 7 different emotional expressions, with each expression photographed (twice) from 5 different angles.
Goeleven, E., De Raedt, R., Leyman, L., & Verschuere, B. (2008). The Karolinska directed emotional faces: a validation study. Cognition and emotion, 22(6), 1094-1118.
Emotion Lab at Karolinska Institutet
The KDEF stimuli may be used without charge for non-commercial research purposes only. All and any (re-)distribution and publishing without the written consent of the copyright holders is forbidden. Copyright holder is Karolinska Institutet, Department of Clinical Neuroscience, Section of Psychology, Stockholm, Sweden.
Labeled Faces in the Wild (LFW) is a database of face photographs designed for studying the problem of unconstrained face recognition. The data set contains more than 13,000 images of faces collected from the web.
Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, Technical Report 07-49, October, 2007.
Gary Huang, gbhuang@cs.umass.edu
This database contains
Hond, D., & Spacek, L. (1997). Distinctive descriptions for face processing. Proceedings of the 8th British Machine Vision Conference (BMVC97), Colchester, England, September 1997, pp. 320-329.
Conditions of use: You may freely download this data for your own research purposes. You should publish any computer recognition results achieved on this data with due acknowledgement (the author's name and the URL of the database page). There is also a related publication (with the author's PhD student, D. Hond).
This dataset comprises four datasets of female face images assembled for studying the impact of makeup on face recognition.
YMU (YouTube Makeup): 151 subjects, specifically Caucasian females, from YouTube makeup tutorials, captured before and after the application of makeup. There are four shots per subject: two before and two after the application of makeup.
VMU (Virtual Makeup): face images of Caucasian female subjects in the FRGC repository (http://www.nist.gov/itl/iad/ig/frgc.cfm) were synthetically modified to simulate the application of makeup on 51 female Caucasian subjects.
MIFS (Makeup Induced Face Spoofing): a dataset consisting of 107 makeup transformations taken from random YouTube makeup video tutorials. Each subject is attempting to spoof a target identity (a celebrity).
C. Chen, A. Dantcheva, A. Ross, "Automatic Facial Makeup Detection with Application in Face Recognition," Proc. of 6th IAPR International Conference on Biometrics (ICB), (Madrid, Spain), June 2013.
A. Dantcheva, C. Chen, A. Ross, "Can Facial Cosmetics Affect the Matching Accuracy of Face Recognition Systems?," Proc. of 5th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Washington DC, USA), September 2012.
C. Chen, A. Dantcheva, T. Swearingen, A. Ross, "Spoofing Faces Using Makeup: An Investigative Study," Proc. of 3rd IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), (New Delhi, India), February 2017.
These sets contain stimuli for use in our studies on cross-racial face recognition and identification. The sets are available by email request to Dr. Meissner for those seeking to conduct research on face identification. Our stimuli currently include African American and Caucasian male faces in two poses (smiling w/ casual clothing and non-smiling with burgundy sweatshirt).
Reference: Meissner, C. A., Brigham, J. C., & Butz, D. A. (2005). Memory for own and other race faces: A dual process approach. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition, 19(5), 545-567.
Christian Meissner, cmeissner@utep.edu
To request access to these materials, please email Dr. Meissner at cmeissner@utep.edu with the subject line "Face Stimuli Request".
MSFDE consists of emotional facial expressions by men and women of European, Asian, and African descent. Each expression was created using a directed facial action task and all expressions were FACS coded to ensure identical expressions across actors.
The set contains expressions of happiness, sadness, anger, fear, disgust, and embarrassment as well as a neutral expression for each actor.
Social Psychophysiology Laboratory, Université du Québec à Montréal
The MMI Facial Expression Database is an ongoing project that aims to deliver large volumes of visual data of facial expressions to the facial expression analysis community. The database consists of over 2,900 videos and high-resolution still images of 75 subjects.
Valstar, M., & Pantic, M. (2010, May). Induced disgust, happiness and surprise: an addition to the mmi facial expression database. In Proc. 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect (p. 65).
The MR2 is a multi-racial, mega-resolution database of facial stimuli, created in collaboration with the psychologist Kurt Gray and the photographer Titus Brooks Heagins. It contains 74 full-color images of men and women of European, African, and East Asian descent.
Strohminger, N., Gray, K., Chituc, V., Heffner, J., Schein, C., and Heagins, T.B. (in press). The MR2: A multi-racial mega-resolution database of facial stimuli. Behavior Research Methods.
Nina Strohminger, humean@wharton.upenn.edu
The database is free to access, with the proviso that any publication or presentation using the database give proper attribution. The MR2 face database is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The MUCT Face Database consists of 3755 faces with 76 manual landmarks. The database was created to provide more diversity of lighting, age, and ethnicity than currently available landmarked 2D face databases.
Milborrow, S., Morkel, J., & Nicolls, F. (2010). The MUCT landmarked face database. Pattern recognition association of South Africa, 201(0).
Stephen Milborrow, milbo@sonic.net
The NimStim Set of Facial Expressions is a broad dataset comprising 672 naturally posed photographs of 43 professional actors (18 female, 25 male) ranging from 21 to 30 years old. Actors from a diverse sample were chosen to portray emotional expressions within this dataset: African-American (N = 10), Asian-American (N = 6), European-American (N = 25), and Latino-American (N = 2). The images in this dataset include eight emotional expressions: neutral, angry, disgust, surprise, sad, calm, happy, and afraid. Both open- and closed-mouth versions are provided for all expressions, with the exception of surprise (only open mouth provided) and happy (high-arousal open mouth/exuberant provided).
Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., ... & Nelson, C. (2009). The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Research, 168(3), 242-249.
Please direct questions and comments to admin@macbrain.org
The Oslo Face Database consists of ~200 male and female faces of neutral expression with three gaze directions: left, center and right. The photos were taken in 2012 of students from the University of Oslo.
This set contains videos with the six typical expressions (happiness, sadness, surprise, anger, fear, disgust) from 80 subjects captured with two imaging systems, NIR (Near Infrared) and VIS (Visible light), under three different illumination conditions: normal indoor illumination, weak illumination (only computer display is on) and dark illumination (all lights are off).
Zhao, G., Huang, X., Taini, M., Li, S. Z., & PietikäInen, M. (2011). Facial expression recognition from near-infrared videos. Image and Vision Computing, 29(9), 607-619.
Guoying Zhao, guoying.zhao@oulu.fi
The Psychological Image Collection at Stirling (PICS) contains two databases of face images.
Reference: varies
Peter Hancock, pjbh1@stir.ac.uk
The Radboud Faces Database (RaFD) is a set of pictures of 67 models (including Caucasian males and females, Caucasian children, both boys and girls, and Moroccan Dutch males) displaying 8 emotional expressions. The RaFD is an initiative of the Behavioural Science Institute of the Radboud University Nijmegen, located in Nijmegen (the Netherlands), and can be used freely for non-commercial scientific research by researchers who work for an officially accredited university.
Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D.H.J., Hawk, S.T., & van Knippenberg, A. (2010). Presentation and validation of the Radboud Faces Database. Cognition & Emotion, 24(8), 1377-1388. DOI: 10.1080/02699930903485076
info@rafd.nl
RADIATE is an open-access face stimulus set of 1,721 racially diverse expressions. Sixteen different emotions are included, in both color and black-and-white versions.
Conley, M. I., Dellarco, D. V., Rubien-Thomas, E., Cohen, A. O., Cervera, A., Tottenham, N., & Casey, B. J. (2018). The racially diverse affective expression (RADIATE) face stimulus set. Psychiatry research.
The Sheffield Database (previously UMIST) consists of 564 images of 20 individuals (mixed race/gender/appearance). Each individual is shown in a range of poses from profile to frontal views, each in a separate directory labelled 1a, 1b, … 1t, and images are numbered consecutively as they were taken. The files are all in PGM format, approximately 220 x 220 pixels, with 256 grey levels (8-bit).
Wechsler, H., Phillips, J. P., Bruce, V., Soulie, F. F., & Huang, T. S. (Eds.). (2012). Face recognition: From theory to applications (Vol. 163). Springer Science & Business Media.
Laboratory of Vision Engineering (LoVE), University of Lincoln
The authors grant the right to use the face database with the following restrictions:
A compendium of computer-generated synthetic faces.
References: varies
Alexander Todorov, University of Chicago
300 randomly generated faces parametrically manipulated to vary on their perceived value on social dimensions such as trustworthiness and dominance. These faces were generated by data-driven computational models.
525 faces manipulated on face shape: 25 (face identities) x 3 (trait dimensions: perceived dominance, threat, and trustworthiness) x 7 (parametric face manipulations, ranging from -3 to +3SD with a step of 1SD).
490 faces manipulated on face shape and orthogonally on perceived trustworthiness and dominance: 10 (face identities) x 7 (parametric face manipulations on perceived dominance, ranging from -3 to +3SD with a step of 1SD) x 7 (parametric face manipulations on perceived trustworthiness, ranging from -3 to +3SD with a step of 1SD).
3,675 faces manipulated on face shape and reflectance: 25 (face identities) x 7 (trait dimensions: perceived attractiveness, competence, dominance, extroversion, likability, threat, and trustworthiness) x 7 (parametric face manipulations, ranging from -3 to +3SD with a step of 1SD) x 3 (face race: Asian, Black, White).
13,125 faces manipulated on face shape and reflectance: 25 (face identities) x 7 (trait dimensions: perceived attractiveness, competence, dominance, extroversion, likability, threat, and trustworthiness) x 25 (parametric face manipulations, ranging from -3 to +3SD with a step of 0.25SD) x 3 (face race: Asian, Black, White).
4,000 faces used to build a model of attractiveness. Text files, data files, and Python and MATLAB scripts are also included.
1,400 faces manipulated on face shape and reflectance by gender-specific models built by Oh, Dotsch, Porter, & Todorov (2020): 25 (face identities) x 2 (gender models: for males and females) x 2 (trait dimensions: perceived dominance and trustworthiness) x 7 (parametric face manipulations, ranging from -3 to +3SD with a step of 1SD) x 2 (face gender: male and female).
350 faces manipulated on perceived competence controlling for attractiveness: 25 (face identities) x 7 (parametric face manipulations, ranging from -3 to +3SD with a step of 1SD) x 2 (models: attractiveness-subtracted and attractiveness-orthogonal).
The databases listed above are freely available to researchers who intend to conduct non-profit, academic research. Researchers who download the databases should use the stimuli for non-profit research only and should acknowledge the proper sources of the stimuli and any references relevant to the data set.
UB KinFace database is used to develop, test, and evaluate kinship verification and recognition algorithms. It comprises 600 images of 400 people, which can be separated into 200 groups. Each group is composed of child, young-parent, and old-parent images. Most of the images in the database are real-world collections of public figures (celebrities and politicians) from the Internet. To the best of our knowledge, it is the first database that contains children, young parents, and old parents for the purpose of kinship verification.
Ming Shao, Siyu Xia and Yun Fu, “Genealogical Face Recognition based on UB KinFace Database,” IEEE CVPR Workshop on Biometrics (BIOM), 2011.
Yun Raymond Fu, yunfu@ece.neu.edu
This dataset is for non-commercial research purposes only. The image copyright belongs to the original author or the media as listed in the following URL file. If you find this collection useful for your research, please cite the paper above.
This database contains 550 photos of US politicians who competed either in a gubernatorial race (248) or in a house race (302). The database also contains the politicians’ perceived competence from their photos, as measured in a forced choice competence judgement of participants unfamiliar with the politicians. As such, these judgments simply indicate perceptions and are in no way indicative of the actual competence of the politicians.
Alexander Todorov, University of Chicago
The database listed is freely available to researchers who intend to conduct non-profit, academic research. Researchers who download the databases should use the stimuli for non-profit research only and should acknowledge the proper sources of the stimuli and any references relevant to the data set.
The Yale Face Database contains 165 grayscale images in GIF format of 15 individuals. There are 11 images per subject, one per different facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink.
The Yale Face Database B (1GB) contains 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions).
It is free to use the data for research purposes. If experimental results are obtained that use images from within the database, all publications of these results should acknowledge the use of the "Yale Face Database".
The Yonsei Face Database (YFace DB) consists of both static and dynamic face stimuli for six basic emotions (happiness, sadness, anger, surprise, fear, and disgust). The database includes selected pictures (static stimuli) and film clips (dynamic stimuli) of 74 models (50% female) aged between 19 and 40.
Chung, K. M, Kim, S.J., Jung, W. H., & Kim, V. Y. (2019). Development and Validation of the Yonsei Face Database (Yface DB). Frontiers in Psychology, 10, 2626. https://doi.org/10.3389/fpsyg.2019.02626
Only PhD-holding faculty at a non-profit, degree-granting academic institution, or a representative of such an institution, may request use of the YFace DB.