Thursday, 07 June 2018


Face perception is the understanding and interpretation of the face, particularly the human face, especially in relation to the associated information processing in the brain.

The proportions and expressions of the human face are important for identifying origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in an individual's social interactions. Face perception is complex because recognizing facial expressions involves extensive and diverse areas of the brain. Sometimes, damage to particular parts of the brain can cause specific impairments in face perception, such as prosopagnosia.


Development

From birth, infants possess rudimentary face-processing capacities and show a keen interest in faces. For example, newborns (one to three days old) have been shown to recognize faces even when they are rotated up to 45 degrees. However, interest in faces is not constant throughout infancy; it rises and falls as the child grows older. Specifically, while newborns show a preference for faces, this behavior diminishes between one and four months of age. At around three months, the preference for faces re-emerges, and interest in faces appears to peak late in the first year before declining slowly over the next two years of life. The re-emergence of the face preference at three months may be influenced by the child's own motor abilities and experience. Infants as young as two days old are able to mimic adult facial expressions, displaying a capacity to register details such as mouth and eye shape and to move their own muscles in ways that produce similar patterns on their own faces.

Despite this ability, newborns are not yet aware of the emotional content encoded in facial expressions. Five-month-olds, when presented with an image of a person making a fearful expression and a person making a happy expression, pay the same amount of attention to both and show similar event-related potentials (ERPs) for each. When seven-month-olds are given the same treatment, however, they focus more on the fearful face, and their ERP for the fearful face shows a stronger initial negative central component than the one for the happy face. These results indicate an increased attentional and cognitive focus toward fear, reflecting the threat-salient nature of the emotion. Moreover, infants' negative central component does not differ for new faces that vary in the intensity of an emotional expression but depict the same emotion as a familiarized face, yet is stronger for faces showing a different emotion, indicating that seven-month-olds treat happy and sad faces as distinct emotional categories.

While seven-month-olds have been found to focus more on fearful faces, a study by Jessen, Altvater-Mackensen, and Grossmann found that happy expressions elicit heightened sympathetic arousal in infants, both when the facial expression is presented subliminally and when it is presented supraliminally, that is, in a way that makes the infant consciously aware of the stimulus. These results show that conscious awareness of a stimulus is not required for an infant's emotional reaction to it.

Face recognition is an important neurological mechanism that individuals use in society every day. Jeffrey and Rhodes write that faces "convey a wealth of information that we use to guide our social interactions". Emotions play a large role in those interactions: the perception of a positive or negative emotion on a face affects the way an individual perceives and processes that face. For example, a face perceived to have a negative emotion is processed less holistically than a face displaying a positive emotion. The ability to recognize faces is apparent even in early childhood; the neurological mechanisms responsible for face recognition are present by age five. Research shows that children process faces in a way similar to adults, but adults process faces more efficiently. The reason for this may be the advances in memory and cognitive functioning that occur with age.

Babies are able to understand facial expressions as social cues representing the feelings of other people before they are a year old. At seven months, the apparent object of a face's emotional reaction is relevant to how that face is processed. Infants at this age show a larger negative central component to angry faces that are looking directly at them than to angry faces looking elsewhere, although the direction of a fearful face's gaze produces no difference. In addition, two ERP components in the posterior part of the brain are differently aroused by the two negative expressions tested. These results indicate that infants at this age can at least partially understand the higher level of threat posed by anger directed at them compared to anger directed elsewhere. By at least seven months of age, infants can also use facial expressions to understand the behavior of others. Seven-month-olds will look to facial cues to understand the motives of other people in ambiguous situations, as shown by a study in which they watched an experimenter's face longer if she took a toy from them while maintaining a neutral expression than if she made a happy expression. Interest in the social world is increased by interaction with the physical environment: training three-month-old infants to reach for objects with Velcro-covered "sticky mittens" increases the amount of attention they pay to faces compared to passively moving objects through their hands and to untrained control groups.

Following from the idea that seven-month-olds have categorical understandings of emotion, they are also capable of associating emotional prosodies with corresponding facial expressions. When presented with a happy or angry face, shortly followed by an emotionally neutral word read in a happy or angry tone of voice, their ERPs follow different patterns. Happy faces followed by angry vocal tones produce more changes than the other incongruent pairing, while there is no difference between the happy and angry congruent pairings; the greater reaction implies that infants hold greater expectations of a happy vocal tone after seeing a happy face than of an angry tone following an angry face. Considering infants' relative immobility, and thus their decreased capacity to elicit negative reactions from their parents, this result implies that experience has a role in building comprehension of facial expressions.

Several other studies indicate that early perceptual experience is crucial to the development of the capacities characteristic of adult visual perception, including the ability to identify familiar people and to recognize and comprehend facial expressions. The capacity to discern between faces, much like language, appears to have a broad potential early in life that is whittled down to the kinds of faces experienced early on. Infants can discern between monkey faces at six months of age but, without continued exposure, cannot do so at nine months of age. Showing infants photographs of monkeys during this three-month period gave nine-month-olds the ability to reliably tell unfamiliar monkey faces apart.

The neural substrates of face perception in infants are likely similar to those of adults, but the limits of imaging technologies that are feasible for use with infants currently prevent very specific localization of function, as well as specific information from subcortical areas such as the amygdala, which is active in the perception of facial expression in adults. In a study of healthy adults, it was shown that faces are likely processed, in part, via a retinotectal (subcortical) pathway.

However, there is activity near the fusiform gyrus, as well as in occipital areas, when infants are exposed to faces, and it varies depending on factors including facial expression and the direction of eye gaze.

Adult

Recognizing and perceiving faces is a vital ability needed for coexisting in society. Faces can convey such things as identity, mood, age, sex, race, and the direction in which someone is looking. Studies based on neuropsychology, behavior, electrophysiology, and neuroimaging have supported the notion of a specialized mechanism for perceiving faces. Prosopagnosia patients provide neuropsychological support for such a mechanism because these people, due to brain damage, have deficits in face perception while their cognitive perception of objects remains intact. The face-inversion effect provides behavioral support: people tend to show greater deficits in task performance when asked to react to inverted faces than to inverted objects. Electrophysiological support comes from the finding that the N170 and M170 responses tend to be face-specific. Neuroimaging studies, such as PET and fMRI studies, have also shown support for a specialized face-processing mechanism by identifying regions of the fusiform gyrus that have higher activation during face-perception tasks than during other visual-perception tasks. Theories about the processes involved in adult face perception largely derive from two sources: research on normal adult face perception and the study of impairments in face perception caused by brain injury or neurological illness. Novel optical illusions such as the Flashed Face Distortion Effect, in which scientific phenomenology outpaces neurological theory, also provide areas for research.

One of the most widely accepted theories of face perception argues that understanding faces involves several stages: from basic perceptual manipulations of sensory information to derive details about the person (such as age, gender, or attractiveness), to the recall of meaningful details such as the person's name and any relevant past experiences of the individual.

This model (developed by the psychologists Vicki Bruce and Andrew Young) argues that face perception may involve several independent sub-processes working in unison. A "view-centred description" is derived from the perceptual input. Simple physical aspects of the face are used to work out age, gender, or basic facial expressions; most analysis at this stage is feature-by-feature. This initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory, and across views. After several exposures to a face, this structural code allows us to recognize that face in different contexts, which explains why the same person seen from a novel angle can still be recognized. This structural encoding appears to be specific to upright faces, as demonstrated by the Thatcher effect. The structurally encoded representation is transferred to "face recognition units" which, together with "person identity nodes", identify a person through information from semantic memory. The natural ability to produce someone's name when presented with their face has been shown in experimental research to be damaged in some cases of brain injury, suggesting that naming may be a separate process from retrieving other memorized information about a person, as sketched below.
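
As a rough illustration only, the staged, feed-forward character of the Bruce and Young model can be sketched as a small dataflow pipeline. Everything below (the toy "memory", the feature names, and the function names) is invented for this sketch; the model itself is psychological, not computational:

```python
# Toy long-term memory: structural codes of familiar faces mapped to
# "person identity nodes" (semantic information, with the name stored last).
KNOWN_FACES = {
    ("round", "wide-set"): {"occupation": "actor", "name": "A. Example"},
}

def structural_encoding(percept):
    """Feature-by-feature analysis of the view-centred percept,
    yielding a view-independent structural code."""
    return (percept["shape"], percept["eyes"])

def face_recognition_unit(code):
    """Fires when the structural code matches a stored familiar face."""
    return code in KNOWN_FACES

def person_identity_node(code):
    """Retrieves semantic memory about the recognized person."""
    return KNOWN_FACES[code]

def name_retrieval(identity):
    """Naming is modelled as a separate, final stage: brain injury can
    impair it while leaving the earlier stages intact."""
    return identity["name"]

percept = {"shape": "round", "eyes": "wide-set"}  # a hypothetical input
code = structural_encoding(percept)
if face_recognition_unit(code):
    identity = person_identity_node(code)
    print(identity["occupation"], "-", name_retrieval(identity))
```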

The study of prosopagnosia (an impairment in recognizing faces, usually caused by brain injury) has been especially helpful in understanding how normal face perception might work. Individuals with prosopagnosia can differ in their abilities to understand faces, and the investigation of these differences has suggested that several stage theories might be correct.

Face perception is an ability that involves many areas of the brain; however, some areas have been shown to be particularly important. Brain imaging studies typically show a great deal of activity in an area of the temporal lobe known as the fusiform gyrus, an area known to cause prosopagnosia when damaged (particularly when damage occurs on both sides). This evidence has led to a special interest in this area, which is for that reason often referred to as the fusiform face area (FFA).

Facial processing neuroanatomy

Several parts of the brain play a role in face perception. Rossion, Hanseeuw, and Dricot used fMRI BOLD mapping to identify activation in the brain when subjects viewed both cars and faces. The majority of BOLD fMRI studies use blood-oxygen-level-dependent (BOLD) contrast to determine which areas of the brain are activated by various cognitive functions. They found that the occipital face area, located in the occipital lobe, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe all played roles in contrasting faces from cars, with initial face perception beginning in the fusiform face area and occipital face areas. This entire region forms a network that acts to distinguish faces. The processing of faces in the brain is known as a "sum of parts" perception. However, the individual parts of the face must be processed first in order for all of the pieces to be put together. In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces.

Furthermore, Arcurio, Gold, and James used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly. The occipital face area is activated by the visual perception of single features of the face, for example, the nose and mouth, and by the preferred combination of two eyes over other combinations. This research supports the idea that the occipital face area recognizes the parts of the face in the early stages of recognition. By contrast, the fusiform face area shows no preference for single features, because it is responsible for "holistic/configural" information, meaning that it puts all of the processed pieces of the face together in later processing. This theory is supported by the work of Gold et al., who found that, regardless of the orientation of a face, subjects were influenced by the configuration of the individual facial features. Subjects were also influenced by the coding of the relationships between those features. This shows that processing is done by a summation of the parts in the later stages of recognition.

Face perception has well-identified neuroanatomical correlates in the brain. During the perception of faces, major activations occur in the extrastriate areas bilaterally, particularly in the fusiform face area, the occipital face area (OFA), and the superior temporal sulcus (fSTS). Perceiving an inverted human face involves increased activity in the inferior temporal cortex, while perceiving a misaligned face involves increased activity in the occipital cortex. None of these results were found when perceiving a dog's face, however, suggesting that these processes may be specific to the perception of human faces.

The fusiform face area is located in the lateral fusiform gyrus. It is thought that this area is involved in the holistic processing of faces and is sensitive to the presence of facial parts as well as to the configuration of those parts. The fusiform face area is also necessary for successful face detection and identification. This is supported by fMRI activation studies and by studies of prosopagnosia, which involves lesions in the fusiform face area.

The OFA is located in the inferior occipital gyrus. Similar to the FFA, this area is also active during successful face detection and identification, a finding supported by fMRI activation. The OFA is involved in, and necessary for, the analysis of facial parts but not for the spacing or configuration of those parts. This suggests that the OFA may be involved in a facial processing step that occurs prior to FFA processing.

The fSTS is involved in the recognition of facial parts and is not sensitive to the configuration of those parts. This area is also thought to be involved in gaze perception: the fSTS has shown increased activation when attending to gaze direction. Recent studies have examined whether the fSTS also activates when people use the gaze of others to establish joint attention; it turns out that not the fSTS itself but other areas (gaze-following patches close to, but not overlapping with, the fSTS) form the core of the human gaze-following system.

Bilateral activation is generally shown in all of these specialized facial areas. However, some studies report greater activation on one side than the other. For example, McCarthy (1997) showed that the right fusiform gyrus is more important for facial processing in complex situations.

Gorno-Tempini and Price have shown that the fusiform gyri are preferentially responsive to faces, whereas the parahippocampal/lingual gyri are responsive to buildings.

It is important to note that while certain areas respond selectively to faces, facial processing involves many neural networks, including visual and emotional processing systems. Research on the processing of emotional faces has shown that some of these other functions are at work. While looking at faces displaying emotions (especially those with fearful expressions), compared to neutral faces, there is increased activity in the right fusiform gyrus. This increased activity also correlates with increased amygdala activity in the same situations. The emotional processing effects observed in the fusiform gyrus are decreased in patients with amygdala lesions. This points to a possible connection between the amygdala and the facial processing areas.

Another aspect that affects both fusiform gyrus and amygdala activation is the familiarity of faces. Having multiple regions that can be activated by similar face components indicates that facial processing is a complex process. Platek and Kemp (2009) further demonstrated increased brain activation in the precuneus and cuneus when differentiating between two faces is easy (e.g., kin and familiar non-kin faces), and a role for posterior medial substrates in the visual processing of faces with familiar features (e.g., a sibling's face).

Ishai and colleagues have proposed the object form topology hypothesis, which posits a topological organization of the neural substrates for object and face processing. However, Gauthier disagrees and suggests that the category-specific and process-map models could accommodate most other proposed models for the neural underpinnings of facial processing.

Most of the neuroanatomical substrates for facial processing are perfused by the middle cerebral artery (MCA). Therefore, facial processing has been studied using measurements of mean cerebral blood-flow velocity in the middle cerebral arteries bilaterally. During facial recognition tasks, greater changes have been observed in the right middle cerebral artery (RMCA) than in the left (LMCA). It has been demonstrated that men are right-lateralized and women left-lateralized during facial processing tasks.

Just as memory and cognitive function separate the abilities of children and adults to recognize faces, the familiarity of a face may also play a role in face perception. Zheng, Mondloch, and Segalowitz recorded event-related potentials in the brain to determine the timing of face recognition. The results showed that familiar faces are indexed and recognized by a stronger N250, a specific waveform response that plays a role in the visual memory of faces. Similarly, Moulson et al. found that all faces elicit an N170 response in the brain.

Using fMRI alongside single-unit electrophysiological recordings, Doris Tsao's group revealed a code that the macaque brain uses to process faces. The brain conceptually needs only about 50 neurons to encode any human face, with facial features projected onto individual axes (neurons) in a 50-dimensional "face space".
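
As a purely illustrative sketch of such a linear population code (the axis matrix, the dimensionality of 50, and the random "faces" below are invented stand-ins, not Tsao's fitted data), each model neuron fires in proportion to the projection of a face onto its preferred axis, and the face can be reconstructed by inverting the linear code:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 50                                   # dimensionality of the toy face space
axes = rng.standard_normal((D, D))       # row i: the preferred axis of "neuron" i

def encode(face):
    """Each neuron's firing rate is the linear projection of the face
    (a point in face space) onto that neuron's axis."""
    return axes @ face

def decode(rates):
    """Because the code is linear, the face can be recovered by
    solving the linear system axes @ face = rates."""
    return np.linalg.solve(axes, rates)

face = rng.standard_normal(D)            # a random "face"
rates = encode(face)                     # ~50 numbers suffice to describe it
assert np.allclose(decode(rates), face)  # exact linear reconstruction
```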

Hemispheric asymmetry in facial processing capability

The mechanisms underlying gender-related differences in facial processing have not been studied extensively.

Studies using electrophysiological techniques have shown gender-related differences during a face recognition memory (FRM) task and a facial affect identification task (FAIT). Male subjects used a right-hemisphere, and female subjects a left-hemisphere, neural activation system in the processing of faces and facial affect. Moreover, in facial perception there was no association with estimated intelligence, suggesting that face recognition performance in women is unrelated to several basic cognitive processes. Gender-related differences may suggest a role for sex hormones: in women there may be variability in psychological functions related to differences in hormone levels during different phases of the menstrual cycle.

Data obtained from normal and pathological populations support asymmetric face processing. Gorno-Tempini and others in 2001 suggested that the left inferior frontal cortex and the bilateral occipitotemporal junction respond equally to all face conditions. Some neuroscientists contend that both the left inferior frontal cortex (Brodmann area 47) and the occipitotemporal junction are implicated in facial memory. The right inferior temporal/fusiform gyrus responds selectively to faces but not to non-faces. The right temporal pole is activated during the discrimination of familiar faces and scenes from unfamiliar ones. Right asymmetry in the mid-temporal lobe for faces has also been shown using 133-Xenon measurements of cerebral blood flow (CBF). Other investigators have observed right lateralization for facial recognition in previous electrophysiological and imaging studies.

The implication of the observed asymmetry in facial perception is that different hemispheric strategies are implemented: the right hemisphere would be expected to use a holistic strategy, and the left an analytic strategy. In 2007, Philip Njemanze, using a functional transcranial Doppler (fTCD) technique called functional transcranial Doppler spectroscopy (fTCDS), showed that men were right-lateralized for object and facial perception, while women were left-lateralized for facial tasks but showed a right tendency or no lateralization for object perception. Njemanze demonstrated, using fTCDS, a summation of responses related to the complexity of facial stimuli, which could be presumed as evidence of topological organization of these cortical areas in men. This may suggest that the latter extends from the area implicated in object perception to a much greater area involved in facial perception.

This agrees with the object form topology hypothesis proposed by Ishai and colleagues in 1999. However, the relatedness of object and facial perception is process-based, and appears to be associated with their common holistic processing strategy in the right hemisphere. Moreover, when the same men were presented with a facial paradigm requiring analytic processing, the left hemisphere was activated. This agrees with the suggestion made by Gauthier in 2000 that the extrastriate cortex contains areas best suited for different computations, described as the process-map model. Therefore, the proposed models are not mutually exclusive, and this underscores the fact that facial processing does not impose new constraints on the brain other than those used for other stimuli.

It could be suggested that each stimulus is mapped by category into face or non-face, and by process into holistic or analytic. Therefore, a unified category-specific process-mapping system is implemented for either right or left cognitive styles. Njemanze in 2007 concluded that, for facial perception, men use a category-specific process-mapping system for a right cognitive style, while women use the same system for a left cognitive style.

Cognitive neuroscience

The cognitive neuroscientists Isabel Gauthier and Michael Tarr are two of the major proponents of the view that face recognition involves expert discrimination of similar objects (see the Perceptual Expertise Network). Other scientists, in particular Nancy Kanwisher and her colleagues, argue that face recognition involves processes that are face-specific and that are not recruited by expert discrimination in other object classes (see domain specificity).

Studies by Gauthier have shown that an area of the brain known as the fusiform gyrus (sometimes called the fusiform face area because it is active during face recognition) is also active when study participants are asked to discriminate between different types of birds and cars, and even when participants become expert at distinguishing computer-generated nonsense shapes known as greebles. This suggests that the fusiform gyrus may have a general role in the recognition of similar visual objects. Yaoda Xu, then a post-doctoral fellow with Nancy Kanwisher, replicated the car and bird expertise study using an improved fMRI design that was less susceptible to attentional accounts.

The activity found by Gauthier when participants viewed non-face objects was not as strong as when they viewed faces; however, this could be because we have much more expertise with faces than with most other objects. Furthermore, not all findings of this research have been successfully replicated; for example, other research groups using different study designs have found that the fusiform gyrus is specific to faces and that other nearby regions deal with non-face objects.

However, these failures to replicate are difficult to interpret, because studies vary on too many aspects of method. It has been argued that some studies test experts with objects slightly outside their domain of expertise. More to the point, failures to replicate are null effects and can occur for many different reasons. In contrast, each replication adds a great deal of weight to a particular argument. With regard to "face-specific" effects in neuroimaging, there are now multiple replications with greebles, with birds and cars, and two unpublished studies with chess experts.

Although expertise is sometimes found to recruit the FFA (e.g., as hypothesized by a proponent of that view in the preceding paragraph), a more common and less controversial finding is that expertise leads to focal category-selectivity in the fusiform gyrus, a pattern similar, in terms of antecedent factors and neural specificity, to that seen for faces. As such, it remains an open question whether face recognition and expert-level object recognition recruit similar neural mechanisms across different fusiform subregions, or whether the two domains literally share the same neural substrates. Moreover, at least one study has argued that the question of whether expertise-predicated category-selective areas overlap with the FFA is nonsensical, because multiple measurements of the FFA within an individual often overlap no more with each other than do measurements of the FFA and expertise-predicated regions. At the same time, numerous studies have failed to replicate these effects altogether. For example, four published fMRI studies have asked whether expertise has any specific connection to the FFA in particular, by testing for expertise effects both in the FFA and in a nearby but non-face-selective region called the LOC (Rhodes et al., JOCN 2004; Op de Beeck et al., JN 2006; Moore et al., JN 2006; Yue et al., VR 2006). In all four studies, expertise effects were significantly stronger in the LOC than in the FFA; indeed, expertise effects were only marginally significant in the FFA in two of the studies, while the effects were robust and significant in the LOC in all four studies.

Therefore, it remains unclear in exactly which situations the fusiform gyrus becomes active, although it is certain that face recognition relies heavily on this area and that damage to it can lead to severe face recognition impairment.

Self-perception

Studies of face perception have also looked specifically at self-face perception. One study found that the perception and recognition of one's own face was unaffected by changing contexts, while the perception and recognition of familiar and unfamiliar faces was adversely affected. Another study, focused on older adults, found that they had a self-face advantage in configural processing but not in featural processing.

Face advantage in memory recall

During face perception, neural networks make connections with the brain to recall memories. According to the seminal model of face perception, there are three stages of face processing: recognition of the face, recall of memories and information linked to that face, and finally name recall. There are, however, exceptions to this order. For example, names are recalled faster than semantic information in cases of highly familiar stimuli. While the face is a powerful identifier of individuals, the voice also helps in recognizing people and is an identifier of important information.

Research has been conducted to see whether faces or voices make it easier to identify individuals and recall semantic memory and episodic memory. These experiments look at all three stages of face processing. The experimental method is to show two groups celebrity and familiar faces or voices, using a between-group design, and ask the participants to recall information about them. The participants are first asked whether the stimulus is familiar. If they answer yes, they are asked for information (semantic memory) and memories they have of the person (episodic memory) that fit the face or voice presented. These experiments all demonstrate the strong phenomenon of the face advantage and how it persists through different follow-up studies with different experimental controls and variables.

Recognition performance issue

After the first experiments on the advantage of faces over voices in memory recall, errors and gaps were found in the methods used. For one, there was no clear face advantage at the face recognition stage: participants showed familiarity-only responses more often to voices than to faces. In other words, when voices were recognized (only around 60-70% of the time), it was much harder to recall biographical information about them, even though the sense of familiarity was strong. This shows up in "remember" versus "know" judgments: many more "know" (familiarity) responses occurred with voices, and more "remember" (recollection) responses occurred with faces. This phenomenon persists through experiments related to criminal line-ups in prisons: witnesses are more likely to say that a suspect's voice sounded familiar than that his face looked familiar, even though they cannot remember anything specific about the suspect. This discrepancy is due to the larger amount of guesswork and false alarms that occur with voices.

To give faces an ambiguity similar to that of voices, the face stimuli were blurred in follow-up experiments. These experiments followed the same procedure as the first, presenting two groups with stimulus sets composed half of celebrity faces and half of unfamiliar faces. The only difference was that the face stimuli were blurred so that detailed features could not be seen. Participants were then asked to say whether they recognized the person, whether they could recall specific biographical information about them, and finally whether they knew the person's name. The results were completely different from those of the original experiment, supporting the view that there were problems in the first experiment's method. According to the follow-up results, the same amount of information and memories could be recalled through voices and faces, dismantling the face advantage. However, these results were flawed and premature, because other methodological issues in the experiment still needed to be fixed.

Speech content

Controlling the content of speech extracts has proved more difficult than eliminating non-facial cues from photographs. Experimental findings that do not control this factor therefore lead to misleading conclusions regarding voice recognition relative to face recognition. For example, in one experiment it was found that 40% of the time participants could easily pair a celebrity's voice with their occupation just by guessing. To eliminate these errors, researchers removed parts of the voice samples that could possibly give clues to the target's identity, such as catchphrases. Even after controlling the voice samples as well as the face samples (using blurred faces), studies have shown that semantic information is more accessible when individuals recognize faces than when they recognize voices.

Another technique for controlling the content of the speech extracts is to present the faces and voices of personally familiar individuals, such as the participants' teachers or neighbors, instead of the faces and voices of celebrities. In this way the same words can be used for the speech extracts: the familiar targets are asked to read exactly the same scripted speech for their voice extracts. The results showed again that semantic information is easier to retrieve when individuals recognize faces than when they recognize voices.

Frequency-of-exposure problem

Another factor that must be controlled for reliable results is the frequency of exposure. If we take the example of celebrities, people are exposed to celebrities' faces more often than to their voices because of the mass media. Through magazines, newspapers, and the Internet, people are exposed daily to celebrities' faces without their voices rather than to their voices without their faces. Thus, one could argue that for all of the experiments done to date, the findings were a result of the frequency of exposure to the faces of celebrities rather than to their voices.

To solve this problem, researchers decided to use personally familiar individuals as stimuli instead of celebrities. Personally familiar individuals, such as the participants' teachers, are both heard and seen a great deal. Studies that used this type of control also demonstrated the face advantage: students could retrieve semantic information more readily when recognizing their teachers' faces (both normal and blurred) than their voices.

However, researchers over the years have found an even more effective way to control not only the frequency of exposure but also the content of the speech extracts: the associative learning paradigm. Participants are asked to associate semantic information and names with experimentally unfamiliar voices and faces. In a current experiment that used this paradigm, a name and an occupation were given together with a voice, a face, or both to three groups of participants. The associations were repeated four times. The next step was a cued recall task in which every stimulus learned in the previous phase was presented and participants were asked to state the occupation and the name for each stimulus. Again, the results showed that semantic information is more accessible when individuals recognize faces than voices, even when the frequency of exposure is controlled.

Extensions to episodic memory and explanations for existence

Episodic memory is our ability to remember specific, previously experienced events. In face recognition, as it relates to episodic memory, there is evidence of activation in the left lateral prefrontal cortex, the parietal lobe, and the left frontal/anterior cingulate cortex. It was also found that left lateralization during episodic memory retrieval in the parietal cortex correlated strongly with retrieval success. This may be due to the hypothesis that the link between face recognition and episodic memory is stronger than that between voice and episodic memory. This hypothesis is also supported by the existence of specialized face recognition mechanisms thought to be located in the temporal lobes. There is also evidence of two separate neural systems for face recognition: one for familiar faces and another for newly learned faces. One explanation for this link between face recognition and episodic memory is that, since face recognition is a major part of human existence, the brain creates a link between the two in order to better communicate with others.

Ethnicity

Differences in own-race versus other-race face recognition and perceptual discrimination were first researched in 1914. Humans tend to perceive people of other races than their own as all looking alike:

Other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. Thus, to the uninitiated American, all Asiatics look alike, while to the Asiatics, all White men look alike.

This phenomenon is known primarily as the cross-race effect, but it is also called the own-race effect, other-race effect, own-race bias, or interracial face-recognition deficit.

In 1990, Mullen reported finding evidence that the other-race effect is larger among white subjects than among African American subjects, whereas Brigham and Williamson (1979, cited in Shepherd, 1981) obtained the opposite pattern. However, it is difficult to measure the true influence of the cross-race effect. D. Stephen Lindsay and colleagues note that the results in these studies could be due to an intrinsic difficulty in recognizing the faces presented, an actual difference in the size of the cross-race effect between the two test groups, or some combination of these two factors. Shepherd reviewed studies that found better performance on both African and white faces, as well as other studies in which no difference was found. Overall, Shepherd reported a reliable positive correlation between the size of the effect and the amount of interaction subjects had with members of the other race. This correlation reflects the fact that African American subjects, who performed equally well on faces of both races in Shepherd's study, almost always responded with the highest possible self-rating of the amount of interaction with white people (M = 4.75, with 5 indicating the most interaction with people of that race and 1 the least), whereas their white counterparts showed a larger other-race effect and reported less other-race interaction (M = 2.13). This difference in ratings was found to be statistically reliable, t(30) = 7.86, p < .01.

The cross-race effect seems to appear in humans at around six months of age. Cross-race effects can be changed from early childhood through adulthood via interaction with people of other races. Experience with other-race faces is a major influence on the cross-race effect (O'Toole et al., 1991; Slone et al., 2000; Walker & Tanaka, 2003). In a series of studies, Walker and colleagues showed that participants with greater other-race experience were consistently more accurate at discriminating between other-race faces than participants with less other-race experience (Walker & Tanaka, 2003; Walker & Hewstone, 2006a, b; 2007). Many current models of the cross-race effect assume that holistic face-processing mechanisms are more fully engaged when viewing own-race faces than other-race faces.

The own-race effect appears to be related to an increased ability to extract information about the spatial relationships between facial features. Daniel T. Levin writes that the deficit occurs when viewing people of other races because visual information specifying race takes up mental attention at the expense of individuating information when recognizing faces of other races. Further research using perceptual tasks could shed light on the specific cognitive processes involved in the other-race effect. Bernstein et al. (2007) suggest that the own-race effect likely extends beyond racial membership to in-group versus out-group concepts more generally: it was shown that categorizing a person by the university they attend produces results comparable to studies of the own-race effect. Hugenberg, Miller, and Claypool (2007) shed light on overcoming the own-race effect: they conducted a study in which they introduced people to the concept of the own-race effect before presenting faces to them, and found that if people were made aware of the own-race effect prior to the experiment, the test subjects showed significantly less, if any, own-race effect.

Studies on adults have also shown sex differences in face recognition: men tend to recognize fewer female faces than women do, whereas there are no sex differences with regard to male faces.

In individuals with autism spectrum disorder

Autism spectrum disorder (ASD) is a neurodevelopmental disorder that produces many deficits, including social, communicative, and perceptual deficits. Individuals with autism exhibit difficulties with various aspects of face perception, including recognition of facial identity and recognition of emotional expressions. These deficits are suspected to be a product of abnormalities occurring in both the early and late stages of facial processing.

Speed and method

People with ASD process face and non-face stimuli at the same speed. In typically developing individuals, there is a preference for face processing, which results in faster processing speeds for faces than for non-face stimuli. These individuals primarily use holistic processing when perceiving faces. By contrast, individuals with ASD employ part-based or bottom-up processing, focusing on individual features rather than the face as a whole. When focusing on the individual parts of the face, people with ASD direct their gaze mainly to the lower half of the face, specifically the mouth, which differs from the eye-trained gaze of typically developing people. This deviation from holistic face processing does not employ face prototypes, which are templates stored in memory that make retrieval easy.

Additionally, individuals with ASD display difficulties with recognition memory, specifically memory that aids in identifying faces. This memory deficit is selective for faces and does not extend to other objects or visual inputs. Some evidence lends support to the theory that these face-memory deficits are a product of interference between the connections of face-processing regions.

Related difficulties

The atypical face-processing style of people with ASD often manifests in constrained social ability, due to decreased eye contact, joint attention, interpretation of emotional expression, and communicative skills. These deficiencies can be seen in infants as young as nine months, specifically in terms of poor eye contact and difficulty engaging in joint attention. Some experts have even used the term "face avoidance" to describe the phenomenon whereby infants who are later diagnosed with ASD preferentially attend to non-face objects over faces. Furthermore, some have proposed that the demonstrated impairment in the ability of children with ASD to grasp the emotional content of faces is not a reflection of an incapacity to process emotional information, but rather the result of a general inattentiveness to facial expressions. The constraints on these processes, which are essential to the development of communicative and social-cognitive abilities, are viewed as the cause of impaired social engagement and responsiveness. Furthermore, research suggests that there is a link between decreased face-processing abilities in individuals with ASD and later deficits in theory of mind; for example, while typically developing individuals are able to relate others' emotional expressions to their actions, individuals with ASD do not demonstrate this skill to the same extent.

There is some contention about the cause of this, however, which resembles a chicken-or-egg dispute. Some theorize that social impairment leads to perceptual problems rather than vice versa. In this perspective, a biological lack of social interest inherent to ASD inhibits the development of facial recognition and perception processes due to underutilization. Further research is needed to determine which theory is best supported.

Neurology

Many of the obstacles that individuals with ASD face in terms of facial processing may be derived from abnormalities in the fusiform face area and amygdala, which have been shown to be important in face perception, as discussed above. Typically, the fusiform face area in individuals with ASD has reduced volume compared to typically developing persons. This volume reduction has been attributed to deviant amygdala activity that does not flag faces as emotionally salient and thus decreases activation levels of the fusiform face area. This hypoactivity in the fusiform face area has been found in several studies.

Studies are not conclusive as to which brain areas people with ASD use instead. One study found that, when looking at faces, people with ASD exhibit activity in brain regions normally active when typically developing individuals perceive objects. Another study found that during face perception, people with ASD use different neural systems, with each individual using their own unique neural circuitry.

Compensation mechanisms

As individuals with ASD age, scores on behavioral tests assessing the ability to perform face-emotion recognition increase to levels similar to those of controls. Yet it is apparent that the recognition mechanisms of these individuals are still atypical, though often effective. In terms of face identity recognition, compensation can take many forms, including a pattern-based strategy first seen in face inversion tasks. Alternatively, evidence suggests that older individuals compensate by mimicking other people's facial expressions and relying on the motor feedback of their own facial muscles for face-emotion recognition. These strategies help overcome the obstacles individuals with ASD face in interacting within social contexts.

In individuals with schizophrenia

Attention, perception, memory, learning, processing, reasoning, and problem solving are known to be affected in individuals with schizophrenia. Schizophrenia has been linked to impaired face and emotion perception. People with schizophrenia demonstrate worse accuracy and slower response times in face perception tasks in which they are asked to match faces, remember faces, and recognize which emotions are present in a face. People with schizophrenia have more difficulty matching upright faces than they do inverted faces. A reduction in configural processing, using the distances between features of an item for recognition or identification (e.g., features on a face such as the eyes or nose), has also been linked to schizophrenia. Schizophrenia patients are readily able to identify a "happy" affect but struggle to identify faces as "sad" or "fearful". Impairments in face and emotion perception are linked to impairments in social skills, due to the individual's inability to distinguish facial emotions. People with schizophrenia tend to demonstrate a reduced N170 response, atypical face-scanning patterns, and configural-processing dysfunction. The severity of schizophrenia symptoms has been found to correlate with the severity of impairment in face perception.

Individuals diagnosed with schizophrenia and antisocial personality disorder have been found to have even more impairment in face and emotion perception than individuals with schizophrenia alone. These individuals struggle to identify anger, surprise, and disgust. There is a link between aggression and emotion-perception difficulties for people with this dual diagnosis.

Data from magnetic resonance imaging and functional magnetic resonance imaging have shown that smaller volumes of the fusiform gyrus are associated with greater disturbance in facial perception.

There is a positive correlation between difficulties in self-face recognition and in the recognition of other faces in individuals with schizophrenia. The degree of schizotypy has also been shown to correlate with self-face difficulties, unusual perception difficulties, and other face-recognition difficulties. Schizophrenia patients report more feelings of strangeness when looking in a mirror than do normal controls. Hallucinations, somatic concerns, and depression have all been found to be associated with self-face perception difficulties.

In animals

The neurobiologist Jenny Morton and her team have been able to teach sheep to choose a familiar face over an unfamiliar one when presented with two photographs, which has led to the discovery that sheep can recognize human faces. Archerfish (distant relatives of humans) were able to differentiate between forty-four different human faces, which supports the theory that neither a neocortex nor a history of discerning human faces is necessary to do so. Pigeons were found to use the same parts of the brain as humans do to distinguish between happy and neutral faces or between male and female faces.

Artificial

Much effort has been put into developing software that can recognize human faces. Much of this work has been done by a branch of artificial intelligence known as computer vision, which uses findings from the psychology of face perception to inform software design. Recent breakthroughs using non-invasive functional transcranial Doppler spectroscopy, as demonstrated by Njemanze (2007), to locate specific responses to facial stimuli have led to improved systems for facial recognition. The new system uses input responses called cortical long-term potentiation (CLTP), derived from Fourier analysis of mean blood flow velocity, to trigger a target face search from a computerized face database system. Such a system provides a brain-machine interface for facial recognition, and the approach has been referred to as cognitive biometrics.
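
By way of illustration, the conventional computer-vision side of this work typically begins with face detection. Below is a minimal sketch using OpenCV's bundled pre-trained Haar-cascade detector (a standard OpenCV facility; the input filename is a placeholder):

```python
# Minimal face-detection sketch using OpenCV's bundled Haar cascade.
# "photo.jpg" is a placeholder input image.
import cv2

# Load the pre-trained frontal-face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales; each detection is (x, y, width, height).
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", img)
print(f"Detected {len(faces)} face(s)")
```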

Another interesting application is the estimation of a person's age from face images. As an important hint for human communication, face images contain a wealth of useful information, including gender, expression, and age. Unfortunately, compared with other cognition problems, age estimation from face images is still very challenging. This is mainly because the aging process is influenced not only by a person's genes but also by many external factors: physical condition, lifestyle, and so on may accelerate or slow the aging process. Besides, since the aging process is slow and of long duration, collecting sufficient data for training is fairly demanding work.
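
To make the task framing concrete, age estimation is typically posed as supervised regression from face features to an age label. The toy sketch below uses synthetic feature vectors in place of real face embeddings and labeled photos; every number and dimension in it is an invented stand-in:

```python
# Toy framing of age estimation as supervised regression on face features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_features = 500, 128          # e.g. 128-d face embeddings (synthetic)
X = rng.standard_normal((n_samples, n_features))
true_w = rng.standard_normal(n_features)
# Ages depend on the features plus noise standing in for the "external
# factors" (lifestyle, health) that the text notes make the task hard.
ages = 35 + X @ true_w + rng.normal(0, 5, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, ages, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)   # fit a regularized linear regressor
pred = model.predict(X_te)
print("mean absolute error (years):", np.abs(pred - y_te).mean())
```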

Genetics

Source of the article: Wikipedia
