When we meet someone, our response depends on whether we know who they are. Even unfamiliar faces contain information that provides quick clues to a range of attributes, from honesty (Fenske et al., 2005) to aggression (Lefevre and Lewis, 2014). When we know a person, representations stored in long-term memory are also accessed, providing information that can shape these instantaneous impressions. While early models of face processing focused heavily on semantic memory as the source of person information (e.g., Bruce and Young, 1986), more recent neuroanatomical accounts have highlighted the additional importance of episodic memory (Gobbini and Haxby, 2007). This combination of face-processing models and memory research raises an interesting question: how does episodic memory contribute to person knowledge? To address this issue we present a study using neural markers of episodic memory retrieval. Before describing our study, we briefly introduce the key features of face-processing models, the retrieval processes that support episodic memory, and the brain signals that can be used to study them.
The experience of recognizing a face but being unable to identify the person is common, and has inspired ideas about how person identification is achieved in both the face recognition (Bruce and Young, 1986) and episodic memory (Mandler, 1980) literatures. Common to both classes of model is the idea that recognition and identification are supported by different processes. Perceptual models of face processing (e.g., Green et al., 2000; Bruce and Young, 1986; Burton et al., 1990) converge on the view that face recognition occurs when incoming sensory information matches a stored memory representation, and that person identification occurs when biographical information is retrieved. Complementary neuroanatomical models (Gobbini and Haxby, 2007; Haxby et al., 2000) describe a core system involved in the visual analysis of faces (supporting recognition) and an extended system involved in retrieving person information (supporting identification). Importantly, the extended system explicitly incorporates episodic memory as part of person knowledge (see Ferreira et al. (2015) and Lundstrom et al. (2005)), alongside semantic representations. What face-processing models do not explain, however, is how episodic memory contributes to person knowledge.
Models of episodic memory describe two retrieval processes: recollection and familiarity (Mandler, 1980; Jacoby and Dallas, 1981; Tulving, 1985; Yonelinas, 1994). Recollection involves retrieval of contextual information that was present at encoding, whereas familiarity merely reflects a sense of prior experience. These two retrieval processes are dissociated on a number of grounds, including their differential sensitivity to experimental manipulations (see Yonelinas, 2002) and their different patterns of forgetting (Sadeh et al., 2016). The aim of the current study is to determine whether episodic memory contributes to person knowledge via recollection or familiarity. Importantly, the two retrieval processes are associated with distinct brain signals. Event-related potentials (ERPs) have been widely used to investigate the ability to discriminate between recently studied and unstudied stimuli. ERP findings provide strong evidence for two recognition memory processes (Rugg and Curran, 2007). Studies, mainly using verbal stimuli, have identified ERP correlates of familiarity and recollection: the mid-frontal and left parietal old/new effects, respectively. This standard account has, however, been challenged in two respects, by claims that the mid-frontal effect actually indexes conceptual priming (Voss et al., 2010) and that recognition memory for unfamiliar faces elicits an atypical, earlier frontal effect (MacKenzie and Donaldson, 2007; MacKenzie and Donaldson, 2009; Galli and Otten, 2011). Importantly, the current study examines memory for famous faces, which has been shown to elicit the typical left parietal effect (Nie et al., 2014). In this context, ERPs provide a robust method for measuring the contribution of episodic retrieval. In addition, the high temporal resolution of ERPs can help to differentiate processes thought to occur in sequence, such as face recognition and person identification.
To preview the famous face recognition test used here: participants were shown a series of faces and classified each one as familiar, identified, or unknown. A familiar face is one that is recognized but cannot be identified, whereas identification requires retrieval of person-specific information, such as the person's name or occupation. These response options are inspired by Tulving's (1985) Remember/Know procedure, in which Remember and Know responses provide indices of recollection and familiarity, respectively. The Remember/Know procedure has been used to investigate whether semantic memories have autobiographical content in behavioral studies of famous words (Westmacott and Moscovitch, 2003) and famous faces (Damjanovic and Hanley, 2007). Here we use a modified version of Tulving's procedure, combined with ERP correlates of retrieval processing, to identify which episodic retrieval processes (recollection and/or familiarity) support the recognition and identification of famous faces. According to Gobbini and Haxby's (2007) model, episodic memory supports person identification via the extended system but not face recognition via the core system. Thus, brain signals associated with episodic retrieval processes (recollection or familiarity) should be observed only for faces that are identified, and not for faces that are recognized without being identified. The key question is which of the two brain signals associated with episodic retrieval will be observed.
Materials and methods
The experimental design and procedures were in accordance with the tenets of the Declaration of Helsinki and were approved by the University of Stirling Psychology Ethics Committee. Twenty-eight right-handed participants reported normal or corrected-to-normal vision and were paid £5 per hour. The sample size was determined with reference to the typical sample sizes of EEG recognition memory studies reported in the literature. Data from 8 participants were discarded because of an insufficient number of responses in one or more experimental conditions or because the EEG was contaminated by artifacts. Data from the remaining 20 participants (13 women), with a mean age of 21 years (range: 18-31), were used to form the grand average ERPs reported here.
Faces were displayed on a 17″ LCD monitor; stimuli were presented and behavioral data recorded using E-Prime (Psychology Software Tools; www.pstnet.com). Participants sat approximately one meter from the monitor, with a button box on the desk in front of them. All faces were of celebrities chosen to be recognizable to a group of graduate students at the University of Stirling. The celebrities included actors (e.g., Jennifer Aniston, Al Pacino), musicians (e.g., Kylie, David Bowie), politicians (e.g., Hillary Clinton, Alex Salmond), television personalities (e.g., Oprah, Terry Wogan) and members of the British royal family. The full set of identities was selected with the aim of capturing a spectrum from highly famous to less well-known people. Face images were obtained via an online image search. All images were cropped to remove the hair and placed against a black background, before being resized and positioned at the center of the display. Faces subtended a horizontal visual angle of 2° and a vertical visual angle of 5°.
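As a rough illustration only (this calculation is not part of the original methods), the physical size implied by the reported visual angles can be recovered from the approximate one-meter viewing distance using the standard visual-angle formula; the viewing distance below is an assumption taken from the "about a meter" description.

```python
import math

def size_from_visual_angle(angle_deg: float, distance_cm: float) -> float:
    """Return the stimulus extent (cm) that subtends angle_deg at distance_cm."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

viewing_distance_cm = 100  # assumed: "about a meter" from the monitor
width_cm = size_from_visual_angle(2, viewing_distance_cm)   # ~3.5 cm horizontal
height_cm = size_from_visual_angle(5, viewing_distance_cm)  # ~8.7 cm vertical
print(f"Faces spanned roughly {width_cm:.1f} x {height_cm:.1f} cm on screen")
```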
Grayscale images of 200 different celebrities were presented across 4 blocks of 50 faces. Each face appeared at the center of the screen for 500 msec and was followed by a blank screen, during which participants made one of three responses: identified, familiar, or unknown. Participants were instructed to make an identified response when they recognized the face and could retrieve specific personal information about the person (such as their name, the name of a character they had played, or a film they had appeared in) that would be sufficient to identify them. A familiar response was required if the face was recognized but the person could not be identified; finally, an unknown response was required when the face was not recognized. After an identified response, a visual prompt asked the participant to identify the person verbally. Any trials on which participants were unable to retrieve any identity-specific information were excluded from the analysis. The experimenter then pressed a button to initiate the next trial. By contrast, following a familiar or unknown response the participant's own button press initiated the next trial.
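The experiment itself was implemented in E-Prime; purely as an illustration of the trial structure just described (500 msec face, blank screen until one of three responses, verbal identification prompt after identified responses), a minimal PsychoPy sketch is given below. The key mapping, image paths and prompt wording are hypothetical.

```python
# Illustrative sketch only: the original study used E-Prime, not PsychoPy.
from psychopy import visual, core, event

win = visual.Window(fullscr=True, color='black', units='deg')
prompt = visual.TextStim(win, text='Please say who this person is', color='white')

def run_trial(image_path):
    # Present the face for 500 msec at 2 x 5 degrees, as in the methods
    face = visual.ImageStim(win, image=image_path, size=(2, 5))
    face.draw()
    win.flip()
    core.wait(0.5)
    win.flip()  # blank screen until response
    # Hypothetical key mapping: 1 = identified, 2 = familiar, 3 = unknown
    keys = event.waitKeys(keyList=['1', '2', '3'])
    response = {'1': 'identified', '2': 'familiar', '3': 'unknown'}[keys[0]]
    if response == 'identified':
        # Verbal identification is checked by the experimenter, who then advances
        prompt.draw()
        win.flip()
        event.waitKeys(keyList=['space'])
    return response
```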
The EEG was recorded from 62 electrodes embedded in a flexible cap (Neuromedical Supplies: http://www.neuro.com). Electrode positions were based on the extended International 10-20 system (Jasper, 1958). All channels were referenced to an electrode placed between CZ and CPZ; two additional electrodes were placed on the mastoid processes. Muscle activity associated with blinks and eye movements was recorded from electrodes placed above and below the left eye and on the temples. Data were recorded and analyzed using Scan 4.3 software (http://www.neuro.com). Impedances were below 5 kΩ before recording began. Data were bandpass filtered between 0.1 and 40 Hz and sampled every 4 msec. The EEG was divided into 1100 msec epochs, including a 100 msec pre-stimulus interval. Epochs were time-locked to stimulus onset rather than to participants' responses because our interest was in access to memory representations rather than in decision-making processes or response execution. Differences in response times across conditions in recognition memory tasks may reflect decision processes rather than any delay in accessing mnemonic information (Dewhurst et al., 2006). Stimulus-locked ERPs therefore allow consideration of how stimulus processing differs across conditions, and can be interpreted in light of any variation in response times across experimental conditions. Ocular artifacts were removed using a correction procedure (Semlitsch et al., 1986), and voltages were baseline-corrected by subtracting the mean voltage in the pre-stimulus interval from each point in the epoch. Trials were excluded from the averages if drift exceeded ±50 µV (measured as the difference between the first and last data points in the epoch) or if activity on any EEG channel at any time during the epoch exceeded ±100 µV. The data were also re-referenced offline to an averaged mastoid reference. Waveforms were smoothed using a 5-point kernel. To improve the signal-to-noise ratio, a minimum of 16 artifact-free trials per condition was required before a participant's data were included in the grand average ERPs.
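To make the epoch-screening criteria concrete, the NumPy sketch below implements the baseline correction, the drift and absolute-amplitude rejection rules, and the 5-point smoothing described above, applied to an epochs array of shape (n_trials, n_channels, n_samples). The array layout, units and function name are assumptions for illustration; the original processing was carried out in Scan 4.3, not in Python.

```python
import numpy as np

def screen_and_baseline(epochs_uv, n_baseline=25, drift_limit=50.0, abs_limit=100.0):
    """Baseline-correct epochs and apply the rejection criteria described above.

    epochs_uv   : array (n_trials, n_channels, n_samples) in microvolts,
                  sampled every 4 msec, so a 100 msec baseline = 25 samples.
    drift_limit : reject if |first - last sample| exceeds 50 uV on any channel.
    abs_limit   : reject if any sample on any channel exceeds +/- 100 uV.
    """
    # Subtract the mean of the 100 msec pre-stimulus interval from every time point
    baseline = epochs_uv[:, :, :n_baseline].mean(axis=2, keepdims=True)
    corrected = epochs_uv - baseline

    # Drift criterion: difference between the first and last data point of each epoch
    drift = np.abs(corrected[:, :, 0] - corrected[:, :, -1])
    drift_ok = (drift <= drift_limit).all(axis=1)

    # Absolute amplitude criterion across every channel and time point
    amp_ok = (np.abs(corrected) <= abs_limit).all(axis=(1, 2))

    keep = drift_ok & amp_ok
    clean = corrected[keep]

    # 5-point moving-average smoothing along the time axis
    kernel = np.ones(5) / 5.0
    smoothed = np.apply_along_axis(lambda x: np.convolve(x, kernel, mode='same'), 2, clean)
    return smoothed, keep
```

Under these assumptions, a participant would contribute to the grand averages only if at least 16 epochs per condition survive this screening (i.e., keep.sum() >= 16 within each condition).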
Mean amplitudes were measured by averaging the waveforms within two consecutive latency windows: 300 to 500 msec and 500 to 800 msec. Data were initially analyzed using three-way ANOVAs with factors of condition (familiar/identified/unknown), location (anterior/parietal) and hemisphere (left/right), before planned comparisons were made between familiar/unknown and identified/familiar responses separately. The ANOVA design restricted the electrode factors to two levels to avoid potential violations of sphericity (see Dien and Santuzzi (2005)). The electrodes used in the analysis were F3, F4, P3 and P4. Only main effects and interactions involving the factor of condition are of theoretical interest and therefore only these statistics are reported. Main effects of condition were decomposed using Bonferroni-corrected pairwise comparisons.
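As an illustration of this statistical approach (not the original analysis code), the sketch below takes pre-computed mean amplitudes for the two latency windows at F3, F4, P3 and P4 and fits a repeated-measures ANOVA with factors of condition, location and hemisphere using statsmodels' AnovaRM. The file name and column layout of the input table are assumptions.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long-format table: one row per participant x condition x electrode x window,
# with a mean amplitude (in microvolts) already computed for that cell.
# Columns: subject, condition (familiar/identified/unknown), electrode, window, amplitude
erp = pd.read_csv('mean_amplitudes.csv')  # hypothetical file name

# Recode the four electrodes into the two-level location and hemisphere factors
erp['location'] = erp['electrode'].map({'F3': 'anterior', 'F4': 'anterior',
                                        'P3': 'parietal', 'P4': 'parietal'})
erp['hemisphere'] = erp['electrode'].map({'F3': 'left', 'P3': 'left',
                                          'F4': 'right', 'P4': 'right'})

# One condition x location x hemisphere ANOVA per latency window (300-500, 500-800 msec)
for window, data in erp.groupby('window'):
    model = AnovaRM(data, depvar='amplitude', subject='subject',
                    within=['condition', 'location', 'hemisphere'])
    print(window)
    print(model.fit())
```

Significant effects of condition would then be followed up with Bonferroni-corrected pairwise comparisons, as described above.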