Cognitive Psychology and Research Methods

Report – Sample

Section one

Cognitive Psychology: Models of Memory

Section two

Experimental methods in Cognitive Psychology

Introduction

One of the most significant contributions to our understanding of human memory has been the development of the multi-store model (MSM) by Atkinson and Shiffrin (1968), which signalled a move towards considering memory in terms of physical structures and clear processes. Based on scientific, experimental evidence, the multi-store model provides a description of memory consisting of a series of separate stores that process information in a linear fashion. However, despite being the first of its kind, the multi-store model has been criticised as being too simplistic and not reflective of the complexity of human memory.
The use of the laboratory experimental method in many of the supporting studies attracts praise and criticism in equal measure; on one hand the methodology benefits from being objective and scientific, but on the other, there are arguments against the usefulness of such experiments in studying memory in realistic and ecologically valid ways.
The following report will consider the effectiveness of the multi-store model of memory, using the working memory model and the levels of processing model to challenge some of the assumptions made by Atkinson and Shiffrin. It will also consider the validity of the experimental method, and discuss the implications of relying on laboratory experiments to test memory.

Section one

A critical discussion of the models of memory

Using the information processing approach, Atkinson and Shiffrin (1968) developed the first structural model of memory: the multi-store model. This consists of three separate, unitary stores, through which information is encoded and passed from one store to the next via attention and rehearsal. The model operates on a fixed-sequence basis, with all data first entering the sensory memory (SM), where, if attention is paid, it will pass into the short-term memory (STM). The short-term memory is a limited store which holds only a few items for up to thirty seconds before information decays, and which relies on rehearsal both to extend the duration and to pass information on to the larger long-term store. Long-term memory (LTM) offers more permanent storage and has no known limit to its capacity.
The multi-store model is therefore based on a number of claims: firstly, each store is separate from the others; secondly, each store is unitary; thirdly, the sequence is fixed and information must pass through each store before reaching the next; and finally, information must be rehearsed in order to be passed into long-term memory.
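As a purely illustrative sketch of the fixed sequence these claims describe (the store capacity, item lists and attended/rehearsed sets below are hypothetical values invented for illustration, not any published implementation), the flow could be written as:

    # Toy model of the MSM's fixed sequence: SM -> (attention) -> STM -> (rehearsal) -> LTM.
    # All names and values here are hypothetical illustrations of the claims above.
    STM_CAPACITY = 7  # STM holds only a few items

    def multi_store(items, attended, rehearsed):
        """Pass each item through the stores in a fixed, one-way sequence."""
        stm, ltm = [], []
        for item in items:  # every item first enters sensory memory
            if item not in attended:
                continue  # without attention, the item decays from SM
            stm.append(item)  # attended items pass into the limited STM
            if len(stm) > STM_CAPACITY:
                stm.pop(0)  # the oldest item is displaced when capacity is exceeded
            if item in rehearsed:
                ltm.append(item)  # only rehearsed items pass on to LTM
        return stm, ltm

    stm, ltm = multi_store(
        items=["cat", "dog", "sun", "map"],
        attended={"cat", "dog", "sun"},  # "map" receives no attention
        rehearsed={"cat"},               # only "cat" is rehearsed
    )
    # stm == ['cat', 'dog', 'sun']; ltm == ['cat']

Even in this toy form, the final claim is visible: nothing reaches LTM without rehearsal, which is precisely the assumption challenged by the levels of processing model discussed later in this report.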
Evidence to support the multi-store model largely comes from studies that have investigated the differences between STM and LTM. Glanzer and Cunitz (1966) demonstrated a clear distinction between the two stores in their serial position experiment, in which they found that in an immediate recall test, participants could best recall words from the beginning and end of a list, known as the primacy and recency effects. They explained this in terms of the first words having been rehearsed into LTM, which made them easy to remember in both the immediate and delayed recall conditions. In delayed recall, the duration of the STM was exceeded, leading to rapid decay of the information and the loss of the recency effect; this had no effect on the LTM, so the primacy effect was unaffected.
Further evidence for the separate stores comes from Peterson and Peterson’s (1959) study of the duration of STM. When rehearsal of the consonant trigrams was prevented, participants showed rapid forgetting of information; after 3 seconds participants were able to recall around 90% of trigrams, but less than 10% after a delay of 18 seconds, leading to the conclusion that the duration of STM is limited to approximately 18 seconds. This contrasts sharply with Bahrick et al.’s (1975) study of the duration of LTM, which found that even after 48 years, participants were around 70% accurate when matching names to faces of people from their graduating year. This demonstrates that, provided appropriate cues are available, memories can potentially last a lifetime, thus the duration of LTM is potentially unlimited. If the duration of each store differs from the others, then this must suggest that they are in fact separate stores.
Studies using brain scanning techniques have also provided evidence of separate stores; Beardsley (1997) found that the prefrontal cortex is active during STM tasks, whereas Squire et al. (1992) found that the hippocampus is active when LTM is engaged.
The linear sequence of the model suggests that information passes in only one direction, and therefore that STM is involved before LTM. However, Logie (1999) has argued that STM relies on LTM, and Ruchkin et al. demonstrated that there was greater brain activity when participants were asked to process real words compared to pseudo-words, because the STM was accessing the meaning of the real words in LTM. Therefore, if STM relies on LTM, the process must be more interactive than linear, and LTM must be involved much earlier than suggested.
Further criticism comes from Baddeley and Hitch’s (1974) working memory model (WMM); whilst they agree with the claim that STM and LTM are separate stores, they disagree with the MSM’s claim that they are unitary. Unlike the MSM, the working memory model focuses only on STM, describing it as an active, multi-component store which better explains how we can do two things at once, something not addressed by the MSM.
The WMM contains four separate components, each of which has a specific function. The central executive acts as the controller for the model, choosing which stimuli to pay attention to and allocating tasks to each of the slave systems. Whilst it has very limited capacity and no storage facility, it is modality free and primarily involved in decision-making, reasoning and planning tasks, and monitoring the progress of the other systems.
The phonological loop is a modality-specific slave system that deals with sound and speech-based information. It is responsible for processing what we hear and what we say, preserving the order of the words in a loop to allow for sub-vocal repetition. Its capacity is limited: according to Baddeley et al. (1975), it can hold only the amount of information that can be said in two seconds. The visuo-spatial sketchpad is a second limited-capacity, modality-specific slave system that deals specifically with visual and spatial information such as pattern recognition, form, colour and the spatial arrangement of objects in the visual field.
The final component, the episodic buffer, was added later by Baddeley (2000) to help explain the temporary storage of both visual and acoustic information. It is a general storage facility within the model, which integrates information from the central executive and the slave systems whilst maintaining time sequencing during the completion of tasks, before sending information back to the central executive and on to the LTM.
Bunge et al. (2000) provided evidence for the existence of the central executive by using fMRI scanning to monitor the level of brain activity during tasks. They found that the same areas of the brain were active in both single- and dual-task conditions, but there was a greater level of activity in the dual-task condition, which reflected the increased attentional demands on the central executive. Braver et al. (1997) gave participants tasks that involved the central executive while they had brain scans, and found greater activity in the prefrontal cortex which increased as the task became harder. This implied that the central executive had to work harder to fulfil its function during the more difficult task, which accounted for the increased activity.
Baddeley et al. (1975) found that participants had more difficulty when doing two visual tasks simultaneously, such as tracking a light whilst describing the letter F, than when doing a visual and verbal task at the same time. They explained that when a slave system is required to do two tasks simultaneously, both tasks will compete for the slave system’s limited resources, whereas when two tasks require the use of both slave systems, there is no interference.
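The logic of this interference prediction can be made concrete with a minimal sketch; the function and labels below are hypothetical, chosen only to mirror the explanation above:

    def tasks_interfere(system_a, system_b):
        """Two simultaneous tasks compete only when they occupy the same slave system."""
        return system_a == system_b

    # Two visual tasks (tracking a light whilst describing the letter F)
    # both demand the visuo-spatial sketchpad, so they interfere:
    print(tasks_interfere("visuo-spatial sketchpad", "visuo-spatial sketchpad"))  # True
    # A visual task paired with a verbal task uses different slave systems:
    print(tasks_interfere("visuo-spatial sketchpad", "phonological loop"))  # False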
The WMM has been given much support from clinical case studies. For example, Shallice and Warrington (1970) reported on the case of KF: following brain injury, KF was left with a poor STM for verbal information, but no difficulty with visual information. This suggests that his phonological loop had been damaged, whilst other parts of memory had been left intact. Similarly, Farah et al. (1988) provided evidence of a further sub-division within the visuo-spatial sketchpad in the case of LH, who was able to perform better on spatial tasks than visual tasks following a road accident.
Undoubtedly there are issues with using case studies, particularly those involving brain damage; however, these do provide compelling evidence that the STM has multiple stores. Therefore, Baddeley and Hitch’s criticism of the MSM is supported, as the STM cannot be a unitary store. Interestingly, the case study of KF supports both the MSM and the WMM with respect to the claim that STM and LTM are separate stores, as KF had no issues with LTM.
The WMM is generally considered to be an advance on the MSM, providing a better description of the verbal rehearsal process as an option rather than a requirement for memory, whilst also demonstrating the active nature of STM. It explains how different types of information are processed separately within the STM, which clarifies why some people have problems with verbal information but not visual, and vice versa. It has also been particularly helpful in the development of diagnostic and screening tools that can be used to help identify and support children with specific learning needs, eg, problems with reading and maths (Gathercole et al., 2003).
What neither of these models adequately addresses, however, is the type of processing that is involved in memory. The levels of processing model (LOP) developed by Craik and Lockhart (1972) moves away from the idea of rigid structures and offers an alternative view of memory. Unlike the previous approaches, the LOP model is more concerned with how the original information is processed, thereby describing memory as a by-product of the level of processing. Craik and Lockhart take particular issue with the multi-store model’s emphasis and reliance on simple repetition as the key to long-term memory, and they suggest that deeper, more meaningful processing leads to long-term retention.
Craik and Lockhart (1972) describe three levels of processing, each of which will determine the strength of the memory trace: visual (structural) processing is the shallowest level and leads to the weakest memory trace; acoustic (sound-based) processing is slightly deeper, but is still considered shallow. The deepest level of processing requires making information more meaningful; deep processing is therefore semantic in nature.
Craik and Tulving (1975) tested the LOP theory by presenting participants with a series of words and asking them to answer questions about each word that would require either shallow (visual/acoustic) or deep (semantic) processing. Some participants were told that they would be tested on their recall of the words (intentional learners), whilst others were not (incidental learners). In the follow-up recognition test, it was found that participants recognised significantly more semantically processed words than those processed at a visual or acoustic level, regardless of whether they were intentional or incidental learners. This suggests that processing for meaning improves memory.
Hyde and Jenkins (1973) used a similar procedure in their study; using both intentional and incidental learners, participants were presented auditorily with a list of words, followed by an orienting task that required either deep or shallow processing. Whilst they found no significant differences in performance between the intentional and incidental learners, they did find that the questions that required semantic processing produced the highest rate of recall, which supports the levels of processing theory. They concluded, then, that it is not the intention to learn that is the critical factor in remembering information, but rather the nature of the processing that determines memorability.
Palmere et al. (1983) successfully demonstrated that elaboration is an important feature in processing information. In their study they gave participants a fictional story which contained thirty-two paragraphs, divided into groups of eight; each group contained a sentence with a key theme, and then varying amounts of supplementary detail. They found that recall varied as a function of the amount of elaboration, with the highest level of recall from the most elaborated paragraphs. Therefore, whilst both the LOP and the MSM suggest that rehearsal is important, Palmere et al.’s study has shown that elaborative rehearsal is more effective than simple rote repetition, which supports the view of the LOP.
The levels of processing model has been influential, but has also been criticised on the grounds of circularity; with no definition of depth being given, it has been argued that the model fails to fully explain why deeper processing should lead to better recall. Tyler et al. (1979) suggested that the model has confused time and effort with depth, whilst Morris et al. (1977) argued that the optimum level of processing depends on the nature of the task being done.

Section two

Experimental methods in Cognitive Psychology

Most of the research into memory has been conducted using the laboratory experimental method. The key features of this method are:
· The research is conducted in a controlled environment, such as a laboratory;
· The independent variable (IV) is controlled and manipulated by the experimenter, thus allowing the experimenter to measure its effect on the dependent variable (DV);
· Confounding and extraneous variables are held constant, and can sometimes be eliminated entirely;
· The experiment uses a standardised procedure;
· Where necessary, specialist equipment can be used, or the environment can be manipulated and engineered to the needs of the researcher.

Main strengths and weaknesses of the laboratory experimental method:

Strengths

· It is the only method that allows cause and effect to be inferred, due to the level of control;
· The controlled procedure ensures high internal validity;
· The standardised procedure is easy to replicate; therefore, the reliability of results can be tested and retested;
· Has a high level of objectivity;
· Generates quantitative data, which is easy to analyse and doesn’t require subjective interpretation;
· Can be used to study rare events.

Weaknesses

· The environment is not a natural one, which can impact upon the behaviour of the participant(s), therefore the method lacks ecological validity;
· Participants typically know that they are being studied, which can increase demand characteristics;
· Often uses tasks that lack mundane realism, eg, learning lists of unconnected words is not something we do every day.

The laboratory experiment is the only method that actually allows the researcher to infer cause and effect; however, this is largely contingent on the ability to generalise the results beyond the sample tested. Therefore, the sampling method used will determine whether the results can be generalised, or whether they apply only to the sample that has actually been tested.
In research, the target population consists of the people that the researcher wants to test; however, this group can be very large and difficult to access. Therefore, it is often the case that the researcher will select a sample from the target population; ideally this should be a representative sample, but that will depend on the sampling method used.
In a truly random sample, all members of the target population have an equal chance of being included in the study, but this can still result in a biased sample, as important sub-groups may not be reflected in the sample actually tested. One random method that identifies the sub-groups is stratified sampling.

Stratified sampling process – random sampling method

· The researcher will identify the different strata that make up the target population;
· The proportions of each stratum in the sample must reflect how they appear in the target population;
· The participants for each stratum are then selected using random sampling.
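As a rough sketch of this process (the strata names, population figures and sample size below are hypothetical, invented purely for illustration), proportional allocation followed by random selection within each stratum could look like this:

    import random

    # Hypothetical target population of 1,000 people, divided into three strata;
    # the group names and sizes are invented for illustration only.
    population = {
        "under_30": list(range(0, 600)),    # 60% of the target population
        "30_to_60": list(range(600, 900)),  # 30%
        "over_60": list(range(900, 1000)),  # 10%
    }

    def stratified_sample(strata, n):
        """Draw a sample of size n whose strata proportions mirror the population."""
        total = sum(len(members) for members in strata.values())
        sample = []
        for members in strata.values():
            k = round(n * len(members) / total)       # proportional allocation
            sample.extend(random.sample(members, k))  # random selection within the stratum
        return sample

    # A sample of 50 drawn this way contains 30, 15 and 5 participants
    # from the three strata, mirroring the 60/30/10 split above.
    sample = stratified_sample(population, n=50)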

Strengths and weaknesses of stratified sampling

Strengths

· No researcher bias: once the target population has been sub-divided into strata, the participants are randomly selected, and thus not influenced by the researcher;
· The sample is representative of the target population, as it accurately reflects the composition of the population;
· Findings can be generalised to the whole of the target population.

Weaknesses

· The identified strata cannot reflect all the ways that people are different;
· The process can take a long time; for example, the researcher will need to conduct research to identify all the various strata that exist in the target population, which can be expensive and time-consuming;
· If participants refuse to take part, then the sample could end up being more of a volunteer sample, which cannot be easily generalised.

Volunteer sampling process – non-random sampling method

· Participants select themselves to be part of the sample; thus, this is a self-selected sample;
· The researcher will advertise their study, for example, they might post an advert on a notice board, in the newspaper or on social media, asking for people to volunteer;
· Participants will then contact the researcher to apply to take part in the study.

Strengths and weaknesses of volunteer sampling

Strengths

· Little effort is required, as the researcher only needs to post the advert, and participants will respond if they wish to take part;
· Less expensive: there is little cost to place/post an advert;
· Can reach a large number of potential participants very quickly, eg, via social media;
· Less time-consuming than other methods, eg, there is no need to identify sub-groups/strata.

Weaknesses

· Volunteer bias: the study may only attract people who are particularly keen, helpful or curious; this increases the risk of demand characteristics, as they may try harder than other participants;
· It might attract a disproportionate number of some types of people, eg, those who are retired, students or the unemployed, as they may have more free time;
· Only people who see the advert can apply; therefore, participants may come from a single location or might be known to the researcher;
· Researcher bias: the researcher might only select those who are most desirable for their study;
· The biased sample means that results are not generalisable beyond the sample tested.

Application of the experimental method to memory research

The laboratory experimental method has been widely used to investigate and support the models of memory. For example, Baddeley’s (1966) study of encoding in STM and LTM, and Hyde and Jenkins’ (1973) study of the effects of depth of processing on word recall, both used this method. In both cases they:
· Required participants to recall word lists;
· Used controlled conditions to test memory;
· Used experimental designs and standardised procedures;
· Have been criticised on the grounds of lacking ecological validity.
However, it is also true that both studies were used to support their respective models (the MSM and the LOP), as they could both infer causality through manipulating the IV and measuring its effect on the DV. This allowed Baddeley to determine that STM relies mainly on acoustic encoding, whilst LTM primarily uses semantic encoding, thus supporting the distinction between the two stores and the claims of the MSM.
Peterson and Peterson’s (1959) study of the duration of STM also used the laboratory experimental method. Using a volunteer sample of 24 students from their university, they tested the effect of the retention interval (IV) on the recall of trigram syllables (DV). They used a standardised procedure which ensured that:
· The trigrams were of equal length (3 letters) and consisted only of consonants (to prevent the creation of words);
· Rehearsal was prevented by a distractor task, which was the same for all participants (counting backwards from a specified 3-digit number);
· The retention interval was consistent for participants across all levels of the IV, ie, 3, 6, 9, 12 or 18 seconds.
Other potential confounding variables:
· Sample bias: this was a volunteer sample of students who may have known the researchers and the topic, which would increase the likelihood of demand characteristics, as these participants are more likely to guess the aim. This could be addressed by using a random sampling method, and by ensuring that participants had not studied the topic previously;
· Situational extraneous variables: these could be minimised by controlling the environment (eg, a quiet room, free from distractions).
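As an illustration of how such standardised elements could be expressed, the trial structure can be written out as a short sketch; the function names and details below are hypothetical, not the Petersons’ actual materials:

    import random

    CONSONANTS = "BCDFGHJKLMNPQRSTVWXZ"  # vowels excluded so no real words can form
    INTERVALS = [3, 6, 9, 12, 18]        # the retention intervals (seconds) listed above

    def make_trigram():
        """Generate a three-consonant trigram, eg 'XQD'."""
        return "".join(random.sample(CONSONANTS, 3))

    def trial_spec(interval):
        """One standardised trial: trigram, distractor start number, retention interval."""
        return {
            "trigram": make_trigram(),
            "count_back_from": random.randint(100, 999),  # 3-digit distractor number
            "retention_interval": interval,  # seconds of counting before recall is tested
        }

    trials = [trial_spec(i) for i in INTERVALS]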

Conclusion

Ultimately, despite the differences between the models of memory, each has evidence to support at least some of its claims. The limitations associated with the MSM led other researchers to develop their own models, resulting in a better understanding of memory as a whole. The use of the experimental method has enabled researchers to check the reliability of findings by testing and retesting experiments, which provides assurance that findings are accurate and, where they are not, allows further investigation. Researchers are now finding more realistic and ecologically valid ways of testing memory that more accurately reflect how memory is used in everyday life. Overall, the results of these studies have led to improvements in teaching and learning, to the identification and support of learning needs, and to the creation of useful strategies to help enhance memory.
