COMPARISON OF A STIMULUS EQUIVALENCE PROTOCOL AND
TRADITIONAL LECTURE FOR TEACHING SINGLE-SUBJECT DESIGNS

SADIE LOVETT, RUTH ANNE REHFELDT, YORS GARCIA, AND JOHNNA DUNNING

SOUTHERN ILLINOIS UNIVERSITY

JOURNAL OF APPLIED BEHAVIOR ANALYSIS 2011, 44, 819–833 NUMBER 4 (WINTER 2011)
doi: 10.1901/jaba.2011.44-819

Address correspondence to Ruth Anne Rehfeldt, Behavior Analysis and Therapy Program, Rehabilitation Institute, Southern Illinois University, Carbondale, Illinois 62901 (e-mail: [email protected]).

This study compared the effects of a computer-based stimulus equivalence protocol to a
traditional lecture format in teaching single-subject experimental design concepts to
undergraduate students. Participants were assigned to either an equivalence or a lecture group,
and performance on a paper-and-pencil test that targeted relations among the names of
experimental designs, design definitions, design graphs, and clinical vignettes was compared.
Generalization of responding to novel graphs and novel clinical vignettes, as well as the
emergence of a topography-based tact response after selection-based training, were evaluated for
the equivalence group. Performance on the paper-and-pencil test following teaching was
comparable for participants in the equivalence and lecture groups. All participants in the
equivalence group showed generalization to novel graphs, and 6 participants showed
generalization to novel clinical vignettes. Three of the 4 participants demonstrated the
emergence of a topography-based tact response following training on the stimulus equivalence
protocol.

Key words: stimulus equivalence, college students, tact, verbal behavior, topography-based
responding

_______________________________________________________________________________

Behaviorally based instructional protocols
grounded in the principles of stimulus equiva-
lence can provide instructors with an alternative
method of communicating their subject matter
of interest to students. The hallmark of stimulus
equivalence protocols is that direct training on
certain relations among instructional stimuli
will result in the emergence of untrained
relations among those stimuli (Sidman, 1994).
Early research that examined this phenomenon
focused on teaching reading comprehension
and oral reading skills to individuals with
intellectual disabilities (Sidman, 1971). For
example, Sidman and Cresson (1973) used a
stimulus equivalence protocol to teach an
individual to read three-letter words, such as
“cow.” The learner was first trained to relate the
spoken word “cow” to a picture of a cow and to
relate the spoken word “cow” to the printed
word cow. After training, the learner was then
able to orally name the picture of the cow and
the printed word cow, and he was able to relate
the picture of the cow to the printed word cow
without a direct history of reinforcement for
relating those stimuli. According to the stimulus
equivalence paradigm, the oral naming of the
picture and printed word, which involves a
reversal of the trained relation, is referred to as
symmetry. The relation between the printed word
and the picture, which had never been previously
paired, is referred to as transitivity.

Instruction using the principles of stimulus
equivalence has been applied successfully in
a variety of situations. Cowley, Green, and
Braunling-McMorrow (1992) taught adults with
acquired brain injuries to relate the dictated
names of their therapists to photographs of the
therapists and to relate the dictated therapist
names to the written therapist names. Partici-
pants then related the photographs to the written
therapist names and orally named the therapists
when shown the photographs without further
training. An investigation by Lynch and Cuvo
(1995) used a stimulus equivalence protocol to
teach fifth- and sixth-grade students relations
between pictorial representations of fractions and
numerical fraction ratios and relations between
printed decimals and pictorial representations of
fractions. Following training, students related the
numerical fraction ratios to the appropriate printed
decimals. A recent study by Toussaint and Tiger
(2010) examined the use of a stimulus equivalence
protocol to teach braille literacy to children with
degenerative visual impairments. Participants who
were able to relate a spoken letter name to the
printed letter were trained to relate braille letters
and the corresponding printed letters. Participants
then were able to select the appropriate braille
letter when provided with the spoken letter name
and orally name the braille letters.

These examples of the application of stimulus
equivalence in teaching various skills highlight its
utility as an instructional method. Studies of
stimulus equivalence frequently involve training
and testing with a conditional discrimination
procedure that relies on selection-based respond-
ing, or pointing to one stimulus that is presented
in an array (Michael, 1985). The emergence of
topography-based responding, or responding in a
different topography than that trained, also has
been documented (Michael, 1985). Following
selection-based training, Cowley et al. (1992)
demonstrated the emergence of a tact response,
which is a verbal response under the control of a
nonverbal stimulus (Skinner, 1957), by showing
that participants were able to name the photo-
graphs of the therapists. Furthermore, Toussaint
and Tiger (2010) documented a topography-
based tact response by demonstrating the emer-
gence of oral naming of the braille letters. The
topography-based responding shown in these
studies may reflect the acquisition of more
meaningful verbal repertoires than selection-based
responding produces (Sundberg & Sundberg,
1990), because skills commonly targeted in
education, such as speaking and writing, require
topography-based rather than selection-based
responding. A successful educational curriculum
will result in proficiency in both spoken and
written topography-based response repertoires.

In recent years, stimulus equivalence proto-
cols have been extended to the instruction of
sophisticated learners (e.g., Fienup & Critchfield,
2010). Ninness et al. (2005, 2006) used
computer-based stimulus equivalence protocols
to teach algebraic and trigonometric mathemat-
ical functions. Participants were taught to relate
standard mathematical formulas to factored
formulas and to relate factored formulas to
graphical representations of those formulas. After
training, participants related the standard for-
mulas and the graphs without direct training,
and generalization to novel formulas and graphs
was shown. Fields et al. (2009) taught statistical
interaction concepts by training college students
to relate line graphs and textual descriptions of
interactions, textual descriptions of interactions
and labels of the interactions, and interaction
labels and definitions of each type of interaction.
The emergence of untrained relations among the
stimuli was shown, as was generalization of
responding to novel variations of the trained
stimuli. In addition, a study by Fienup, Covey,
and Critchfield (2010) used stimulus equivalence
to teach relations among regions of the brain,
anatomical locations of brain regions, and
psychological function of brain regions to college
undergraduates. Finally, Walker, Rehfeldt, and
Ninness (2010) taught college students to relate
the names, definitions, causes, and treatments for
disabilities. In addition to demonstrating the
emergence of untrained relations, this study
showed the emergence of written and vocal
topography-based responding following selec-
tion-based training.

The aforementioned research indicates that
instruction with stimulus equivalence protocols
is successful in teaching relations among
stimuli, including complex stimuli that are
important in training advanced learners. How-
ever, there has been little effort to compare the
efficacy of stimulus equivalence protocols to
standard educational practices or to assess the
social validity of the instructional method. One
exception is Fields et al. (2009), who evaluated
generalization to a paper-and-pencil posttest
following computer-based training. The use of
worksheets and paper tests, as opposed to
computer-based procedures, more closely ap-
proximates the materials and procedures present
in the average classroom. Fields et al. also
included a questionnaire to assess the social
validity of the instructional method (see also
Fienup & Critchfield, 2011).

Further investigations of the efficacy and
acceptability of stimulus equivalence protocols
in instruction relative to standard educational
practices would be beneficial in determining the
utility of the method. The objectives of the
present study were thus as follows: First, we
compared the effectiveness of a stimulus
equivalence protocol to that of a lecture in
teaching undergraduate students concepts of
single-subject experimental design. The proto-
col established relations among the names of
designs, their definitions, representative graphs,
and clinical vignettes in which the use of a
design might be appropriate. Specifically, we
compared performances for the two groups of
participants on a paper-and-pencil test. Second,
we evaluated the efficacy of the stimulus
equivalence protocol by assessing generalization
of the relations to novel graphs and clinical
vignettes and the emergence of a topography-
based repertoire. Finally, we investigated the
social validity of the stimulus equivalence
protocol and the lecture by administering a
satisfaction questionnaire to participants in
both conditions at the end of the experiment.

METHOD

Participants, Setting, and Apparatus

Twenty-four undergraduate students who
were currently enrolled in a research methods
course participated in this experiment. All
participants received extra credit as compensa-
tion for participation. Sessions were conducted
in an office (3 m by 5 m) that contained two
desks, each with a chair and personal computer.
The participant was seated at one desk, and the
experimenter was seated at the second desk,
throughout the session. All procedures were
conducted on the computer, with the exception
of the paper-and-pencil quiz and social validity
survey. Sessions ranged in length from 75 to
140 min.

Equivalence Stimuli

Each of four stimulus classes contained four
stimuli that were presented during training
and two stimuli that were used to test for
generalization. Stimuli were developed based on
the definitions, graphical illustrations, and
clinical examples of single-subject designs in
an undergraduate textbook (Kennedy, 2005).
The A stimuli were the names of each of the
four basic single-subject designs. The B stimuli
were definitions of the four corresponding
designs. The C stimuli were graphs depicting
the implementation of each of the four single-
subject designs. Each graph had unique y-axis
labels (e.g., percentage aggressive behavior) and
unique intervention phase labels (e.g., rein-
forcement). The D stimuli were vignettes that
described clinical situations in which the four
corresponding designs would be appropriate for
use. All vignettes were designed such that the
length of each description was similar. The
vignettes described clinical situations that were
distinct from the behaviors and interventions
illustrated on the graphical (C) stimuli. The B
and D stimuli are presented in Figure 1 (see
also Walker & Rehfeldt, in press).

Figure 1. B and D stimuli for each of the four stimulus classes.

There were four C′ generalization stimuli,
including one novel graphical representation of
each of the single-subject designs. The novel
graphs showed changes in behavior in the
opposite direction from that of the stimulus
used in training. For example, the C1 stimulus
depicted a graph for a withdrawal design that
showed an increase in the percentage of correct
responding, and the C′1 stimulus was a novel
graph of a withdrawal design that showed a
decrease in aggressive behavior. Novel graphs
also included different y-axis and phase labels.
The four D′ generalization stimuli were novel
clinical vignettes that described situations in
which each of the four designs would be
appropriate for use. Length of the descriptions
in the vignettes was similar to that of the
vignettes used in training. Like the C′ gener-
alization stimuli, the vignettes described a
change in behavior opposite in direction to the
behavior change depicted in the stimulus used in
training. For example, the D1 stimulus de-
scribed a situation in which a teacher wished to
decrease talking out in class, and the D′1
stimulus described a situation in which self-
monitoring was used to increase the number of
math problems completed.
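
For readers who find it easier to see the stimulus sets as a data structure, the class membership described above can be summarized in a few lines. The sketch below is illustrative only; the labels and placeholder descriptions are hypothetical stand-ins for the actual study materials, which were drawn from Kennedy (2005).

```python
# Hypothetical summary of one of the four stimulus classes described above.
# Each class contains a design name (A), definition (B), graph (C), and
# clinical vignette (D) used in training, plus a novel graph (C') and a novel
# vignette (D') used only to test generalization; the novel stimuli depict
# behavior change in the opposite direction from the trained stimuli.
stimulus_class_1 = {
    "A": "name of design 1",
    "B": "definition of design 1",
    "C": "trained graph (behavior change in one direction)",
    "D": "trained clinical vignette",
    "C_prime": "novel graph (behavior change in the opposite direction)",
    "D_prime": "novel vignette (opposite direction of behavior change)",
}
# Classes 2 through 4 follow the same structure, one per single-subject design.
```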

General Procedure

This experiment used a pretest–train–posttest
sequence and included two groups of partici-
pants. Participants in the equivalence group
were exposed to a computer-based stimulus
equivalence protocol, and participants in the
lecture group viewed a video of a lecture that
provided an overview of the four basic single-
subject designs. Participants were assigned
randomly to either the equivalence or lecture
group by the flip of a coin. Nine participants
were included in the lecture group, and 15
participants were included in the equivalence
group. The number of participants in the two
groups was unequal because several participants
assigned to the lecture group cancelled sessions.
Additional participants could not be recruited
for this group because course material had
progressed to the point of covering the single-
subject design topics targeted in this experiment
by that time in the semester. The main depen-
dent variable was performance on the paper-
and-pencil quiz, and all participants completed
a questionnaire to evaluate the social validity of
the instructional methods.

Paper-and-pencil pretest and posttest evaluation.
The paper-and-pencil quiz consisted of 15
multiple-choice questions on single-subject de-
signs. Each question included four response
options (a, b, c, and d). Questions included
adaptations of the stimuli presented in the
stimulus equivalence protocol and lecture. Four
questions tested variations of the definition-to-
design-name (B-A) relations by requiring partic-
ipants to select the correct design name in the
presence of a modified version of the design
definition. Four questions required participants to
select the appropriate design name in the presence
of a novel graph (C-A relation). Four questions
required participants to select the design name in
the presence of a novel clinical vignette (D-A
relation). Three questions required participants to
select the appropriate definitional design feature
in the presence of the design name (A-B relation).
A sample quiz item that required the participant
to select the appropriate design name in the
presence of a novel graph is depicted in Figure 2.
The quiz was administered at the start of the
experiment as a pretest, and an identical quiz was
administered as a posttest after completion of the
stimulus equivalence protocol or presentation of
the lecture for the equivalence and lecture groups,
respectively. Content of quiz questions was
validated by a professor of behavior analysis with
expertise in single-subject design methodology.

Figure 2. Sample question testing a C-A relation from the paper-and-pencil quiz.
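
As a quick check on the item composition just described, the 15 quiz questions break down as follows. This is only a summary sketch; the relation labels follow the A/B/C/D notation used for the equivalence stimuli.

```python
# Item counts for the paper-and-pencil quiz described above.
quiz_items = {"B-A": 4, "C-A": 4, "D-A": 4, "A-B": 3}
assert sum(quiz_items.values()) == 15  # 15 multiple-choice questions in total
```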

Interobserver agreement. Interobserver agree-
ment on quiz scores was collected by an
independent reviewer. Agreement was calculat-
ed for 33% of pretests and 33% of posttests
for both the equivalence and lecture groups.
Interobserver agreement scores were calculated
by dividing the number of item-by-item
agreements by the total number of agreements
plus disagreements and multiplying that value
by 100%. Item-by-item agreement for both
pretests and posttests for the equivalence group
was 100%. Mean agreement for pretests for the
lecture group was 98% (range, 93% to 100%).
Item-by-item agreement for posttests for the
lecture group was 100%. Interobserver agree-
ment was not conducted on responding during
the stimulus equivalence protocol because all
procedures and data collection were automated.
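
The agreement calculation described above is straightforward to express in code. The following is a minimal sketch, not the authors' scoring software: it computes item-by-item percentage agreement exactly as defined in the preceding sentence.

```python
def percentage_agreement(observer_1, observer_2):
    """Item-by-item agreement: agreements / (agreements + disagreements) x 100%."""
    if len(observer_1) != len(observer_2):
        raise ValueError("Both observers must score the same number of items.")
    agreements = sum(a == b for a, b in zip(observer_1, observer_2))
    return 100.0 * agreements / len(observer_1)

# Example: two observers score a 15-item quiz and disagree on one item,
# giving 93.3% agreement (comparable to the 93% low end reported above).
print(percentage_agreement(list("acbdabcdabcdabc"), list("acbdabcdabcdabd")))
```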

Social validity survey. A questionnaire that
assessed participants’ opinions of the respective
instructional method was administered at the
end of the experiment. The survey included four
questions that participants rated on a 7-point
Likert-type scale, with higher ratings indicating a
more positive evaluation of the instructional
method. Questions inquired as to the partici-
pant’s confidence in his or her knowledge of
single-subject designs, the degree to which he or
she would prefer to be taught using the particular
instructional method, and the participant’s
opinions on the time commitment for instruc-
tion. The survey is shown in Table 1.

Table 1
Social Validity Survey

1. How confident do you feel in your knowledge of single-subject designs?
   Rated 1–7: Not at all confident / Somewhat confident / Very confident
2. Rate the degree to which you would prefer to be taught using this instructional method.
   Rated 1–7: Don’t prefer at all / Somewhat prefer / Strongly prefer
3. How appropriate was the time commitment for this instructional method in relation to the amount you feel you have learned?
   Rated 1–7: Not at all appropriate / Somewhat appropriate / Very appropriate
4. How do you feel about the length of this instructional method?
   Rated 1–7: Not at all appropriate / Somewhat appropriate / Very appropriate

Lecture Group

Following the paper-and-pencil pretest, par-
ticipants were seated at desks and were told that
they were going to view a video on single-subject
designs. Up to two participants were present
to view the video during a session; however,
participants were seated at separate desks while
completing the paper-and-pencil test. The video
was presented on the computer and showed a
56-min lecture with an accompanying Power-
Point presentation that provided an overview of
the four basic single-subject designs. Video
content included an introduction to single-subject
design terminology (e.g., independent variables,
dependent variables, and functional relations) and
an overview of the basic components of a
graphical display (e.g., axes, phases, and data
paths). The majority of the lecture was divided
into four sections that corresponded to withdraw-
al, multiple baseline, alternating treatments, and
changing criterion designs. Each section provided
a definition of the design, showed a basic graphical
display of the design, and presented an example of
the application of the design in an applied setting
with an accompanying graph. Lecture content was
derived from the presentation of single-subject
design material in two undergraduate textbooks
(Kennedy, 2005; Richards, Taylor, Ramasamy, &
Richards, 1999). After the video, the paper-and-
pencil posttest was administered, followed by the
social validity survey.

The paper-and-pencil pretest and posttest as
well as the social validity survey were identical
to those that were used with the equivalence
group. These two measures were the only points
of comparison between the two groups.

Equivalence Group

Training for this group consisted of a computer-
based stimulus equivalence protocol programmed
using Microsoft Visual Basic 2008 Express
Edition. Training and test trials were presented
in a match-to-sample format with one sample
stimulus at the top of the screen and four
comparison stimuli at the bottom of the screen
on each trial. For example, on a trial that
examined the A1-B1 relation, the A1 stimulus
was presented at the top of the screen, and all four
B stimuli appeared at the bottom of the screen.
Trials in all training and testing phases were
presented in random order, and the order in
which the comparison stimuli were presented on
the screen was randomized. During training,
correct responses were followed by written and
auditory feedback in the form of the word
“correct” and the chime sound from the
Windows operating system. Incorrect responses
were followed by written and auditory feedback in
the form of the word “incorrect” and the chord
sound from the Windows operating system. No
feedback was provided during testing. After com-
pletion of the paper-and-pencil pretest, partici-
pants were seated at the computer and read the
following instructions on the screen:

Thank you for participating in this experiment. Your
job during this experiment is to do the best that you
can at all times. One box will be presented at the top
of the screen, and four boxes will be presented below
it. Your job will be to choose one of the boxes at the
bottom of the screen. During certain portions of the
experiment you will receive feedback as to whether
your choice was correct or not, but during other
portions of the experiment you will not receive
feedback. Please do the best you can at all times
regardless of whether or not you receive feedback.
Click on the button below to start.
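
The trial format described before these instructions (one sample at the top, four comparisons below in randomized positions, feedback only during training) can be sketched as follows. This is a hypothetical Python reconstruction, not the authors' Visual Basic 2008 program, and the learner's selection is simulated so that the sketch runs on its own.

```python
import random

def run_mts_trial(sample, comparisons, correct_comparison, training=True):
    """One match-to-sample trial: one sample at the top, four comparisons below
    in randomized positions; written feedback follows the selection during
    training only. The chime/chord sounds used in the study are omitted, and
    the selection is simulated rather than collected from a participant."""
    order = random.sample(comparisons, len(comparisons))  # randomize positions
    print("Sample:", sample)
    for i, option in enumerate(order):
        print(f"  [{i}] {option}")
    selection = random.choice(order)          # stand-in for the learner's click
    is_correct = selection == correct_comparison
    if training:
        print("correct" if is_correct else "incorrect")
    return is_correct

# Example trial probing a hypothetical A1-B1 relation.
run_mts_trial("A1 (design name)", ["B1 (definition)", "B2", "B3", "B4"],
              correct_comparison="B1 (definition)")
```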

Pretest for equivalence and generalization relations.
This test consisted of 48 trials that evaluated
equivalence (definition-to-vignette [B-D] and
vignette-to-definition [D-B]) and generalization
(design-name-to-novel-graph [A-C′] and design-
name-to-novel-vignette [A-D′]) relations. There
were 12 trials for each type of relation (e.g., B-D),
and each individual relation (e.g., B1-D1) was
presented three times. No feedback was provided
following responses on this test.

Training and symmetry tests. Participants first
were trained on the design-name-to-definition
(A-B) relations. The design names appeared as
the sample stimuli, and the design definitions
appeared as comparison stimuli. A training
block consisted of 12 trials, with each individual
relation (e.g., A1-B1) presented three times.
Responses were followed by written and audi-
tory feedback. Participants repeated the training
phase until they met the criterion of 11 of 12
correct (91.7%). After meeting the criterion for
design-name-to-definition (A-B) training, par-
ticipants advanced to the definition-to-design-
name (B-A) symmetry test, which included 12
trials. No feedback was provided after responses
during this test, and the criterion was 11 of 12
correct. If the criterion was not met on the
definition-to-design-name (B-A) symmetry test,
the participant returned to design-name-to-
definition (A-B) training. When the criterion
in training again was reached, the participant
proceeded to the definition-to-design-name (B-
A) symmetry test, and this process was repeated
until he or she achieved the criterion on the test.
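
The alternation between training and symmetry testing described in this paragraph amounts to a mastery loop: train in 12-trial blocks until 11 of 12 are correct, probe symmetry without feedback, and return to training if the probe is failed. The sketch below is a minimal illustration under those assumptions; participant responding is simulated so the code runs standalone.

```python
import random

MASTERY = 11  # criterion: 11 of 12 trials correct

def run_block(relations, feedback, p_correct=0.9):
    """One 12-trial block: each of four relations presented three times in
    random order. Responses are simulated with probability p_correct; in the
    study they came from the participant via the match-to-sample display."""
    trials = [relation for relation in relations for _ in range(3)]
    random.shuffle(trials)
    correct = 0
    for _trial in trials:
        is_correct = random.random() < p_correct
        if feedback:
            print("correct" if is_correct else "incorrect")  # chime/chord omitted
        correct += is_correct
    return correct

def train_until_symmetry_passed(trained_relations, symmetry_relations):
    """Repeat training to criterion, then probe symmetry without feedback;
    if the symmetry probe is failed, return to training."""
    while True:
        while run_block(trained_relations, feedback=True) < MASTERY:
            pass  # repeat the training block
        if run_block(symmetry_relations, feedback=False) >= MASTERY:
            return

train_until_symmetry_passed(["A1-B1", "A2-B2", "A3-B3", "A4-B4"],
                            ["B1-A1", "B2-A2", "B3-A3", "B4-A4"])
```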

After passing the definition-to-design-name (B-
A) symmetry test, training began on the design-
name-to-graph (A-C) relations. The design names
appeared as the sample stimuli, and graphs
appeared as comparison stimuli. Training was
conducted in the same manner as for the design-
name-to-definition (A-B) relations, with a training
block containing 12 trials and each relation (e.g.,
A1-C1) presented three times. After meeting the
criterion of 11 of 12 trials correct, participants
advanced to the graph-to-design-name (C-A)
symmetry test. This test was similar to the
definition-to-design-name (B-A) symmetry test
and included 12 trials, with a mastery criterion
of 11 of 12 correct. If the criterion was not
attained on the graph-to-design-name (C-A)
symmetry test, design-name-to-graph (A-C) train-
ing was repeated, and the graph-to-design-name
(C-A) symmetry test again was administered until
mastery was achieved.

Following the graph-to-design-name (C-A)
symmetry test, the design-name-to-vignette (A-
D) relations were trained in the same manner as
the design-name-to-definition (A-B) relations.
The design name was presented as the sample
stimulus, and the clinical vignettes appeared as
comparison stimuli. A training block contained
12 trials with each relation (e.g., A1-D1)
presented three times, and the criterion for
advancing was 11 of 12 correct. After achieving
mastery on the design-name-to-vignette (A-D)
relations, the vignette-to-design-name (D-A)
symmetry test was presented in the same
manner as the definition-to-design-name (B-
A) symmetry test. The test included 12 trials
with a mastery criterion of 11 of 12 correct. If
criterion was not achieved, design-name-to-
vignette (A-D) training was repeated, and the
vignette-to-design-name (D-A) symmetry test
was presented again until mastery was attained.

Mixed symmetry test. This test included 36
trials that evaluated the definition-to-design-
name (B-A), graph-to-design-name (C-A), and
vignette-to-design-name (D-A) symmetry rela-
tions. As in the previous tests, each relation (e.g.,
B1-A1) was presented three times. Trials were
presented in random order, and the mastery
criterion was 33 of 36 correct (91.7%). No
feedback was provided following responses.
Participants continued to the next test phase
regardless of performance.

Transitivity test. This test consisted of 48
trials that evaluated the definition-to-graph
(B-C), graph-to-definition (C-B), graph-to-
vignette (C-D), and vignette-to-graph (D-C)
transitive relations. Each relation (e.g., B1-C1)
was presented three times, and the criterion was
44 of 48 trials correct (91.7%). Participants
continued to the next test after completion of all
48 trials regardless of performance.

Equivalence test. This test included 24 trials
that evaluated the definition-to-vignette (B-D)
and vignette-to-definition (D-B) equivalence
relations. Each relation (e.g., B1-D1) was pre-
sented three times, and the criterion was 22 of 24
trials correct (91.7%). Participants continued to
the next test regardless of performance.

Generalization test. The first test for general-
ization included 12 trials that evaluated the
design-name-to-novel-graph (A-C′) relations.
The design name was presented as the sample
stimulus, and the novel graphs appeared as
comparisons. Each relation (e.g., A1-C′1) was
presented three times, and the criterion was 11
of 12 trials correct (91.7%). No feedback was
provided following responses, and participants
continued to the subsequent generalization test
regardless of performance.

The second test for generalization evaluated the
design-name-to-novel-vignette (A-D′) relations
and was presented in the same manner as the test
for design-name-to-novel-graph (A-C′) relations.
The design name appeared as the sample stimulus,
and the novel vignettes were presented as
comparisons. This test included 12 trials with
each relation (e.g., A1-D′1) presented three times,
and no feedback was provided after responses.
Following completion of the design-name-to-
novel-vignette (A-D′) generalization test, the
computer program ended, and the paper-and-
pencil posttest was administered for 11 of the 15
participants in this group. The remaining four
participants continued to the tact test.
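
For reference, the computer-based test phases for this group followed the sequence summarized below. The sketch only consolidates the trial counts and criteria reported above; phase labels are descriptive, and None marks a phase for which no mastery criterion was stated.

```python
# Test phases for the equivalence group: (phase, trials, criterion correct).
# Training blocks (A-B, A-C, A-D) preceded the corresponding symmetry tests.
TEST_PHASES = [
    ("pretest: B-D, D-B, A-C', A-D'",          48, None),
    ("B-A symmetry",                            12, 11),
    ("C-A symmetry",                            12, 11),
    ("D-A symmetry",                            12, 11),
    ("mixed symmetry: B-A, C-A, D-A",           36, 33),
    ("transitivity: B-C, C-B, C-D, D-C",        48, 44),
    ("equivalence: B-D, D-B",                   24, 22),
    ("generalization: novel graphs (A-C')",     12, 11),
    ("generalization: novel vignettes (A-D')",  12, None),
]
```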

Tact test. The tact test was conducted in two
blocks using flash cards presented by the
experimenter. Before beginning the tact test,
the experimenter read the following instructions
to the participant: “I’m going to show you some
cards, and I’d like you to tell me which
experimental design the picture or description
on the card represents. I won’t tell you whether
your responses are correct or incorrect, but try
your best.” An individual trial consisted of
the experimenter showing the participant one
card and asking, “What design is this?” The
participant was allowed 10 s to review the
stimulus and respond. No feedback was
provided after responses, and if the participant
failed to tact the stimulus within 10 s, the next
trial was presented.

The first tact-test block included 24 trials
that evaluated the graph-to-design-name (C-A)
and vignette-to-design-name (D-A) relations.
The graph (C) and vignette (D) stimuli used
during training on the stimulus equivalence
protocol were individually presented on cards
(7 cm by 10 cm). Trials were presented in a
predetermined random sequence, and each
relation (e.g., C1-A1) was presented three
times. Criterion for mastery was 22 of 24
correct (91.7%), and the second test block
immediately followed regardless of perfor-
mance. The second tact-test block consisted of
24 trials that tested the novel-graph-to-design-
name (C′-A) and novel-vignette-to-design-
name (D′-A) generalization relations, and it
was conducted in the same manner as the tact
test for the trained stimuli. The novel graph
(C′) and novel clinical vignette (D′) stimuli
were presented on flash cards. Criterion for
mastery was 22 of 24 correct (91.7%). Regard-
less of performance, the paper-and-pencil post-
test was administered after the tact test.

Interobserver agreement on the tact test was
collected by an independent observer for 50%
of sessions. Interobserver agreement scores were
calculated by dividing the number of trial-
by-trial agreements by the total number of
agreements plus disagreements and multiplying
that value by 100%. Trial-by-trial agreement
was 100% for all sessions.

RESULTS

Session Length

Participants in the equivalence group spent
an average of 85 min completing the stimulus
equivalence protocol (range, 62 to 120 min).
All participants in the lecture group spent
56 min in the instructional portion of the
experiment viewing the recorded lecture.

Paper-and-Pencil Evaluation

Results for the paper-and-pencil quiz for
both the equivalence and lecture groups are
presented in Table 2. For the equivalence
group, mean quiz scores …
