
Program Evaluation Studies

TK Logan and David Royse

A variety of programs have been developed to address social problems such as drug addiction, homelessness, child abuse, domestic violence, illiteracy, and poverty. The goals of these programs may include directly addressing the origin of the problem or moderating the effects of these problems on individuals, families, and communities. Sometimes programs are developed to prevent something from happening, such as drug use, sexual assault, or crime.

These kinds of problems, and the programs to help people affected by them, are often what attracts many social workers to the profession; we want to be part of the mechanism through which society provides assistance to those most in need. Despite low wages, bureaucratic red tape, and routinely uncooperative clients, we tirelessly provide services that are invaluable but that may also, at various times, be or become insufficient or inappropriate. But without conducting evaluation, we do not know whether our programs are helping or hurting, that is, whether they only postpone the hunt for real solutions or truly construct new futures for our clients. This chapter provides an overview of program evaluation in general and outlines the primary considerations in designing program evaluations.

Evaluation can be done informally or formally. As consumers, we are constantly informally evaluating products, services, and information. For example, we may choose not to return to a store or an agency again if we did not evaluate the experience as pleasant. Similarly, we may mentally take note of unsolicited comments or anecdotes from clients and draw conclusions about a program. Anecdotal and informal approaches such as these generally are not regarded as carrying scientific credibility. One reason is that decision biases play a role in our "informal" evaluation. Specifically, vivid memories or strongly negative or positive anecdotes will be overrepresented in our summaries of how things are evaluated. This is why objective data are necessary to truly understand what is or is not working.

By contrast, formal evaluations systematically examine data from and about programs and their outcomes so that better decisions can be made about the interventions designed to address the related social problem. Thus, program evaluation involves the use of social research methodologies to appraise and improve the ways in which human services, policies, and programs are conducted. Formal evaluation, by its very nature, is applied research.

Formal program evaluations attempt to answer the following general question: Does the program work? Program evaluation may also address questions such as the following: Do our clients get better? How does our success rate compare to those of other programs or agencies? Can the same level of success be obtained through less expensive means?


PART II • QUANTITATIVE APPROACHES: TYPES OF STUDIES

What is the experience of the typical client? Should this program be terminated and its funds applied elsewhere?

Ideally, a thorough program evaluation would address more complex questions in three main areas: (1) Does the program produce the intended outcomes and avoid unintended negative outcomes? (2) For whom does the program work best and under what conditions? (3) How well was a program model developed in one setting adapted to another setting?

Evaluation has taken an especially prominent role in practice today because of the focus on evidence-based practice in social programs. Social work, as a profession, has been asked to adopt evidence-based practice as an ethical obligation (Kessler, Gira, & Poertner, 2005). Evidence-based practice is defined differently by different authors, but most definitions include using program evaluation data to help determine best practices in whatever area of social programming is being considered. In other words, evidence-based practice includes using objective indicators of success in addition to practice wisdom or more subjective indicators of success.

Formal program evaluations can be found on just about every topic. For instance, Fraser, Nelson, and Rivard (1997) have examined the effectiveness of family preservation services; Kirby, Korpi, Adivi, and Weissman (1997) have evaluated an AIDS and pregnancy prevention middle school program. Morrow-Howell, Becker-Kemppainen, and Judy (1998) evaluated an intervention designed to reduce the risk of suicide in elderly adult clients of a crisis hotline. Richter, Snider, and Gorey (1997) used a quasi-experimental design to study the effects of a group work intervention on female survivors of childhood sexual abuse. Leukefeld and colleagues (1998) examined the effects of an HIV prevention intervention with injecting drug and crack users. Logan and colleagues (2004) examined the effects of a drug court intervention as well as the costs of drug court compared with the economic benefits of the drug court program.

Basic Evaluation Considerations

Before beginning a program evaluation, several issues must be initially considered. These issues involve decisions that are critical in determining the evaluation methodology and goals. Although you may not have complete answers to these questions when beginning to plan an evaluation, they help in developing the plan and must be answered before an evaluation can be carried out. We can sum up these considerations with the following questions: who, what, where, when, and why.

First, who will do the evaluation? This seems like a simple question at first glance. However, this particular consideration has major implications for the evaluation results. Program evaluators can be categorized as either internal or external. An internal evaluator is someone who is a program staff member or regular agency employee, whereas an external evaluator is a professional, on contract, hired for the specific purpose of evaluation. There are advantages and disadvantages to using either type of evaluator. For example, the internal evaluator probably will be very familiar with the staff and the program. This may save a lot of planning time. The disadvantage is that evaluations completed by an internal evaluator may be considered less valid by outside agencies, including the funding source. The external evaluator generally is thought to be less biased in terms of evaluation outcomes because he or she has no personal investment in the program. One disadvantage is that an external evaluator frequently is viewed as an "outsider" by the staff within an agency. This may affect the amount of time necessary to conduct the evaluation or cause problems in the overall evaluation if agency staff are reluctant to cooperate.

CHAPTER 13 • PROGRAM EVALUATION STUDIES

Second, what resources are available to conduct the evaluation? Hiring an outside evaluator can be expensive, while having a staff person conduct the evaluation may be less expensive. So, in a sense, you may be trading credibility for lower cost. In fact, each methodological decision involves a trade-off among credibility, level of information, and resources (including time and money). Also, the amount and level of information, as well as the research design, will be determined to some extent by what resources are available. A comprehensive and rigorous evaluation takes significant resources.

Third, where will the information come from? If an evaluation can be done using existing data, the cost will be lower than if data must be collected from numerous people, such as clients and/or staff across multiple sites. So having some sense of where the data will come from is important.

Fourth, when is the evaluation information needed? In other words, what is the timeframe for the evaluation? The timeframe will affect the costs and the design of the research methods.

Fifth, why is the evaluation being conducted? Is the evaluation being conducted at the request of the funding source? Is it being conducted to improve services? Is it being conducted to document the cost-benefit trade-off of the program? If future program funding decisions will depend on the results of the evaluation, then a lot more importance will be attached to it than if a new manager simply wants to know whether clients were satisfied with services. The more that is riding on an evaluation, the more attention will be given to the methodology and the more threatened staff can be, especially if they think that the purpose of the evaluation is to downsize and trim excess employees. In other words, there are many reasons an evaluation may be considered, and these reasons have implications for the evaluation methodology and implementation.

Once the issues described above have been considered, more complex questions and trade-offs must be addressed in planning the evaluation. Specifically, six main issues guide and shape the design of any program evaluation effort and must be given thoughtful and deliberate consideration:

1. Defining the goal of the program evaluation

2. Understanding the level of information needed for the program evaluation

3. Determining the methods and analysis that need to be used for the program evaluation

4. Considering issues that might arise and strategies to keep the evaluation on course

5. Developing results into a useful format for the program stakeholders

6. Providing practical and useful feedback about the program strengths and weaknesses as well as providing information about next steps

Defining the Goal of the Program Evaluation

It is essential that the evaluator has a firm understanding of the short- and long-term objectives of the evaluation. Imagine being hired for a position but not being given a job description or informed about how the job fits into the overall organization. Without knowing why an evaluation is called for or needed, the evaluator might attempt to answer a different set of questions from those of interest to the agency director or advisory board. The management might want to know why the majority of clients do not return after one or two visits, whereas the evaluator might think that his or her task is to determine


whether clients who received group therapy sessions were better off than clients who received individual counseling.

In defining the goals of the program evaluation, several steps should be taken. First, the program goals should be examined. These can be learned through examining official program documents as well as through talking to key program stakeholders. In clarifying the overall purpose of the evaluation, it is critical to talk with different program "stakeholders." Scriven (1991) defines a program stakeholder as "one who has a substantial ego, credibility, power, futures, or other capital invested in the program. . . . This includes program staff and many who are not actively involved in the day-to-day operations" (p. 334). Stakeholders include both supporters and opponents of the program as well as program clients or consumers, or even potential consumers or clients. It is essential that the evaluator obtain a variety of different views about the program. By listening to and considering stakeholder perspectives, the evaluator can ascertain the most important aspects of the program to target for the evaluation by looking for overlapping concerns, questions, and comments from the various stakeholders. However, it is important that the stakeholders have some agreement on what program success means. Otherwise, it may be difficult to conduct a satisfactory evaluation.

In defining the program evaluation goals, it is also important to consult the extant literature to understand what similar programs have used to evaluate their outcomes, as well as to understand the theoretical basis of the program. Furthermore, it is critical that the evaluator works closely with whoever initiated the evaluation to set priorities for the evaluation. This process should identify the intended outcomes of the program and which of those outcomes, if not all of them, will be evaluated. Taking the evaluation a step further, it may be important to include the examination of unintended negative outcomes that may result from the program. Stakeholders and the literature will also help to determine those kinds of outcomes.

Once the overall purpose and priorities of the evaluation are established, it is a good idea to develop a written agreement, especially if the evaluator is an external one. Misunderstandings can and will occur months later if things are not put in black and white.

Understanding the Level of Information Needed for the Program Evaluation

The success of the program evaluation revolves around the evaluator's ability to develop practical, researchable questions. A good rule to follow is to focus the evaluation on one or two key questions. Too many questions can lengthen the process and overwhelm the evaluator with so much data that, instead of facilitating a decision, the evaluation might produce inconsistent findings. Sometimes, funding sources require only that some vague, undefined type of evaluation be conducted. The funding sources might neither expect nor desire dissertation-quality research; they simply might expect "good faith" efforts when beginning evaluation processes. Other agencies may be quite demanding in the types and forms of data to be provided. Obviously, the choice of methodology, data collection procedures, and reporting formats will be strongly affected by the purpose, objectives, and questions examined in the study.

It is important to note the difference between general research and evaluation. In research, the investigator often focuses on questions based on theoretical considerations or hypotheses generated to build on research in a specific area of study. Although


program evaluations may focus on an intervention derived from a theory, the evaluation questions should, first and foremost, be driven by the program's objectives. The evaluator is less concerned with building on prior literature or contributing to the development of practice theory than with determining whether a program worked in a specific community or location.

There are two main types of evaluation questions. There are questions that focus on client outcomes, such as, "What impact did the program have?" These kinds of questions are addressed by using outcome evaluation methods. Then there are questions that ask, "Did the program achieve its goals?" "Did the program adhere to the specified procedures or standards?" or "What was learned in operating this program?" These kinds of questions are addressed by using process evaluation methods. We will examine both of these evaluation approaches in the following sections.

Process Evaluation
Process evaluations offer a "snapshot" of the program at any given time. Process evaluations typically describe the day-to-day program efforts; program modifications and changes; outside events that influenced the program; people and institutions involved; culture, customs, and traditions that evolved; and the sociodemographic makeup of the clientele (Scarpitti, Inciardi, & Pottieger, 1993). Process evaluation is concerned with identifying program strengths and weaknesses. This level of program evaluation can be useful in several ways, including providing a context within which to interpret program outcomes and allowing other agencies or localities wishing to start similar programs to benefit without having to make the same mistakes.

As an example, Bentelspacher, DeSilva, Goh, and LaRowe (1996) conducted a process evaluation of the cultural compatibility of psychoeducational family group treatment with ethnic Asian clients. As another example, Logan, Williams, Leukefeld, and Minton (2000) conducted a detailed process evaluation of drug court programs before undertaking an outcome evaluation of the same programs. The Logan et al. study used multiple methods to conduct the process evaluation, including in-depth interviews with the program administrative personnel; interviews with each of five judges involved in the program; surveys and face-to-face interviews with 22 randomly selected current clients; and surveys of all program staff, 19 community treatment provider representatives, 6 randomly selected defense attorney representatives, 4 prosecuting attorney representatives, 1 representative from the probation and parole office, 1 representative from the local county jail, and 2 police department representatives. In all, 69 different individuals representing 10 different agency perspectives provided information about the drug court program. Also, all agency documents were examined and analyzed, observations of various aspects of the program process were conducted, and client intake data were analyzed as part of the process evaluation. The results were all integrated and compiled into one comprehensive report.

What makes a process evaluation so important is that researchers often have relied only on selected program outcome indicators, such as termination and graduation rates or number of rearrests, to determine effectiveness. However, to better understand how and why a program such as drug court is effective, an analysis of how the program was conceptualized, implemented, and revised is needed. Consider this example: say one outcome evaluation of a drug court program showed a graduation rate of 80% of those who began the program, while another outcome evaluation found that only 40% of those who began the program graduated. Suppose further that the graduates of the second program were more likely to be free from substance use and criminal behaviors at the 12-month follow-up than the graduates


from the first program. A process evaluation could help to explain the specific differences in factors such as selection (how clients get into the programs), treatment plans, monitoring, program length, and other program features that may influence how many people graduate and stay free from drugs and criminal behavior at follow-up. In other words, a process evaluation, in contrast to an examination of program outcomes only, can provide a clearer and more comprehensive picture of how drug court affects those involved in the program. More specifically, a process evaluation can provide information about program aspects that need to be improved and those that work well (Scarpitti, Inciardi, & Pottieger, 1993). Finally, a process evaluation may help to facilitate replication of the drug court program in other areas. This often is referred to as technology transfer.
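The arithmetic behind the two-program example is worth making explicit: a headline graduation rate says little about how many program entrants actually end up drug-free. The short Python sketch below illustrates this; only the 80% and 40% graduation figures come from the example above, while the follow-up success rates for graduates are hypothetical numbers invented purely for illustration.

```python
# Illustrative sketch only. The graduation rates (80% vs. 40%) are from the
# chapter's hypothetical example; the drug-free rates among graduates are
# assumed values chosen to show how headline rates can mislead.

def drug_free_share(entrants, grad_rate, grad_success_rate):
    """Share of ALL entrants who graduate AND are drug-free at follow-up."""
    graduates = entrants * grad_rate
    return graduates * grad_success_rate / entrants

# Program A: high graduation rate, but (assumed) fewer graduates stay drug-free.
a = drug_free_share(entrants=100, grad_rate=0.80, grad_success_rate=0.40)
# Program B: low graduation rate, but (assumed) more graduates stay drug-free.
b = drug_free_share(entrants=100, grad_rate=0.40, grad_success_rate=0.90)

print(f"Program A: {a:.0%} of entrants drug-free at follow-up")  # 32%
print(f"Program B: {b:.0%} of entrants drug-free at follow-up")  # 36%
```

Under these assumed numbers, the program with the lower graduation rate returns more drug-free entrants overall, which is precisely why a process evaluation must examine selection, retention, and treatment features rather than graduation rates alone.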

A different but related process evaluation goal might be a description of the failures and departures from the way in which the intervention originally was designed. How were the staff trained and hired? Did the intervention depart from the treatment manual recommendations? Influences that shape and affect the intervention that clients receive (e.g., delayed funding or staff hires, changes in policies or procedures) need to be identified because they affect the fidelity of the treatment program.
