Public Performance & Management Review, Vol. 28 No. 3, March 2005, pp. 309–325.
© 2005 M.E. Sharpe, Inc. All rights reserved.

A CASE STUDY OF PROGRAM
EVALUATION IN LOCAL GOVERNMENT

Building Consensus Through Collaboration

MAUREEN BERNER
University of Northern Iowa

MATT BRONSON
Marin County Administrator’s Office, San Rafael, California

ABSTRACT: This article explores the potential of the collaborative approach
for local government program evaluation, particularly programs administered
by nonprofits. City budget staff in Charlotte, North Carolina, partnered with the
nonprofit in charge of a special tax district in a collaborative evaluation (CE) to
assess the district’s impact. An examination of this experience suggests that CEs
are costly in terms of time and staff resources and raises questions about cooption
and bias. However, it also suggests the promise of increased communication and
better relations between local governments and nonprofits, improved evaluation
skills of program staff, and increased likelihood of utilization of evaluation results.

KEYWORDS: accountability, collaborative evaluation, local government program
evaluation

Local governments are continually faced with accountability demands from
their governing boards and citizens, especially in times of fiscal stress. To help
meet these demands, some larger units have given the budget office evaluation
responsibilities for local government programs. In the traditional process, the analyst
requests certain information, the program personnel produce it, and there is
little additional communication between the two offices until judgment is ren-
dered in the form of budget increases or cuts. One could argue that this process is
a form of evaluation, albeit a cursory one. This article highlights the experience
of Charlotte, North Carolina, in using an alternative approach—collaborative
evaluation (CE). Based on collaboration between the evaluator and program per-
sonnel, its purported value is in changing the nature of evaluation from adversarial
to partnership-based. Judgment is rendered in both cases, but based on the experience
with this case, the collaborative approach presents certain advantages and
disadvantages over a traditional approach.

A collaborative approach to evaluation is not new, but it does not appear to be
commonly practiced in governmental settings, especially in evaluating nonprofit
programs. The evaluation highlighted here is of a program funded by a city gov-
ernment but administered by a local nonprofit. As local governments depend
more on nonprofits, it is important that the governments have systems in place
that evaluate the effectiveness and efficiency of services provided to the public
because ultimately they will be held responsible for the services the nonprofits
provide (Sawhill & Williamson, 2001).

We first briefly describe general evaluation processes in local government and
contrast them with a collaborative approach. We then illustrate CE with a case
study of the South End Evaluation conducted in 2002 by the City of Charlotte.
Material for this case study was obtained through literature review, interviews
with participants, and the direct experience of one of the authors. The analysis
was reviewed by eight local government management, nonprofit management,
and evaluation scholars and practitioners.

Evaluation in Local Government

Program evaluation can be a vague term. Although there is no consensus on a
standard academic definition, it can be generally thought of as a means of pro-
viding valid findings about the effectiveness of programs to those persons with
responsibilities or interests related to the program’s creation, continuation, or
improvement (Rossi & Freeman, 1989). Evaluations can be simple or complex,
short or lengthy, cursory or in-depth.

Regardless of the form, evaluations typically are done in five main steps. Al-
though presented in simple fashion here, each step has many additional layers
within it:

• Agree on and articulate the program goals and objectives;
• Agree on and declare the program theory or theory of change;
• Specify and agree on the criteria that will be used to measure success and the
standards that must be met;
• Gather data according to the criteria to see if the standards have been met; and
• Interpret the data and present results in a meaningful and useful way.
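
For readers who maintain evaluation plans in a structured form, these five steps can be sketched as a simple checklist-style record. The Python sketch below is purely illustrative; the class, field names, and example entries are our own hypothetical choices and are not part of any formal evaluation model or of the Charlotte process described later.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    """Hypothetical outline mirroring the five evaluation steps (illustrative only)."""
    goals: list[str] = field(default_factory=list)           # 1. program goals and objectives
    theory_of_change: str = ""                                # 2. program theory / theory of change
    criteria: dict[str, str] = field(default_factory=dict)    # 3. success criterion -> standard to meet
    data: dict[str, float] = field(default_factory=dict)      # 4. data gathered for each criterion
    findings: list[str] = field(default_factory=list)         # 5. interpreted, reportable results

# Invented example loosely inspired by a special-tax-district review.
plan = EvaluationPlan(
    goals=["Assess the overall effectiveness of the tax district"],
    theory_of_change="Dedicated tax revenue funds services that raise property values and activity",
    criteria={"assessed value growth": "exceed the citywide growth rate"},
)
plan.data["assessed value growth (%)"] = 20.0
plan.findings.append("District growth (20%) exceeded citywide growth (about 4%)")
```

Recording a plan in this way makes it easy to check, at the interpretation stage, that every criterion agreed on in the third step actually has data behind it.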

The traditional approach to evaluation in a local government primarily in-
volves just the requesting agency and the evaluator. When other stakeholders are
consulted (if they are), it is usually early and late in the process (Cousins & Earl,
1995). In most cases, there is a clear requesting agency. For example, the manager’s
office, the finance department, or the budget office may be interested in under-
standing the value of a program or a project, either for its own purposes or to
satisfy a request of the governing board. Staff from one of these offices—an
evaluator—contacts a representative of the program in question and asks for in-
formation on the success of the program (e.g., a quarterly or end-of-the-year
report). If the program is internal to local government, the information may come
in the form of additional budget justification documents as part of the budget
request process.

Except in larger units, local government offices usually have inadequate time
and personnel available to conduct in-depth or lengthy studies. For example, in
North Carolina, Coe (2003) found that relatively few local units have budget
offices, and few of those have active work plans for the analyst. Indeed, lack of
resources is a common barrier to quality evaluations, or to any evaluation at all (McNeils &
Bickel, 1996). Anecdotal evidence suggests that the atmosphere surrounding lo-
cal government evaluations can be tense, creating anxiety among program staff
about the motive behind the request for detailed information (Is my budget going
to be cut? Is this program targeted for downsizing?) and fostering companion
suspicions by management staff (Is the program providing valid information? Is
staff hiding something that might make them look bad?).

The tension may intensify if the expectations for a program’s evaluation have
not been explicit from the beginning of the program, if the criteria for program
evaluation have changed, or if the evaluators do not communicate fully with pro-
gram staff. An adversarial atmosphere is not unusual in external evaluations (Usher,
1995). The adversarial atmosphere can extend to evaluations of community or
nonprofit organizations receiving funding from local governments. City or county
staff may have standard reporting requirements for such organizations, but at
times may require more substantive reports.

As with internal evaluations, beyond providing data, local government evalu-
ations or reviews of nonprofits generally do not have substantial involvement
from the organizations under review. Further, it is not clear whether they accu-
rately reflect the value of external programs. If financial information is the bulk
of the information provided, important aspects of the value of the organization
can be missed, such as effectiveness, community support, staff quality, and so on.
Kaplan (2001) suggests this in his argument for the use of a balanced scorecard
approach to measuring and managing organizational performance. However, in
his examples, the use of the scorecard is an internal process, and it is unclear whether
clients, funders, or others were included in evaluating progress toward the
organization’s mission.

The lack of involvement on the nonprofit’s or the community organization’s
part can be perceived as appropriate because it preserves the objectivity of the analysis. How-
ever, it also can lead to a lack of ownership of the resulting recommendations,
which can ultimately impede implementation, frustrating both nonprofit and the
funding government. Further, it does not encourage a sense of partnership in
solving community problems. A nonprofit expert interviewed for this research
felt that the value of traditional evaluations was low, stating, “At best the hierar-
chy might reward; at worst it will punish.”

A Collaborative Approach

The controversial alternative is to involve the agency or program being evaluated
in the evaluation itself as a partner. This often is referred to as “par-
ticipatory evaluation” or “collaborative evaluation.” Although some may use these
two terms interchangeably, we prefer the term “collaborative evaluation” because
it places a stronger emphasis on partnership than the term “participatory.” In CE,
representatives of most or all stakeholders—program staff,
affected citizens, politicians, and interest groups—are involved in the five steps
mentioned earlier, not just the requesting agency and the evaluator. Responsibil-
ity for completing the task is shared in various degrees. As discussed in the fol-
lowing, CEs purport to bring the positives of human interaction to the evaluation
process; the approach is controversial because, in doing so, it opens evaluation to greater
subjectivity and the threat of bias. Sharing control inherently means losing some
control.

CE developed in the late 1960s and early 1970s, as evaluations were criticized
as mechanistic and detached (Worthen, Sanders, & Fitzpatrick, 1997). The hu-
man perspective was missing, especially in evaluations of education and human
service organizations. Critics called for more direct interaction on the part of the
evaluator, greater use of qualitative research methods, and eventually, the signifi-
cant involvement of those being evaluated. In general, in a CE, the evaluator
plays the role of partner and participant in the process rather than an outside
expert; the organization’s staff, clients, board members, and sometimes even in-
terested community members have input in deciding whether to evaluate, what to
evaluate, how to draw conclusions, when to disseminate findings, and how and
when to implement recommendations (Worthen et al., 1997; Upshur & Barreto-
Cortez, 1995). The evaluation process in terms of the technical information gath-
ered or the analysis performed may not be different from a traditional evaluation.
The difference lies in personal relationships between the evaluator and the stake-
holders of the program. The defining feature of CEs is that stakeholders share a
significant degree of power (Mathie & Greene, 1997).

The importance of CEs is clear in the growing emphasis on the theory of
change approach to evaluations, in which the program theory and the assump-
tions underlying it are clearly articulated by the stakeholders, and the evaluation
focuses on testing those links. The theory of change idea grew from the work of
many evaluation theorists, but is most commonly attributed to Carol Weiss, who
applied the idea to evaluation of complex community/social change initiatives
(Aspen Institute Roundtable, 1995). A theory of change tracks both the set of
assumptions that underlie the mini-steps that lead to the long-term program goal of
interest and the program activities and outcomes that occur at each step of the
way. Mapping out the theory of change requires extensive interaction with stake-
holders. Indeed, an evaluation, or the program itself, may be doomed if stakehold-
ers unknowingly hold different theories of change. The potential value in an
evaluator’s working with stakeholders is being able to better identify who holds
what assumptions. The theory of change approach requires a shift in the role of
the evaluator: By working with stakeholders to identify the theory of change, the
evaluator moves from being an outside appraiser to being a collaborator (Brown,
1998).
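
To make the idea of "testing the links" concrete, the sketch below represents a theory of change as an ordered chain of activity-outcome pairs, each resting on an explicit assumption. The chain shown is invented for illustration; it is not HSE's or any program's actual theory of change.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """One link in a hypothetical theory-of-change chain (illustrative only)."""
    activity: str     # what the program does at this step
    outcome: str      # what the step is expected to produce
    assumption: str   # the belief connecting the activity to the outcome

# Invented chain for a generic district-revitalization program.
chain = [
    Link("Install signage and streetscape improvements",
         "Visitors can identify and navigate the district",
         "A visible physical identity increases foot traffic"),
    Link("Market the district and support transit access",
         "More shoppers and tenants are drawn to the area",
         "Awareness and access translate into commercial activity"),
    Link("Commercial activity grows",
         "Assessed property values rise faster than the citywide rate",
         "Revitalization is capitalized into property values"),
]

# An evaluation built on this chain would gather evidence for each assumption in turn.
for i, link in enumerate(chain, start=1):
    print(f"Step {i}: {link.activity} -> {link.outcome} (assumes: {link.assumption})")
```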

Support for increasing collaboration in evaluations is rooted in the evaluator’s
philosophical stance—a democratic, communitarian approach to evaluation.
Some argue it is an approach that will lead to organizational transformation, greater
citizen deliberation, and redefined power relationships (Caracelli, 2000; Cousins
& Earl, 1995; MacNeil, 2002; Mathie & Greene, 1997). Most CE practices do
not have such lofty goals, but focus instead on simply improving the quality of
the evaluation and increasing the likelihood that the results are useful for all
involved (Caracelli, 2000). A substantial body of evaluation literature argues
that involvement of stakeholders in the process will improve evaluation utiliza-
tion (e.g., see Fetterman, 1996; Greene, 1988). For example, dialogue between
program staff and evaluation teams seems to facilitate learning (Mausolff &
Spence, 2002). Some studies have shown that participants derive a strong sense
of satisfaction and professional development from the process (Cousins & Earl,
1995). A recent study (Turnbull, 1999) confirmed the theoretical links between
this type of evaluation climate, the perception of efficacy, and use of evaluation
results in both symbolic and instrumental ways. For example, O’Sullivan and
D’Agostino (2002) recently found that collaboration improved the quality of the
evaluation itself in a study of a countywide comprehensive early childhood edu-
cation initiative.

A key feature of CEs is the emphasis on the program staff’s learning to collect
and report data themselves, thus becoming increasingly enabled to self-evaluate
(O’Sullivan & O’Sullivan, 1998). In a similar vein, the Independent Sector, a
nonprofit coalition of more than 100 organizations with an interest in philan-
thropy and volunteerism, recently sponsored a book calling for “co-evaluation.”
It emphasizes empowering the stakeholders to evaluate their own programs and
organizations on an ongoing basis (Gray, 1998).

However, involving participants in the evaluation process has obvious draw-
backs. The most common charge against CEs is that they are unpredictable and carry
the danger of cooption (Upshur & Barreto-Cortez, 1995; Worthen et al., 1997).
Although CE is gaining in popularity, there is still a strong concern expressed in
the evaluation community about the potential threat to objectivity. In addition,
organizational support is vital, and the process is time-consuming. There is also the
question of the appropriate role for the outside professional. Evaluators can be
too close to the process, creating unrealistic expectations, but they need to be
close enough to still be the primary resource for technical analytic work (Cousins
& Earl, 1995). CEs of multisite programs can be especially challenging. If each
site’s staff conceives of the program theory in a way that reflects the context of
its own site, the evaluations can lose comparability (Petersen, 2002).

Some local governments have adopted or are exploring CEs. Evaluations in
Greensboro, a major urban area in central North Carolina, are examples, although
they were developed without an eye to any formal model. In Greensboro, the
internal evaluation function is housed with budget functions in the Budget and
Evaluation Department, which conducts several management studies each year.
Past studies from this office include evaluations of the City’s Park and Recreation
Department’s drama program, its stormwater services, and its loose-leaf collec-
tions program (texts of the completed evaluations are available at
www.ci.greensboro.nc.us/budget/mgmtstud/mgmtstud.htm).

According to Vicki Craft, a budget and management analyst, Greensboro’s
approach has been to work with departments as partners in evaluations. Although
the Manager’s Office or the City Council may request that an evaluation be done,
the Budget and Evaluation Department also takes requests for evaluations from
departments themselves. These departments see the Budget and Evaluation De-
partment as a valuable resource for helping them identify ways to solve problems
or improve operations. Staff of the Budget and Evaluation Department and repre-
sentatives of the department or program being evaluated make up evaluation teams.
Together they define and agree on a detailed plan of action, or written “contract.”
In the contract, the evaluation team tries to clearly identify the evaluation objec-
tive. Evaluation staff and the program staff sign the contract. This process helps
the evaluators and program staff define what information to gather and how to
use it. Although such a partnership does not always protect the process from
politics, it does appear to have turned the traditional view of evaluation staff from
potential adversaries to valuable resources, according to Craft.

To explore how a CE would work in detail, we present a case study of an
evaluation of a major community initiative from Charlotte, North Carolina. We
then examine the process in the context of the literature outlined above.

The South End Evaluation in Charlotte

Like most cities, Charlotte works closely with many nonprofit organizations to
provide services. These nonprofits, called “financial partners” by the City, range
from small neighborhood improvement groups to the Convention and Visitors
Bureau and receive millions of dollars annually in city funding. Like Greens-
boro, Charlotte has used a collaborative approach in conducting several internal
evaluations, such as evaluations of street maintenance operations and the imple-
mentation of certain capital projects. The city recently took this approach one
step further by conducting a CE of the performance of an external nonprofit
agency called Historic South End (HSE).

BACKGROUND ON HISTORIC SOUTH END

The South End, a historic industrial district adjacent to downtown Charlotte, has
experienced dramatic urban revitalization in the past 10 years. In response, in
1995, business leaders in the area formed the South End Development Corpora-
tion to further promote economic development. In early 2000, the corporation
petitioned the Charlotte City Council to establish a special tax assessment of $0.09
(nine cents) per $100 of valuation on all properties in the district. Levied in addition to the
city and county tax rates, this assessment was expected to generate $185,000 per
year initially. The request was approved in May 2000, along with a formal contract
to ensure that these dedicated tax revenues funded a defined list of initiatives in
four areas: physical improvements, public safety, marketing and commerce, and
support for a vintage trolley service. The development corporation reorganized as
HSE and hired an executive director to begin implementation of these initiatives.
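
As a rough consistency check on these figures (our own back-of-the-envelope arithmetic, not a number reported by the City or HSE), the expected first-year yield implies a total assessed value in the district of roughly $200 million:

```python
# Back-of-the-envelope check (authors' illustration, not a figure from the review).
rate_per_100 = 0.09          # special assessment: $0.09 per $100 of assessed valuation
expected_revenue = 185_000   # expected first-year yield, in dollars

# revenue = assessed_value * (rate_per_100 / 100), so invert to recover the tax base.
implied_assessed_value = expected_revenue / (rate_per_100 / 100)
print(f"Implied district tax base: ${implied_assessed_value:,.0f}")  # about $205.6 million
```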

At the same time, the City Council charged city staff to conduct a review of
HSE services within two years to ensure that dedicated funds were appropriately
spent. This charge was in response to concerns from some council members and
affected property owners that HSE could not adequately provide the services
supported by the special tax. Several property owners from one area of the dis-
trict hoped that the two-year review would provide the justification to request
formally that the City Council discontinue the special assessment. Because the
statutes governing these tax districts do not allow for sunset clauses, a formal
renewal was not required. The City Council can establish and abolish these dis-
tricts at any time, which is what some opponents were hoping for after the two-
year review. The City Council, therefore, addressed these concerns by approving
the district but guaranteeing its review in a short time.

The review was thus a pivotal point in the future of the South End tax district
and HSE as an organization. Because of the potentially controversial nature of
the situation, previous conversations with Greensboro evaluation staff about their
partnership approach, and Charlotte staff’s willingness to experiment, the city’s
Budget and Evaluation Office initiated a CE. Staff started by carefully selecting
an eight-person review team consisting of both city staff and HSE representa-
tives. City staff representing the Economic Development Office, the Planning
Commission, and the Budget and Evaluation Office were selected for their knowl-
edge of the community and their experience with projects there. HSE representatives
included the executive director, the board president, and three members of
the board. To ensure a variety of viewpoints, two of the three members from the
board were, or represented, property owners openly skeptical of this district. Staff
felt it was important to include these individuals at the very beginning of the
process so that all stakeholders would be engaged and would agree with
the final recommendation. They also felt such broad involvement would increase
the real and perceived legitimacy of the process.

THE EVALUATION PROCESS

The review team was convened in September 2001. A critical first step for the
team was to reach consensus on the evaluation’s goals, methodology, and timeline.
The three broad goals on which the members agreed were as follows:

• To evaluate the overall effectiveness of the tax district and to determine if any
changes were needed in the specific services or programs provided;

• To evaluate the role and the structure of the nonprofit organization providing
these services (HSE); and

• To review the boundaries of the tax district and the appropriateness of the corre-
sponding tax rate.

The review team decided to use a variety of methods in conducting the CE,
including:

• Surveys of property owners, merchants, and HSE board members;
• Personal interviews with key stakeholders inside and outside the district, includ-
ing business and civic leaders and City Council members;
• Focus groups with residents, business owners, and merchants in the South End;
• Gathering of key financial and performance information about HSE; and
• Gathering of data on nationwide trends and best practices regarding organiza-
tions operating in special tax districts.

As indicated by the first three methods just listed, the evaluation was heavily
based on stakeholders’ perceptions of the district’s effectiveness. It was primarily
concerned with what the community wanted from the creation of the special tax
district and the accompanying nonprofit organization and whether the commu-
nity felt that those goals had been achieved. For example, the survey questions
were to be answered on an importance/satisfaction scale of 1 through 10 (1 =
lowest possible score; 10 = highest possible score). This allowed the evaluation
team to measure perceptions of matters such as “overall quality of life in the
South End area” and “level of services provided by the Municipal Service Dis-
trict tax revenue.” Data from the survey, plus a high number of survey and focus
group comments indicating a general lack of awareness of the services supported
by the special tax, led the review team to conclude that lack of communication
was one of the major issues HSE needed to address. From the local government
perspective, collecting this information was time-consuming, but it enabled the
review team to obtain detailed feedback on the district. In addition, the focus
groups and interviews represented a prime opportunity to raise awareness of the
district with selected stakeholders. The review team also obtained financial and
performance information. For example, the team learned that the assessed prop-
erty value in the district had increased 20 percent from 2000, compared with
about 4 percent growth citywide.
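
To illustrate how responses on such a 1-10 importance/satisfaction scale might be tabulated, the sketch below computes mean importance, mean satisfaction, and the gap between them for each item. The item wording echoes the examples above, but the response values, and the gap-analysis framing itself, are hypothetical; the review's actual survey data are not reproduced here.

```python
from statistics import mean

# Hypothetical responses on a 1-10 scale (1 = lowest, 10 = highest); not actual survey data.
responses = {
    "overall quality of life in the South End area": {
        "importance": [9, 8, 10, 7],
        "satisfaction": [6, 7, 5, 6],
    },
    "level of services provided by the Municipal Service District tax revenue": {
        "importance": [8, 9, 7, 8],
        "satisfaction": [4, 5, 6, 5],
    },
}

for item, scores in responses.items():
    imp, sat = mean(scores["importance"]), mean(scores["satisfaction"])
    # A large positive gap flags items that respondents rate important but not yet satisfying.
    print(f"{item}: importance={imp:.1f}, satisfaction={sat:.1f}, gap={imp - sat:.1f}")
```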

The review team met a total of ten times from September 2001 to March 2002,
starting with monthly meetings and shifting to biweekly meetings to discuss find-
ings and recommendations at the end of the process. One of the authors (representing
the Budget and Evaluation Office) facilitated the team and coordinated the logis-
tics of the review process. Although the team did not establish formal “ground
rules,” members were expected to conduct themselves in a manner conducive to
achieving a consensus-based outcome. Team meetings were characterized by open
discussion and mutual decisions by the entire group, rather than unilateral ac-
tions by the facilitator. All team members agreed to attend and actively partici-
pate in meetings, complete needed work assignments outside of meetings, and
openly discuss potential findings and recommendations. Reaching agreement on
the scope and methodology early in the process helped to set the tone of collabo-
ration from the beginning.

Once the review team collected all the data, this collaboration was put to the
test as the team spent several meetings reviewing the findings and developing
recommendations. This was the true test of the CE model—eight people repre-
senting diverse interests and backgrounds reaching consensus on a final report.
Before beginning the discussion of findings and recommendations, the team af-
firmed its goal of developing a consensus report and agreed on the need for a
common definition of “consensus.” After some discussion, the team’s definition
of consensus was that everyone on the team had to support each recommendation
and be able to implement it, even if they might not personally agree with it. This
agreement was a critical component for the review because one of the underlying
premises of the collaborative process was effective implementation of the recom-
mendations. The discussion about what consensus meant was challenging, as
team members wondered whether it was possible to reach consensus given the
diverse nature of the team. However, the team believed that the collaborative
process would allow expression of different viewpoints that would ultimately
lead to recommendations supported by the entire group.

Team members were engaged throughout the entire process by attending meet-
ings, completing work assignments, and actively participating in discussions.
The workload was shared equally among team members rather than falling solely on one
person, reflecting the shared sense of responsibility and leadership within the group. The
rapport developed earlier in the process was beneficial in keeping the team together
through several challenging conversations on a variety of potential
findings and recommendations.

Ultimately, the team agreed on 20 findings and recommendations to present to
the HSE board and City Council. Key findings from the review included:

• The South End experienced dramatic economic revitalization. In addition to the
property value growth mentioned previously, more than a dozen new shops,
restaurants, and other retail establishments had opened, bringing people into the
South End on a regular basis. There was little information, however, on the
direct economic effect of these new shops.

• The Charlotte Trolley was a critical force behind the success of the district. Tax
values along the trolley corridor now totaled $250 million, compared with $14
million before the trolley’s installation in 1998. HSE’s support of the trolley was
a strong factor in the trolley’s operation.

• HSE made significant strides in its first two years, particularly in providing some
services to and advocacy for the South End area. Examples included installing
decorative street signs, developing entertainment guides, and advocating the
district’s interests with key stakeholders. HSE was particularly noted for devel-
oping relationships with key stakeholders around the city, a task that is often
underappreciated and time-consuming but vital to the success of a neighborhood.

• HSE needed to focus more on providing tangible services to the district. Com-
munication was a primary example: The nonprofit had to do a much better job of
telling the South End story to visitors, stakeholders, and current and potential
developers or residents. In addition, the South End still was not a destination
location for Charlotte residents. More special events were needed that would
regularly draw people to the area. Finally, not enough had been done to identify
the South End. For example, the lack of clear boundaries marked by signage made
it difficult for people to know exactly where the South End was.

• HSE needed to strengthen itself and its role in shaping the South End. The orga-
nization had to become more visible and active to serve as the “common voice”
for the district. Other similar tax districts, such as the center city of Charlotte,
achieved success primarily when one organization served as the official voice of
the area and developed a scope of work on behalf of the entire area.

• Finally, the city had invested significant resources and staff time in the improve-
ment of the South End. Individuals from multiple business units worked on
projects that affected the South End every day, such as mass transit, pedestrian plan-
ning, and community safety. However, the lack of an organized forum in which
people could interact and share ideas hindered effective coordination of and
communication among these projects.

On the …
