Performance Indicators for ADR Program Evaluation
Electronic Guide to Federal Procurement ADR
Dispute Systems Design Working Group
Administrative Conference of the United States
November 1993
Background Information
Evaluation is a key to determining whether an alternative dispute resolution (ADR) program has met or is successfully meeting its goals. It may also be used to assess the need for changes in the day-to-day administration of the program. Taking the time up front to carefully plan and design an evaluation of your ADR program will help to ensure that relevant information will be available to managers and decisionmakers to assess the effectiveness of the ADR program and to determine whether the program should be continued and/or modified.
This document is intended to serve several purposes. Ultimately, it will become part of a handbook on evaluation of Federal agency ADR programs. The handbook will address a whole range of issues that arise in the context of ADR program evaluation, including planning, designing, and implementing evaluations. In the meantime, this document is intended to provide initial guidance on the identification of both program goals and program measures. Program goals and measures are really two sides of the same coin. Goals are what your program seeks to accomplish; measures are used to determine whether those goals have been met.
The material contained in this document can be used in conjunction with the Administrative Conference’s Dispute Systems Design Working Group’s Pre-design Organizational Checklist to stimulate ideas about ADR program goals. It can also be used more directly to identify possible measures of success for ADR programs. It can therefore be used at both the “front” and “back” ends of program planning and implementation (design and evaluation, respectively).
Evaluations are conducted for different reasons and take different forms. Evaluation may be aimed at (1) determining whether the outcomes of a program are consistent with the program’s declared goals, (2) determining whether the program is running the way it was intended to, and/or (3) determining whether changes in the program would improve its usefulness. Evaluations may be comprehensive in nature, rely on a significant degree of internal or external professional evaluation expertise, involve a great deal of planning, and take a rather lengthy time to complete. At the other end of the spectrum, evaluations may be aimed at providing more of a “snapshot” of where a program stands, at examining a particular area within a program, or at capturing the impact of specific changes in program coverage or administration. They may involve less planning and outside evaluation expertise, and take a relatively short period of time to complete. Or, the nature and form of an evaluation may fall somewhere in between these two ends of the spectrum. The reasons for which an evaluation is conducted, and the form it takes, will vary from agency to agency, and from time to time, depending on evaluation needs and constraints (e.g. budgetary), and each agency’s particular mission/culture. Evaluations need to be designed to be responsive to managers and decisionmakers with different needs and interests.
The list of indicators below is divided into two categories, one dealing with program effectiveness (i.e. whether or not a program is meeting its goals), and one dealing with program design and administration (i.e. whether or not a program is being administered as it should be). (NOTE: the terms program measures and performance indicators are used interchangeably throughout this document to refer to specific ways of examining program effectiveness or administration.) These categories are not mutually exclusive, and the list itself is intended to be as comprehensive as possible, in order to cover a wide range of agency interests/needs. It is unlikely that all of the measures listed below would apply to any single evaluation; rather, some will apply in some cases and others will not. Each measure is followed by one or more questions intended to further illustrate the kinds of evaluation issues an agency may wish to pursue.
The list, overall, is intended simply as a “sampling” of measures or indicators from which agencies may pick and choose, as appropriate, as they seek to formulate ADR program goals and to identify possible measures of program effectiveness.
List of Indicators
I. Program Effectiveness (Impact)
Program effectiveness measures or indicators are aimed at assessing the degree to which an ADR program is meeting its goals. More specifically, program effectiveness measures are used to examine the impact of the program on users/participants, overall mission accomplishment, etc. In the case of ADR, an agency may, for example, be interested in looking at whether the use of ADR reduces the time it takes to resolve cases/disputes.
Effectiveness indicators should correspond directly to the goals or objectives of an ADR program. For example, if a goal of your agency’s ADR program is to reduce the backlog of cases, then the impact of the program on case disposition time needs to be assessed.
The indicators in the effectiveness category are further divided into three subcategories: efficiency, effectiveness, and customer satisfaction.
A. Efficiency
1. Cost
Cost to the Government of using ADR vs. traditional dispute resolution processes (e.g. negotiated settlements, agency findings, litigation).
Is the use of ADR more or less costly to the Government than the use of traditional means of dispute resolution? (Cost may be measured in staff time, dollars, or other quantifiable factors.)
Cost to disputants of using ADR vs. traditional dispute resolution processes.
Is the use of ADR more or less costly to disputants than the use of traditional means of dispute resolution? (Cost may be measured in staff time, dollars, or other quantifiable factors.)
2. Time
Time required to resolve disputes using ADR vs. traditional means of dispute resolution.
Are disputes resolved more or less quickly using ADR processes, compared to traditional means of dispute resolution? Such factors as administrative case processing time, participant preparation time, the duration of dispute resolution activities, and/or total days to resolution may be considered.
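As a minimal illustration of how these efficiency measures might be computed, the sketch below (in Python) compares average cost and days to resolution for ADR and traditional cases. The case records and field layout are hypothetical; an actual evaluation would draw on the agency’s own case-tracking data.

    # Illustrative only: the case records below are hypothetical.
    from statistics import mean

    # Each record: (process used, cost to the Government in dollars, days to resolution)
    cases = [
        ("adr", 2500, 45),
        ("adr", 4000, 60),
        ("traditional", 12000, 210),
        ("traditional", 9500, 180),
    ]

    def summarize(process):
        subset = [c for c in cases if c[0] == process]
        return mean(c[1] for c in subset), mean(c[2] for c in subset)

    for process in ("adr", "traditional"):
        cost, days = summarize(process)
        print(f"{process}: average cost ${cost:,.0f}, average days to resolution {days:.0f}")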
B. Effectiveness
1. Dispute Outcomes
Number of settlements achieved through the use of ADR vs. traditional dispute resolution processes.
Does the use of ADR result in more or fewer settlements?
Number of cases going beyond ADR steps.
Does the use of ADR result in more or fewer investigations, further litigation activities, etc.?
Nature of outcomes.
What impact does the use of ADR have on the nature of outcomes, e.g. do settlement agreements “look different” in the terms or monetary amounts agreed upon? Do settlement agreements reflect more “creative” solutions?
Do outcomes vary according to the type of ADR process used?
Relationship, for cases selected for ADR, between dispute outcomes and such factors as complexity or number of issues, or number of parties.
Is there any relationship, where ADR is used, between the complexity and/or number of parties/issues in a case and the outcome of the case?
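As a minimal sketch of how settlement rates and their relationship to case complexity might be tabulated (again with hypothetical records and field names), consider the following:

    # Illustrative only: records and field names are hypothetical.
    from collections import defaultdict

    cases = [
        {"process": "adr", "settled": True, "issues": 1},
        {"process": "adr", "settled": False, "issues": 4},
        {"process": "traditional", "settled": True, "issues": 2},
        {"process": "traditional", "settled": False, "issues": 3},
    ]

    def rate(records):
        # Fraction of records marked settled
        return sum(r["settled"] for r in records) / len(records)

    for process in ("adr", "traditional"):
        subset = [c for c in cases if c["process"] == process]
        print(f"{process}: settlement rate {rate(subset):.0%} ({len(subset)} cases)")

    # Settlement rate by number of issues, for ADR cases only
    by_issues = defaultdict(list)
    for c in cases:
        if c["process"] == "adr":
            by_issues[c["issues"]].append(c)
    for issues, subset in sorted(by_issues.items()):
        print(f"ADR cases with {issues} issue(s): settlement rate {rate(subset):.0%}")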
2. Durability of Outcomes
Rate of compliance with settlement agreements.
Does the use of ADR result in greater/lesser levels of compliance with settlement agreements?
Rate of dispute recurrence.
Does the use of ADR result in greater/lesser levels of dispute recurrence, i.e. recurrence of disputes among the same parties?
Impact on program/organizational environment.
Does use of ADR have the effect of improving the work environment, e.g. reducing the level of conflict and improving participant relationships, thereby contributing positively to mission accomplishment?
3. Impact on Dispute Environment
Size of case inventory.
Does the use of ADR result in an increase/decrease in case inventory?
Types of disputes.
Does the use of ADR have an impact on the types of disputes that arise?
Negative impacts.
Does the use of ADR have any negative consequences, e.g. an inability to diagnose and correct systemic problems/issues?
Timing of dispute resolution.
Does the use of ADR affect the stage at which disputes are resolved?
Level at which disputes are resolved.
Does the use of ADR have any impact on where and by whom disputes are resolved?
Management perceptions.
What are the quantitative and qualitative effects of using ADR on management, e.g. how does the use of ADR affect the allocation and use of management time and resources? Does the use of ADR ease the job of managing?
Public perceptions.
Is the public satisfied with ADR outcomes? Is there any perceived impact of use of ADR on effectiveness of the underlying program? (NOTE: “Public” may be defined differently, depending on the particular program/setting involved.)
C. Customer Satisfaction
1. Participants’ Satisfaction with Process
Participants’ perceptions of fairness.
What are participant perceptions of access to ADR, procedural fairness, fair treatment of parties by neutrals, etc.?
Participants’ perceptions of appropriateness.
What are participant perceptions of appropriateness of matching decisions (i.e. matching of particular ADR processes to particular kinds of disputes or specific cases)?
Participants’ perceptions of usefulness.
What are participant perceptions of the usefulness of ADR in the generation of settlement options, the quantity and reliability of information exchanged, etc.?
Participants’ perceptions of control over their own decisions/”destiny.”
Do participants feel a greater or lesser degree of control over the dispute resolution process and outcome through the use of ADR? Is greater control desirable?
2. Impact on Relationships Between Parties
Nature of relationships among the parties.
Does the use of ADR improve or otherwise change the parties’ perceptions of one another? Is there a decrease/increase in the level of conflict between the parties? Are the parties more or less likely to devise ways of dealing with future disputes? Are the parties able to communicate more directly/effectively at the conclusion of the ADR process and/or when new problems arise?
3. Participants’ Satisfaction with Outcomes
Participants’ satisfaction with outcomes.
Are participants satisfied or dissatisfied with the outcomes of cases in which ADR has been used?
Participants’ willingness to use ADR in future.
Would participants elect to use ADR in future disputes?
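Customer satisfaction measures such as those above are typically gathered through exit surveys of participants. The sketch below tallies hypothetical responses on a 1-to-5 scale; the questions and data are illustrative only, not a recommended survey instrument.

    # Illustrative only: survey questions and responses are hypothetical.
    from collections import Counter
    from statistics import mean

    # Responses on a scale of 1 (very dissatisfied) to 5 (very satisfied)
    responses = {
        "fairness of the ADR process": [5, 4, 4, 3, 5],
        "satisfaction with the outcome": [4, 2, 5, 4, 3],
        "willingness to use ADR again": [5, 5, 4, 3, 4],
    }

    for question, scores in responses.items():
        distribution = dict(sorted(Counter(scores).items()))
        print(f"{question}: mean {mean(scores):.1f}, distribution {distribution}")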
II. Program Design and Administration (Structure and Process)
How a program is implemented will have an impact on how effective it is in meeting its overall goals. Program design and administration measures or indicators are used to examine this relationship and to determine how a program can be improved.
The indicators in the program design and administration category are further subdivided into three subcategories: program organization, service delivery, and program quality.
A. Program Organization
1. Program structure and process.
Are program structure and process consistent with underlying laws, regulations, executive orders, and/or agency guidance?
Do program structure and process adequately reflect program design? Are program structure and process adequate to permit appropriate access to and use of the program?
2. Directives, guides, and standards.
Do program directives, guides, and standards provide staff/users with sufficient information to appropriately administer/use the program?
3. Delineation of responsibilities.
Does the delineation of staff/user responsibilities reflect program design? Does the delineation of responsibilities foster smooth and effective program operation?
4. Sufficiency of staff (number/type).
Is the number/type of program staff consistent with program design and operational needs?
5. Coordination/working relationships.
Is needed coordination with other relevant internal and external individuals and organizations taking place? Have effective working relationships been established to carry out program objectives?
B. Service Delivery
1. Access and Procedure.
Participant access to ADR option.
Are potential participants made aware of the ADR program? Is the program made available to those interested in using ADR?
Relationship between participant perceptions of access and usage of ADR.
What impact do participants’ perceptions of the availability of the program have on levels of program usage?
Participant understanding of procedural requirements.
Do program users understand how the program works? Did they feel comfortable with the process in advance?
Relationship between procedural understanding and rates of usage.
Is there any relationship between the level of participant understanding and the degree of program use, e.g. is a lack of participant understanding serving as a disincentive to using the ADR program?
2. Case Selection Criteria.
Participant perceptions of fairness, appropriateness.
Do participants feel that appropriate types of cases are being handled in the ADR program? Do participants or non-participants feel that the criteria for determining which cases are eligible for ADR are fair? Are cases being sent to the ADR program at the appropriate dispute stages?
Relationship between dispute outcomes and categories of cases.
Is there a correlation between case characteristics (nature, size, types of disputants, and/or stage of the dispute) and dispute outcomes? Are certain types of cases more likely to be resolved through ADR than others?
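One simple way to examine this relationship is a rate-by-category tabulation, sketched below with hypothetical records; a full evaluation would also consider whether apparent differences across categories are statistically meaningful.

    # Illustrative only: records and categories are hypothetical.
    from collections import defaultdict

    cases = [
        {"stage": "pre-filing", "settled": True},
        {"stage": "pre-filing", "settled": True},
        {"stage": "post-filing", "settled": False},
        {"stage": "post-filing", "settled": True},
    ]

    # Group settlement outcomes by the stage at which the case entered ADR
    by_stage = defaultdict(list)
    for case in cases:
        by_stage[case["stage"]].append(case["settled"])

    for stage, outcomes in by_stage.items():
        rate = sum(outcomes) / len(outcomes)
        print(f"{stage}: {rate:.0%} settled ({len(outcomes)} cases)")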
C. Program Quality
1. Training.
Participant perceptions of the appropriateness of staff and user training.
Do participants feel that they were provided with sufficient initial information and/or training on how to use the ADR program/process? Do they feel that program staff had sufficient training and/or knowledge to appropriately conduct the ADR program?
Relationship between training variables and dispute outcomes.
Is there a relationship between the type/amount of training (for participants and/or staff) and dispute outcomes?
2. Neutrals.
Participant views of the selection process.
Are participants satisfied with the manner in which neutrals were selected and assigned to cases? Were they involved in the selection decision? If not, did they feel they should be?
Relationship between participant views of the selection process, perceptions of neutral competence and objectivity, and dispute outcomes.
Is there any relationship between participant views about the selection process and dispute outcomes? How do these views affect participants’ assessment of the competence and neutrality of neutrals?
Participant perceptions of competence (including appropriateness of skill levels/training).
Do participants feel that neutrals were sufficiently competent? Do participants feel that neutrals were sufficiently well-trained? Do participants feel that more or less training was needed?
Participant perceptions of neutrality/objectivity.
Do participants feel that neutrals were sufficiently objective? Do participants feel that neutrals were fair in their handling of the dispute?
Relationship between perceptions of neutral competence and neutrality and dispute outcomes.
Do participants’ perceptions of the skills and/or objectivity of neutrals have any impact on the outcome of the dispute?
D. Other Specific Program Features
Every ADR program is unique. Those requesting and/or conducting an evaluation may want to consider examining other aspects of the ADR program. These unique features may relate to the design of a program, who was and continues to be involved in program design and administration, etc. Each is likely to have at least some impact on service delivery and the quality of the program and should be considered for inclusion in either a comprehensive or selected evaluation of the program, as appropriate.
This document was developed by the Evaluation Subgroup of the Dispute Systems Design Working Group, Administrative Conference of the United States (ACUS):
Deborah Lesser, Department of Health and Human Services (Co-Chair)
Nancy Miller, ACUS (Co-Chair)
Peter Bloch, Nuclear Regulatory Commission
Dick Cocozza, Western States Senate Coalition
Anne Larkin, Department of Defense (OSD/WHS)
Leslie Van der Wal, Internal Revenue Service