
Independent Evaluation Information Fact Sheet

Developmental Disabilities Program Independent Evaluation Project (DDPIE)

September 30, 2009

What is the Developmental Disabilities Program Independent Evaluation (DDPIE) Project?

  • The DDPIE project is an independent evaluation of the impact of the state Developmental Disabilities (DD) Network programs on the lives of people with developmental disabilities and their families. The project is being funded by the Administration on Developmental Disabilities (ADD), located in the Administration for Children and Families within the U.S. Department of Health and Human Services.
  • The DDPIE project is divided into two phases. Phase I consists of two activities: (1) the development of tools for determining the impact of DD Network programs on individuals with developmental disabilities and their families; and (2) a pilot study to test the accuracy, feasibility, and utility of these tools. Phase II of the evaluation will involve a full-scale independent evaluation of the state DD Network using the tools designed in Phase I. Westat, a private research firm in Rockville, Maryland, has been hired by ADD to conduct Phase I of the independent evaluation.

Phase I is taking place between September 30, 2005 and September 29, 2008.

What is the purpose of the DDPIE Project?

  • ADD already has a number of vehicles for monitoring the work of the three programs that make up the DD Network. For example, ADD uses the annual review of Program Performance Reports (PPRs) to monitor individual grantees and their progress within each state. The Monitoring and Technical Assistance Review System (MTARS) process is a more in-depth monitoring activity that examines compliance with the Developmental Disabilities Assistance and Bill of Rights Act of 2000 (DD Act) and identifies technical assistance needs.
  • The independent evaluation is different. Its purpose is to demonstrate the effectiveness of the DD Network programs at the national level and the impact these programs have on the well-being of individuals with developmental disabilities and their families. The evaluation will go deeper into the programs to understand and report on the accomplishments of grantees at the program and collaborative levels. It will also provide ADD with information for judging the effectiveness of its network of programs and policies, and it will serve as a decision-making tool that promotes accountability to the public.

Why is the evaluation being conducted?

  • The independent evaluation is a response to accountability requirements for ADD. The need for accountability is identified in the DD Act, the Government Performance and Results Act (GPRA) of 1993, and the Program Assessment Rating Tool (PART), administered by the Office of Management and Budget.
  • In order to determine the extent to which key programs comprising the DD Network have been responsive to the purpose and principles of the DD Act, Section 104 of the DD Act requires the development of accountability indicators of progress. 
  • In addition to meeting the requirements of the DD Act, program evaluation has become essential for all federal agencies as a result of GPRA and the PART. GPRA focuses government decision-making and accountability on performance, results, and program quality rather than on counting activities (e.g., grants dispensed or inspections made) (www.whitehouse.gov/omb/mgmt-gpra/index-gpra). In addition to GPRA requirements, every federal agency receives a PART review, which is intended to examine all factors that affect and reflect program performance in terms of impact.
  • One aspect of the PART review seeks documentation that independent evaluations of sufficient scope and quality are conducted on a regular basis or as needed. The purpose of these evaluations is to support program improvements and to assess effectiveness and relevance. The PART guidelines also state that the most significant aspect of program effectiveness is impact: the outcomes that would not have occurred without the program intervention.
  • The DDPIE project meets the requirements of PART by providing an unbiased method of evaluating the effectiveness, including the impact, of the DD Network in serving the DD Act's target audience: people with developmental disabilities and their families.

What do we mean by "impact" in the context of this evaluation?

The term "impact" refers to the implied influence that a program or activity has on specific outcomes. In the context of this evaluation, we will consider impact to be the extent to which the DD Network programs influence outcomes that are directed toward people with developmental disabilities and their families. The evidence required to identify impact will be part of the contents of the tools developed in Phase I of the independent evaluation.

What is the basic design of the evaluation?

  • The evaluation takes a performance-based approach to measuring impact. It will be built on a framework of indicators (structure, process, outputs, and outcomes) together with complementary benchmarks and performance standards. Performance standards will serve as the points of reference against which measured indicators are evaluated. The key functions of each DD Network program will have their own set of benchmarks, indicators, and performance standards, along with performance levels that denote the degree to which each benchmark and performance standard is being met. These elements will be incorporated into measurement matrices, and data will be collected to measure each indicator a matrix contains. There will be a separate measurement matrix for each key function within each DD Network program, as well as for the collaboration among the three programs. Using matrices populated in this way, a quantitative measure of program impact can be calculated (a simple illustrative sketch follows this list).

  • Phase I of the evaluation includes both the design and testing of the measurement matrices and data collection tools to measure each indicator.
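
The fact sheet notes that a quantitative measure of impact can be calculated from the completed matrices, but it does not publish a formula. As a minimal sketch of the idea, assuming each indicator has already been rated on an ordered performance-level scale, the hypothetical Python below normalizes each rating and averages across the indicators of one key function; the scale, the names, and the simple averaging scheme are illustrative assumptions, not the project's actual method.

    # Illustrative sketch only: DDPIE does not publish a scoring formula.
    # The three-level scale and simple averaging below are assumptions.
    LEVELS = ["not developed", "limited development", "adequate development"]

    def normalized(level):
        """Convert an ordered performance level to a score between 0 and 1."""
        return LEVELS.index(level) / (len(LEVELS) - 1)

    def key_function_impact(indicator_ratings):
        """Average the normalized ratings of every indicator measured for
        one key function's measurement matrix."""
        scores = [normalized(level) for level in indicator_ratings.values()]
        return sum(scores) / len(scores)

    # Hypothetical ratings for two indicators of one DD Council key function.
    ratings = {
        "policymakers informed": "adequate development",
        "training satisfaction": "limited development",
    }
    print(key_function_impact(ratings))  # 0.75
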
What is a key function?

Although each of the three DD Network programs has the same long-term goals that emanate from the DD Act, each program functions differently to achieve those goals. With assistance from program Working Groups, Westat has characterized the key functions of each DD Network program. Measurement matrices are being organized around the key functions in each program.

What are indicators?

An indicator is a measure that is examined to determine whether action needs to be taken. In the context of public health, for example, an indicator might be an age-specific mortality rate for a particular disease. An example of an indicator for the DD Council program might be the extent to which a new policy is known and understood.

There are four types of indicators—structure, process, outputs, and outcomes. These four types of indicators comprise the framework of indicators. Each type is described below.

  • Structures (or inputs) are those resources that are needed to set processes in motion and keep them running. Structural indicators are used to assess the DD Network's capability to achieve goals of the DD Act through adequate and appropriate settings and infrastructures. They include, but are not limited to, staffing, financial resources, information systems, facilities and equipment, and governance and administrative structures.

  • Processes are the unique methods, practices, or procedures that drive the creation of outputs and the ultimate success of the key function. Process indicators (e.g., conducting a comprehensive review) are used to assess the content and quality of the DD Network's activities, procedures, methods, and interventions in support of practices aligned with the purpose and principles of the DD Act.

  • Outputs, often referred to as "products," are the "units" produced by processes supported by given inputs. Outputs will vary according to program and key function. The outputs for P&A individual advocacy, for example, would differ from the outputs for UCEDD dissemination. With assistance from the Working Groups, the outputs for each key function of each DD Network program have been identified. Output indicators are used to assess immediate results of the DD Network's policies, procedures, and services that can lead to achieving the purpose and principles of the DD Act.

  • Outcomes are the intended results of creating certain outputs or products. Short-term outcomes relate directly to program outputs. For example, if a DD Council output is "informed policymakers," then the short-term outcome might be changes to policies for delivering services to individuals with developmental disabilities and their families. Short-term outcome indicators are what will be measured to determine the extent to which expected outcomes are being met.

In the context of this evaluation, long-term outcomes represent the overarching goals of the DD Act and are the same for each DD Network program. Examples of long-term outcomes might consist of the following:

  • Informed choices

  • Employment

  • Housing

  • Health care

  • Transportation

  • Integrated community living

  • Freedom from abuse, neglect, financial and sexual exploitation

  • Freedom from violation of legal and human rights

How are you defining performance standards for this evaluation?

  • Evaluation in its most basic form is a simple comparison of "what is" to an expectation of "what should be." The performance standard is what should be, and through the examination of new or existing data on what is, the evaluation process will determine the extent to which the program meets each standard for each program function.
  • Standards will be designed around the characteristics of effective DD Network programs. Examples of these characteristics might include:
  1. Data-driven program goals (i.e., program goals developed partially through the use of state and program statistics)
  2. Cultural competence
  3. Sound governance
  4. Rigorous representation of clients
  5. Effectively executed key functions
  6. Efficient programs

What are performance levels?

Performance levels denote the degree to which a standard is being met. There will be three or four performance levels (to be determined) for each performance standard (e.g., not developed, limited development, and adequate development). For example, if a standard is that, among participants in Council-supported educational and training programs, at least 80% are satisfied or greatly satisfied with the clarity of the materials used, the performance levels could be established as follows (a small illustrative sketch follows the list):

  • Not developed—Among those participants in Council-supported education and training programs, 60% or fewer are satisfied or greatly satisfied with the clarity of the materials used.
  • Limited development—Among those participants in Council-supported education and training programs, 61–79% are satisfied or greatly satisfied with the clarity of the materials used.
  • Adequate development—Among those participants in Council-supported education and training programs, 80% or more are satisfied or greatly satisfied with the clarity of the materials used.
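
Continuing the satisfaction example above, the cutoffs translate directly into a simple classification rule. The short Python sketch below hard-codes the example's boundaries (60% or fewer, 61-79%, 80% or more); the actual number of levels and their cutoffs are still to be determined by the project, so treat the function and its thresholds as illustrative only.

    def satisfaction_level(pct_satisfied):
        """Classify the share of satisfied participants against the example
        cutoffs from this fact sheet (illustrative thresholds only)."""
        if pct_satisfied >= 80:
            return "adequate development"  # 80% or more satisfied
        if pct_satisfied > 60:
            return "limited development"   # 61-79% satisfied
        return "not developed"             # 60% or fewer satisfied

    print(satisfaction_level(72))  # limited development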

What are measurement matrices?

Measurement matrices are the tools in the independent evaluation that will be used to organize each program's key functions, the framework of indicators within each key function, benchmarks, performance standards that correspond to each indicator, and performance levels for each performance standard.
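
As an illustration of how one such matrix might be organized, the hypothetical Python structure below ties a single key function to an indicator, its benchmark, its performance standard, and the performance levels for that standard. The real matrices are Phase I deliverables being developed by Westat, so every field name and entry here is an assumption.

    # Hypothetical layout for one measurement matrix; the actual DDPIE
    # matrices are still under development and may differ substantially.
    matrix = {
        "program": "DD Council",
        "key_function": "educating policymakers",
        "indicators": [
            {
                "type": "outcome",  # structure, process, output, or outcome
                "indicator": "participant satisfaction with training materials",
                "benchmark": "satisfaction is surveyed for every training program",
                "performance_standard": "at least 80% satisfied or greatly satisfied",
                "performance_levels": {
                    "not developed": "60% or fewer satisfied",
                    "limited development": "61-79% satisfied",
                    "adequate development": "80% or more satisfied",
                },
            },
        ],
    }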

Where will the contents of the measurement matrices come from?

  • With assistance first from Working Groups, then from all DD Network programs and Validation Panels, Westat will develop benchmarks, indicators, and performance standards during Phase I of the DDPIE project. Working Groups have assisted Westat in characterizing the key functions of each program. Both Working Groups and Validation Panels represent stakeholders from the three network programs. All Working Groups were formed in consultation with ADD and include individuals from DD Network programs around the United States. Validation Panels will be formed in consultation with the Advisory Panel and ADD.
  • ADD also instructed Westat to set up an Advisory Panel that would meet throughout Phase I of the project and provide advice and feedback on the process for developing the measurement matrices, the matrices themselves, data collection, and validation.

What is the composition of the Advisory Panel?

The Advisory Panel consists of 11 members. Members include staff and Council members from the DD Network programs, advocates, ADD staff, and a staff member from the Center for Mental Health Services (SAMHSA) in the U.S. Department of Health and Human Services, as well as others with evaluation research backgrounds and expertise in issues surrounding the developmental disability community (e.g., staff from the Federation for Children with Special Needs, the National Association for the Education of Young Children, and the American Association of People with Disabilities).

How were the Working Groups identified?

  • Development of the measurement matrices for DD Network programs requires a deep understanding of the programs themselves. Thus, the process for developing these matrices requires the efforts of Working Groups, consisting of stakeholders who understand the programs well.
  • There are four Working Groups: one for each DD Network program, and one to assist in developing measurement matrices that reflect the collaboration of the three programs. Working Group members were identified by ADD using predetermined selection criteria, including level of funding allotment (for the P&A and DD Council programs); region of the country; interest in issues related to measurement of program impact; and availability. It was also important to include members from programs both inside and outside state government (for the P&A program), from a range of designated state agencies (for the DD Council program), and from a range of university departments or schools (for the UCEDD program). ADD and Westat also wanted to ensure that Working Group members had the necessary expertise about the programs to assist in developing relevant indicators and standards; therefore, Working Group members are all senior staff or DD Council members (e.g., Executive Directors, Directors, an Associate Director, and a Chair of a DD Council).

How do the Working Groups operate?

Although the meeting schedule for each Working Group has varied, the nature of all Working Group activities has been the same. Working Group activities consisted of telephone and in-person meetings. To reduce the amount of travel required by Working Group members, in-person meetings have taken place at the time and place of each program's national meetings.

What process did the Working Groups follow?

Westat organized a series of meetings to take place with each of the Working Groups. Materials were developed to guide productive discussion at each of those meetings. Working Group meetings consisted of discussion of key concepts and issues, including the following:

  • What are the key functions of the P&A/UCEDD/DD Council?
  • What are the expected outcomes of each key function?
  • What are the characteristics of a successful P&A/UCEDD/DD Council?
  • What are the characteristics of successful collaboration?
  • By what standards does the P&A/UCEDD/DD Council measure its own success?
  • How does one characterize and identify the impact of each DD Network program on:
    • The ability of individuals with developmental disabilities to make choices and exert control over the services, supports, and assistance they use;
    • The ability of individuals with developmental disabilities to access services, supports, and assistance in a manner that ensures that such an individual is free from abuse, neglect, sexual and financial exploitation, violation of legal and human rights, and the inappropriate use of restraints and seclusion; and
    • The extent to which DD Network programs collaborate with each other to achieve the purpose and principles of the DD Act?
  • What are the data requirements and measurement tools necessary to measure the indicators that are developed? What data already exist and can be used for this evaluation?
  • What data collection methodologies can be developed to ensure reliability and validity of data to measure the level at which DD Network programs attain standards, while at the same time recognize the importance of reducing the burden of data collection?

Answers to these and many other questions greatly assist in the development of reliable, valid, feasible, and useful benchmarks, indicators, performance standards, performance levels, and measurement matrices that can be shared with DD Network stakeholders and Validation Panels for further consideration and validation.

What are Validation Panels?

In addition to the Advisory Panel and Working Groups, Westat will establish four Validation Panels: one for each DD Network program and one for collaboration. The role of the Validation Panels is to review and comment on the contents of the measurement matrices. Panels will be composed of DD Network program staff and DD Council members, ADD staff, and others with evaluation research backgrounds and expertise in issues facing the developmental disabilities community.

Where does collaboration fit in?

The DD Act specifies that accountability measures must include indicators on collaboration among the three DD Network programs (Section 104(a)(3)(D)(iii)). ADD specifically requested that the independent evaluation include separate matrices addressing collaboration among the three DD Network programs. Thus, collaboration measurement matrices are being developed with assistance from a collaboration Working Group and will be validated by a collaboration Validation Panel.

