President George W. Bush meets with Dan Bartlett, center, and Josh Bolten in the Oval Office Jan. 9, 2003.  White House photo by Eric Draper.

Performance Improvement

Moving Forward With the Program Assessment Rating Tool

Well, all is now revealed. The FY 2004 Budget published ratings and detailed assessments of 234 federal programs -- approximately one-fifth of the entire federal government, representing $494 billion in spending. It is the most sweeping systematic assessment of federal programs in history. (The detailed worksheets we used to complete the Program Assessment Rating Tool - the PART - are available online at /omb/budget/fy2004/pma.html).

While the ratings included in last year's budget went largely unnoticed outside the government, the magnitude of this year's effort, and its influence on FY 2004 budget decisions, make it quite clear that the Administration is serious about achieving results and being accountable to taxpayers for their dollars. The wealth of information the ratings generated informed a number of budget decisions.

For example, the PART shows that the Economic Development Administration is meeting or exceeding its targets for job creation and private sector investment; its budget was raised by $16 million, to $364 million. Likewise, although the PART showed that the Patent and Trademark Office's performance was merely adequate, its budget was increased by $70 million, to $1.26 billion, for the express purpose of reducing error rates and waiting times for patent applications.

Some programs the PART rated ineffective saw their funding cut. For example, funding for HHS's Health Professions program was cut by $13 million, and those funds were redirected to activities more capable of getting nurses to underserved facilities.

But these clear examples are not the rule. The jury is still out on how we did. But the PART is the beginning of our attempt to provide an honest, unbiased view of how well federal programs are performing and whether managers are accountable for performance.

Next year we will reassess the programs we examined this year, along with another 20% of federal programs, with the goal of covering 100% of federal programs within five years. Our analysis will get better. It will give us not only better information about each program's performance, but also a sense of how programs are performing over time. We will define the right measures of performance for programs now rated "results not demonstrated."

With better performance information, we can make more budget decisions based on the PART. We will get closer to using performance information to end or reform programs that either cannot demonstrate positive results or are clearly failing and put resources in programs that can prove they are successful.

To help us improve the PART, we've asked anyone and everyone for suggestions on how to make the questionnaire and the ratings of specific programs more useful. We will spend a considerable amount of time working to improve program performance measures. Some of the other issues we need to address for the next cycle include:
  • Increasing consistency in how programs are judged against the rating criteria
  • Defining "adequate" performance measures. OMB will be hosting training on how to select good performance measures, but what else can be done to improve the development of consistently better measures?
  • Minimizing subjectivity in the application of the PART
  • Measuring progress toward results
  • Institutionalizing program ratings
  • Considering programs in a broader context
  • Increasing the use of rating information

We welcome your suggestions. Comments can be sent to

We've done something unprecedented this year, but it is only a beachhead. While others test our past work, we will be moving ahead. Think about which additional programs could benefit most from being assessed this coming year. We will need to decide on this group quickly so that the assessments are underway by summer.

Yours truly,

Marcus Peacock
