
Barriers: A Federal System Inhospitable to Faith-Based and Community Organizations
The Federal grants system is intended to put taxpayer dollars to the most effective use by enlisting the best nongovernmental groups to provide various social services, either through discretionary grants (also called competitive grants) awarded directly by Federal officials or through formula grants (including block grants) administered by State and local governments. The funds should go to the providers who deliver the most effective assistance and achieve the best civic outcomes.

The Federal Government, however, has little idea of the actual effect of the billions of social service dollars it spends directly or sends to State and local governments. The policies and practices of Federal grants programs too often make it difficult or impossible for faith-based and grassroots groups to gain support, even though they may have superior results in lifting lives and healing distressed neighborhoods.

Billions of Federal Dollars Spent, Little Evidence of Results
The Federal Government spends billions of dollars annually to assist needy families, individuals, and communities, often using the funds to support services provided by nongovernmental organizations. Although Federal program officials monitor nonprofit organizations, State and local governments, and other groups that receive the funds to ensure that they spend Federal money for designated purposes and without fraud, Federal officials have accumulated little evidence that the grants make a significant difference on the ground.

Routinized Granting Without Performance Monitoring
In some Federal discretionary programs, a small number of organizations perennially win large grants, even though there is little empirical evidence substantiating the success of their services. For example, in the Labor Department's Senior Community Service Employment Program, the same 11 large organizations have ranked among the top-10 grant recipients over the past five years. In addition, since 1984 the Department's Women's Bureau has annually awarded a sole-source grant to the same organization. Similarly, in HHS's Consolidated Health Centers program, the same 12 organizations appear on the lists of the 10 largest grantees over the past five fiscal years; in the Runaway and Homeless Youth Program, only 17 organizations appear on the top-10 lists over the same period.

These apparent Federal grant monopolists may rank head and shoulders above the rest in terms of quality and performance, but only rarely are Federal programs and grantees examined to determine whether taxpayer funds achieve the desired results. Large grantees are audited annually for their use of Government funds (if they receive more than $300,000 annually from all Federal sources), and some programs, such as Head Start and Community Services Block Grants, require some form of impact evaluation. Yet although the Federal Government can ensure that funds are not spent on unauthorized purposes, it cannot ensure that the expenditures have the intended results. According to the OMB survey, despite the billions of dollars the sampled programs have distributed in discretionary and formula grants over the past five years, fewer than one in five of the programs has received a General Accounting Office or agency Inspector General review to analyze actual performance and results. Moreover, virtually none of the programs has ever been subjected to a systematic evaluation of its performance that meets rigorous (or, in most cases, even rudimentary) evaluation research standards.

These Federal programs may be doing significant good; and the grantees that routinely win renewed support may be the best available. However, in the absence of meaningful performance reviews, agencies have no concrete basis for concluding so. Although routinized grant-making is administratively easier than competitive grant-making, such a grant-making process poses a high barrier to potential new entrants who, in fact, may be better at serving needy citizens and their neighborhoods.

Some critics of expanded Federal collaboration with faith-based and community-based organizations complain that there is little proof that these organizations are effective or have the capacity to manage large-scale social service programs. However, as the OMB survey ironically reveals, the Federal Government routinely awards billions in taxpayer support to organizations whose own efficacy and cost-effectiveness have not been validated by careful studies. This record indicates the need for an across-the-board emphasis on demonstrating the actual effectiveness of the programs that the Government funds.

The Impotence of GPRA in Determining Whether Programs Fly or Flop
Nearly a decade ago, Congress mandated a reform of Federal Government operations to produce on-the-ground changes. The 1993 Government Performance and Results Act (GPRA) requires Federal departments to prepare strategic plans and annual performance reports that look beyond mere gross measures of agency activity (e.g., grants awarded, hours of training given) to measures that examine actual changes in the circumstances of the communities and families toward whom the Government activity is directed. The goal of this reform is to identify which Federal programs actually make a meaningful difference.

To date, GPRA has had little positive impact on Government programs, and the reports from the Centers confirm this gloomy assessment with respect to the measurement of social service grants to faith-based organizations.

Despite GPRA and its promise of outcome-based grant-making, the Federal Government has made scant progress in showcasing program performance and managing for results. Too often, GPRA has devolved into a rote paperwork assignment that leverages little real change and influences few officials. GPRA's paramount goal, to herald high-performing programs and spotlight low-performing ones, has done little to make Federal programs work better in practice. Indeed, a recent GAO report examining GPRA compliance showed that, of the 28 Federal agencies surveyed, in only 7 did a majority of managers say they used performance information in setting program priorities, adopting new approaches, allocating resources, coordinating program efforts, or setting job expectations for employees. It gets worse: the GAO survey shows that results-based management under GPRA has actually decreased in recent years.8

Next: "Barriers to Faith-Based Organizations Seeking Federal Support"


8 Stephen Barr, "Survey of Supervisors Finds Little Movement Toward 'Managing for Results'," The Washington Post (June 10, 2001), p. C-2; GAO, "Managing for Results: Federal Managers' Views on Key Management Issues Vary Widely Across Agencies," May 2001 (GAO-01-592).

