Welcome to week #7 of the ECDG Blog Series! Following our ECDG Global Scanning Project conducted last year and presented at the American Evaluation Association Conference in Washington DC in October 2013 (http://www.ecdg.net/2013/11/19/preliminary-results-of-ecd-global-scan/), ECDG decided to develop a weekly blog series on some of the most interesting ECD themes that emerged in the course of all our interviews with ECD practitioners around the world. Our blog this week will cover a fundamental topic in ECD practice:
ECD in the Public Sector
The Parliamentarians’ Evaluation Seminar
As ECDG is a proud sponsor of the upcoming International Parliamentarians’ Evaluation Seminar, we thought it appropriate to discuss ECD in the public sector in this week’s blog.
The International Parliamentarians’ Evaluation Seminar, hosted by the Malaysian Evaluation Society (MES), will be held in conjunction with the 2014 MES International Evaluation Conference. The event will take place on 28 March and is organized in close collaboration with the South Asia Parliamentarians’ Forum on Development Evaluation. http://mes.org.my/mesconf2014
As organizer Aru Rasappan of MES noted, this initiative “is a major milestone in the evaluation field where key Parliamentarians from developing countries in the Asia-Pacific, Africa, and Middle East regions will be meeting in a one-day seminar to discuss various aspects of RBM and Evaluation with a focus on the challenges, utility, prospects, and way forward for the adoption, implementation, and utilization of RBM and evaluation in their countries.”
Through the networking and discussions that will take place, the Seminar provides an opportunity for the continued development of both national and regional evaluation capacity in the public sector.
ECD in the Public Sector
Developing evaluation capacity in the public sector has broad implications. We know that with limited information, high-level resource-allocation policy decisions will most likely be based on partial knowledge, past experience, and the influence of special interests.
Jurisdictional and institutional oversight to monitor and evaluate public investment and service delivery increases transparency and can add a measure of control against fraud and corruption by those in power. This requires strong political will and appropriate infrastructure, supported by a well-designed evaluation system.
In the ECD Global Scan, we encountered cases in which public services were decentralized and public demand for these services was thereby strengthened. Local pressure created greater accountability for government to follow through on its commitments and carry out its policies. This was more than mere expenditure accountability; numbers can be, and often are, deceiving. Monitoring and evaluation policies that are integrated into government frameworks can increase the transparency of service delivery and lead to improved services for a nation’s citizens. Let’s take a look at one of these cases.
M&E in the Government of Papua New Guinea
An ECDG Advisory Group member brought to our attention a report on monitoring and evaluation (M&E) in the public sector in Papua New Guinea. This was a complex, decades-long process involving a number of reforms and policy changes. We will mention some highlights and refer you to the comprehensive report (below) as we cannot do justice to this story within the confines of a blog.
As in many countries, Papua New Guinea (PNG) institutions have had a poor record of sharing and managing information – across national agencies, vertically with subnational agencies, and between the Government and other nongovernment partners. There were dual issues: improving the quality of the data collected (accurate, objective information) and putting procedures in place for sharing it. The lack of district information was found to result less from a lack of will than from limited capacity, instruments, and methodologies for gathering information. A District Information Management System (DIMS) was established to identify infrastructure, human resource, and fiscal gaps so that funding could be better targeted.
In PNG, participatory processes of data collection across agencies were put in place to enhance cross-validation and the accuracy of information. The system was designed to play the dual role of enhancing intergovernmental coordination as well as accountability. Having a system at the provincial level to collect, consolidate, and verify performance data should improve planning and policy-making throughout the system.
The Government of Papua New Guinea, with the help of its development partners, established a ‘fit-for-use’ M&E system to meet its own particular needs. An information management infrastructure was created to support planning and M&E frameworks with both statutory and policy mandates. For example, the District Information Management System (DIMS) was created to help target funding so that districts could develop their infrastructure and enhance their financial and program management capacity. “The availability of DIMS data is now strengthening district and [local level government] LLG planning and decision making…and the way to link LLG needs to the national agenda of improving services, accountability and transparency.”
Whole-of-Government Capacity Building in Malaysia
Since the Malaysian Evaluation Society (MES) will be hosting the Parliamentarians’ Evaluation Seminar, we thought we would examine ECD within Malaysia’s public sector. MES has played a key role in providing training for government officials and helping to strengthen evaluation capacity within the national government. The story of this effective collaborative tripartite partnership between the public sector, civil society, and the private sector can be found in the IOCE case study: http://www.ioce.net/en/PDFs/national/2012/Malaysia_MES_CaseStudy.pdf
In Malaysia, unlike in other developing countries, the evaluation agenda was part of the government development agenda and budgeting system all the way back to 1969 when the government first introduced the Program Performance Budgeting System (PPBS) as a development management initiative.
Forty years later, the evaluation agenda continued to be strengthened and integrated into the main performance management agenda of government through both the medium term development plan and through the integrated budgetary approach. In 2009, the government adopted the Integrated Results Based Management (IRBM) system. The Ministry of Finance simultaneously adopted an outcome-based budgeting (OBB) approach.
The Ministry of Finance has worked in close collaboration with MES and a private evaluation research firm in developing a whole-of-government M&E system. The Ministry maintained a supportive policy environment, MES provided the evaluation institutional and technical support, and the private sector contributed the technical design, development, testing, and capacity building support for many of the tools and techniques used for evaluation promotion within the government. “In particular, the partnership was very fruitful and basically resulted in many new approaches and models to evaluation that the public sector would otherwise have perhaps taken years to accomplish.” This tripartite partnership has jointly organized bi-annual international evaluation conferences as well as public forums that have brought evaluation experiences and international examples to those in public service.
Unintended consequences of performance measurement in the public sector
In an interesting article, “The Performance Paradox in the Public Sector,” Van Thiel and Leeuw sound a cautionary note regarding unintended consequences that can arise from the use of performance measurements. The authors use the term performance paradox to refer to “a weak correlation between performance indicators and performance itself. This phenomenon is caused by the tendency of performance indicators to run down over time. They lose their value as measurements of performance and can no longer discriminate between good and bad performers. As a result, the relationship between actual and reported performance declines…Not only can it take on many different forms, it can also be the unintended result of a number of variables, such as government demands, the type of task to be carried out, the vagueness or contradictory nature of policy objectives, and the capabilities of the policy-implementing organization.”
Citing unintended consequences of monitoring and investigating (auditing) performance, the authors state that “the use of performance indicators can inhibit innovation and lead to ossification, that is, organizational paralysis. Another effect is referred to as tunnel vision, which ‘can be defined as an emphasis on phenomena that are quantified in the performance measurement scheme at the expense of unquantified aspects of performance.’ Other unintended side effects are suboptimization, which is defined as ‘narrow local objectives by managers, at the expense of the objectives of the organization as a whole’ and measure fixation, ‘an emphasis on [single] measures of success rather than [on] the underlying objective’”.
Among many strategies to mitigate these unintended consequences, the authors mention leaving room for multiple interpretations of policy goals. “Funders, purchasers, providers, and consumers have different interests in policy implementation, leading to different emphases in performance assessment…In the public sector, consumers participate in the service delivery process, affecting output and outcome. Moreover, most products are intangible. Performance indicators should therefore reflect quality and reliability rather than ‘hard’ product attributes. Public services are not only about efficiency and effectiveness but also about justice, fairness, equity, and accountability.”
A final thought
Taking a systemic approach to institutionalize evaluation throughout the public sector is fundamental to ensuring its sustainability. Sustainability will be the theme for our next and final blog in the ECD Weekly Blog Series.
Strategies for Institutionalizing Evaluation in the Public Sector
1. Though responsibilities for policy implementation may be devolved to subnational levels, a strong and well-organized oversight structure must be in place to coordinate and monitor the execution of these policies.
2. A national M&E framework must be aligned with the national planning system.
3. M&E resources at the national level should keep pace with the increased demand to perform this function. Be conscious of equity (fair allocation of resources). Due to demographics, demand may not be equal among all regions and sub-regions.
4. Put in place incentives as well as requirements (carrots and sticks) to share data and information across agencies. Eliminate cumbersome protocols. Standardized and streamlined procedures will enhance collaboration and coordination.
5. Opportunities for collaboration should be available across ministries, agencies and departments as well as among evaluation staff in different levels of responsibility. In-house professional development activities should be provided. Participation in professional evaluation events (from national evaluation association conferences to local evaluation network gatherings) should be encouraged.
6. Look for opportunities to build synergy in evaluation capacity among the public sector, the private sector, and civil society – for example, collaboration among government, national evaluation societies, universities, research institutes, and private firms.
Cuong, M.C.M. and Fargher, M.J. (2007). Evaluation Capacity Development in Vietnam.
An insightful report on ECD experiences and lessons learned in Vietnam, prepared for the 6th meeting of the DAC Network on Development Evaluation, 27–28 June 2007. http://www.oecd.org/dac/evaluation/dcdndep/38717773.pdf
Gachugu, M., Kapa, J., Kenny, L. and Miranda, D. (2013). Public Sector M&E in PNG: Development and Challenges. Retrieved Feb 15, 2014 from http://www.ecdg.net/2013/10/10/public-sector-me-in-png-development-and-challenges/
Parliamentarians Forum on Development Evaluation in South Asia
Facebook page. “This is a group of parliamentarians working on establishment of National Evaluation Policies in all South Asian countries.” https://www.facebook.com/pages/Parliamentarians-Forum-on-Development-Evaluation-in-South-Asia/310884062378855
Rasappan, A. (2012). Institutionalising Evaluation on a Whole-of-Government Basis: Role of an Evaluation Society – Malaysian Evaluation Society. Case Study of the Malaysian Evaluation Society (MES). http://www.ioce.net/en/PDFs/national/2012/Malaysia_MES_CaseStudy.pdf
Van Thiel, S. and Leeuw, F.L. (2002). The performance paradox in the public sector. Public Performance & Management Review, 25(3). Retrieved Feb 24, 2014.
ECDG Note: As we remain committed to promoting ECD globally, we look forward to learning more about your ECD experiences. Therefore, please do not hesitate to contact us at the following e-mail addresses if you have any questions or comments:
Karen Russon (ECDG President): email@example.com
Michele Tarsilla (ECDG Vice-President): firstname.lastname@example.org