Thanks to ECDG’s Advisory Group member, Scott Bayley, for sharing this thoughtful paper on leadership’s critical role in support of evaluation. Senior leaders have many opportunities to demonstrate their support for EVALUATION (using evidence to inform decision making at all stages of the program management cycle to drive continuous improvement):
- By making statements advocating the benefits of EVALUATION.
- By consistently modelling a commitment to EVALUATION in their own work: actively seeking performance feedback and using it to drive continuous improvement. Leaders need to give performance matters their constant, personal attention at the operational level.
- Through their support for organizational EVALUATION policies, systems, guidelines, tools, and actual practices/culture.
- By championing an explicit EVALUATION capacity building program that is adequately resourced, implemented, and monitored/evaluated.
- By initiating engagement with key external stakeholders and building a consensus about performance expectations.
- By publicly participating in EVALUATION training and coaching activities.
- By taking EVALUATION considerations into account when making resource allocation decisions.
- By supporting the creation of operational communities of practice.
- Through the questions that senior leaders ask of staff and colleagues at meetings. Do conversations focus on undertaking activities/disbursing funds, or on achieving/improving results?
- Through their expectations of their own staff. By emphasizing learning and improvement,
Thanks to ECDG’s Advisory Group member, Scott Bayley, for this discussion-provoking blog comparing two primary, though often incompatible, evaluation purposes: accountability and learning. Evaluation can be defined as the systematic empirical examination of a program’s design, implementation and impacts with a view to judging the program’s merit, worth and significance. Evaluation reduces uncertainty for stakeholders, aids decision making, and serves a political function. Evaluations are commonly undertaken for the following reasons: policy development; improving the administration/management of a program; demonstrating accountability; and facilitating stakeholder participation in decision making. Evaluation studies are intended to add value for program stakeholders. If the role of the private sector is to generate profits in the context of changing market forces, the role of the public sector is to create value in the context of changing political forces. Guiding questions for an evaluation department within an aid agency include: Who are our clients? What do they want/need? How can we create value for them? How will we monitor our results? How has our program responded to the lessons being learned? Ever since the evaluation of aid programs first began in the 1960s, there has been tension and controversy over using evaluation
A little over a decade ago, talk was swirling around the greater nonprofit community, particularly from donors, regarding the need to develop institutional evaluation capacity. However, there seemed to be no clear vision of the process for developing that capacity. In 2004, ECDG was formed and an ECD Toolkit was created to provide substantive guidance on what organizations could do. More recently, a call for national evaluation capacity development (NECD) has become more pronounced. It is emanating from national governments and their multi- and bi-lateral partners and donors. When discussing national evaluation capacity, “national” is frequently treated as synonymous with governmental evaluation capacity. Members of parliament are realizing the importance of evaluation; results-based management practices are becoming more established along with goal-based evaluations; and staff in ministries are increasingly being trained in conducting evaluation. These are important initiatives, but they are only part of NECD. NECD is frequently initiated as an external endeavor: the UN, other multi- and bi-lateral donors, and external consultants act as the impetus and provide support for strengthening evaluation within governments. Beyond government agencies, the synergy from cross-sectoral evaluation capacity spanning the public sector, private sector and civil society will strengthen
The Sustainable Development Goals (SDGs), otherwise known as the 2030 Agenda, are a UN initiative of 17 goals set for all nations, not just poorer ones, to aid in their efforts to end poverty, protect the planet, and ensure prosperity for all. Targets have been established for each goal, with an end date of 2030 to try to achieve them. Arbitrary? Of course; development, by its very nature, is an ongoing process. But focused, development-oriented agendas with set targets and indicators can drive policies and programming toward measurable improvement. The policies and programming that countries put into place to address the SDGs should be monitored and will need to be evaluated to see whether they are playing out as first envisioned. The SDG initiative, and its predecessor, the Millennium Development Goals (MDGs), call for nations to take charge of their own development and goal setting to achieve their development aims. This is a big deal. In 2011, ECDG sponsored an international workshop on evaluation capacity development. Two recently commissioned reports from SE Asia were presented at that time. Their findings included the following: “Evaluation practices in the studied countries were mostly donor driven. In most cases,
By popular demand, the Evaluation Capacity Development Group is adapting its New Guide, published in 2013, to the task of National Evaluation Capacity Development. The publication is scheduled for release in March 2017. During the literature review process, it was found that the 2030 Agenda seems to commit the United Nations System and other multilateral institutions to actively supporting follow-up and review processes by strengthening national data systems and evaluation programmes, particularly in African countries, least developed countries, small island developing States, landlocked developing countries and middle-income countries. The harsh reality is that the Evaluation Offices of most UN agencies, programmes and funds do not contemplate National Evaluation Capacity Development in their mandates. Among those that do, resources are often a constraint. ECDG sees two alternatives: (1) the United Nations Evaluation Group (UNEG) should begin right away to manage expectations regarding National Evaluation Capacity Development; or (2) UNEG should support agencies, programmes and funds in re-examining their mandates toward National Evaluation Capacity Development.
United Nations. Transforming our World: The 2030 Agenda for Sustainable Development, A/RES/70/1, para. 74.
United Nations Evaluation Group. Evaluation in the SDG era: lessons, challenges and opportunities for UNEG, p. 61.
A few days ago, I noticed a conference promoted on XCeval. The theme caught my attention: “Measuring what matters in a ‘post-truth’ society.” A post-truth society. Fake news. We are living in strange and disturbing times. This reinforces the urgency for all of us – evaluators; policy and decision-makers; consumers; and really, everyone on the planet – to apply critical thinking skills to scrutinize and interpret information. Evaluative thinking helps us sift information and make informed decisions, and triangulation is particularly useful here: by applying more than one method or approach to gather information, whether qualitative or quantitative, we can better corroborate what we are questioning. A more complete and rich picture of the situation emerges when we take the time to examine multiple sources. In tandem with questioning the information coming at us, we should be introspective, thoughtfully considering the assumptions we hold and the biases we have developed, which shape our world view and guide our actions. Developing our capacity for evaluative thinking enriches our understanding of the world around us. It can be applied to staff, management and governance of organizations. It also applies to citizens and those who govern them. May we all put
Scott Bayley, member of ECDG’s Advisory Group, shares a summary paper he wrote recently for staff in DFAT’s aid program on types of evaluation use. Early studies on the impact of evaluations were based on directly observable effects, such as a policy change or the introduction of a new program initiative. This form of utilisation is defined as instrumental use and refers to situations where an evaluation directly affects decision-making and influences changes in the program. Evidence for this type of utilisation involves decisions and actions that directly arise from the evaluation, including the implementation of recommendations. The second type is conceptual use, which is more indirect and relates to ‘enlightenment’, or generating knowledge and understanding of a given area. Conceptual use refers to “the use of evaluations to influence thinking about issues in a general way.” Conceptual use occurs when an evaluation influences the way in which stakeholders think about a program, without any immediate new decisions being made about the program. Over time, and given changes to the contextual and political circumstances surrounding the program, conceptual impacts can lead to instrumental impacts and hence significant program changes. Political use involves the legitimising of decisions already made about a program.
Back in February, ECDG’s website hit a glitch: a WordPress update overwrote the site’s customized theme. The result was a visual nightmare, with links randomly aligned on the right side of the homepage. We were able to put the pieces back together, and the process allowed us to step back and take a look at layout and content. We realized that our latest publication, ECDG’s New Guide to Evaluation Capacity Development, and the supplementary training materials were available for download, while our other publications, the ECDG Toolkit and Job Design, were located in the ECD knowledge base. They needed to be placed together. Why? These combined documents underpin ECDG’s philosophy of evaluation capacity. Each relates to a unique viewpoint – individual, organizational, or a more complex, systemic context. The Evaluation Capacity Development Toolkit focuses on creating organizational structures within an institution to support evaluation. These include organizational design, policies, budget, and evaluative processes, among others. The building blocks of organizational structures are the jobs that people perform. The Job Design document examines how organizations can include evaluation in the design of anyone’s job. Integrating evaluation into job descriptions contributes to meeting important individual needs while simultaneously
This report, by Plan International UK, provides valuable insight into a youth ECD initiative – child-led evaluations. “Children have a right to participate in development initiatives that affect them, as recognised in the CRC. This can foster their empowerment and strengthen their sense of agency and entitlement. It can also strengthen our understanding of local realities, as child evaluators (CEs) can obtain information that may not be easily accessed by adults working for the programme or consultants. This includes direct understanding of the effectiveness of our programme and the positive and negative changes it is bringing about in the lives of boys and girls. The ability of children to meaningfully participate, however, depends on their evolving capacity and the enabling processes put in place to ensure their genuine participation.”
Kenya – Full evaluation: http://goo.gl/0HQ1Fq – Executive summary: http://goo.gl/a0Ekvu
Related links:
Cambodia – Full evaluation: http://goo.gl/2oC42P – Executive summary: http://goo.gl/2fwWCa
Zimbabwe – Full evaluation: http://goo.gl/SrKA49 – Executive summary: http://goo.gl/mQdz32
Editorial comment: Thanks to ECDG’s Board Chair, Craig Russon, for sharing his thoughts on a results framework for the Sustainable Development Goals. I recently attended the UNDP- and IDEAS-sponsored fourth International Conference on National Evaluation Capacities, 28-30 October 2015, in Bangkok, Thailand. One of the main themes of the conference was the 2030 Agenda for Sustainable Development and the Sustainable Development Goals (SDGs). I am sure you have heard by now that the SDGs are an intergovernmental set of aspirations – 17 goals designed to end poverty, fight inequality and injustice, and tackle climate change by 2030. ECDG strongly supports the SDGs. We like the idea of having lofty aspirations that will help to create the world in which we all want to live. My understanding is that 169 targets, covering a broad range of sustainable development issues, have been linked to the SDGs, and that indicators are being developed to measure progress towards the targets. The indicators will be ready by March 2016, and there could be hundreds of them. This concerns me somewhat, because it runs against the way I train people on principles of Results-based Management (RBM).