In ECDG’s opinion, the UN faces a dilemma in monitoring and evaluating the Sustainable Development Goals. On the one hand, the rhetoric surrounding the SDGs is that “business as usual” for the UN is no longer an option. On the other hand, the UN appears to be hanging on to its strong commitment to Results-Based Management (RBM). For ECDG, results-based management IS business as usual. The UN seems to be trying to finesse the dilemma by making statements about the need for complexity and systems thinking . . . in an RBM environment. In other words, the UN wants to have its cake and eat it, too. In ECDG’s view, RBM may be inimical to complexity and systems thinking. Why? The professional literature does not treat complexity as a monolithic concept; there are different types of complexity. Detail complexity is the sort in which there are many variables. This type of complexity is well suited to measuring the 17 goals, 169 targets and 244 indicators with approaches that are consistent with RBM. The other type is dynamic complexity, in which cause and effect are distant in time and space, so the effects of an intervention are not obvious.
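To make the distinction concrete, here is a minimal sketch in Python. The goal, target and indicator counts come from the text above; the delay model and all figures are illustrative assumptions, not an official SDG model. Detail complexity is many enumerable variables; dynamic complexity is a delayed link between cause and effect that a snapshot measurement misses.

```python
# Detail complexity: many variables, each of which can be enumerated
# and measured on its own; the kind of bookkeeping RBM handles well.
# (Toy structure, not an official SDG data model.)
sdg_framework = {"goals": 17, "targets": 169, "indicators": 244}
print(f"Detail complexity: {sum(sdg_framework.values())} enumerable variables")

# Dynamic complexity: few variables, but cause and effect are separated
# in time. An intervention's cost shows up at once, while its benefit
# enters a delay line and arrives years later (DELAY is assumed).
DELAY = 5
capacity, budget = 50.0, 100.0
pipeline = [0.0] * DELAY
for year in range(10):
    investment = 10.0
    budget -= investment               # cost is immediate
    pipeline.append(0.8 * investment)  # benefit is deferred
    capacity += pipeline.pop(0)        # ...and lands DELAY years later
    print(f"year {year}: budget={budget:6.1f}  capacity={capacity:6.1f}")
```

In this toy run, a goal-based measurement taken in year 3 would record the full cost of the intervention and none of its benefit; that is the kind of blind spot the dynamic-complexity argument points to.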
The use of nice-sounding jargon often serves the purpose – inadvertently, one would hope – of obfuscating the real issue. The UN is far from immune from this practice. The Advisory Note on Follow-up and Review of the SDGs, which advises Member States to “optimize their national statistics systems”, is a good case in point. Optimize for what purpose, the reader may ask. Presumably for SDG reporting, given the context of the advisory note. If so, the advice is both wrong and harmful. With due respect to the importance of the SDGs, national statistical systems have a much broader and longer-term objective. Their task is to meet the individual country’s need for quantitative information in a wide range of areas, including demographic, economic and social development, employment and the environment. Achieving this task requires considerable institutional capacity, a long-term, systematic approach to the collection and generation of statistics and, not least, continuity in what statistics are collected, how and when they are collected, and in the terminology, definitions and classifications used. Most developing countries are still in the process of building up this capacity and continuity. One of the main obstacles they face is the habit of international
In the Advance Unedited Version of the Secretary-General’s Report to the Economic and Social Council, paragraph 34 states: “The 2030 Agenda was deliberately designed to be comprehensive and integrated. Together with the complexity of the challenges at the country level, it demands UN development system entities to work closely together and pool expertise. It also requires a new and more integrated approach to capacity building of national institutions – private and public – especially for SDG planning, monitoring, evaluation and implementation. Yet the system still lacks a common methodology or standards for capacity development.” There are many competing ideas on how to build the evaluation capacity of national institutions so that they can carry out their mandate to monitor and evaluate progress towards the SDGs. ECDG is of the opinion that the theoretical framework offering the best prospect for success is Organisation Development (OD). According to French and Bell, OD is the applied behavioural science discipline dedicated to improving organisations and the people in them through the theory and practice of planned change. ECDG believes that the international community should not aim to make everybody in national institutions an evaluation expert. The goal
This blog deals with the paradox of evaluating progress towards the Sustainable Development Goals (SDGs). Since they are goals, one might naturally think that a goal-based approach would be most appropriate. Paradoxically, however, ECDG believes that goal-based evaluation may not be the best way to evaluate the SDGs; evaluation that incorporates the principles of systems thinking would be a better choice. There are two ways this could be accomplished. The first is that the object of evaluation could be conceptualized in terms of systems. In this instance, Member States could be thought of as systems composed of elements such as government, civil society, the private sector, academia, and national evaluation organisations. The structure of the system will, in large measure, determine the types of initiatives that can be undertaken, and it is the initiatives that will, ultimately, determine progress towards the SDGs. That progress could then be evaluated using traditional methods. The second is that progress towards the SDGs could be evaluated using approaches that are themselves based on systems thinking. For example, ECDG has written a guidebook that attempts to adapt Peter Checkland’s Soft Systems Methodology to the task of developing national evaluation capacity.
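As a hedged illustration of the first option, the sketch below models a Member State as a set of elements and the links between them; every element name and link is invented for illustration. The claim that the structure of the system determines which initiatives can be undertaken then becomes a simple connectivity check.

```python
# Toy systems view of a Member State; all elements and links invented.
links = {
    "government":        {"civil_society", "private_sector", "national_eval_org"},
    "civil_society":     {"government", "academia"},
    "private_sector":    {"government"},
    "academia":          {"civil_society"},
    "national_eval_org": {"government"},
}

def can_undertake(partners: set) -> bool:
    """A joint initiative is treated as feasible only if all required
    partners are connected, directly or indirectly, in the system."""
    start = next(iter(partners))
    seen, frontier = {start}, [start]
    while frontier:                         # breadth-first traversal
        for neighbour in links.get(frontier.pop(), ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return partners <= seen

# Feasible only because civil society bridges government and academia.
print(can_undertake({"government", "academia"}))  # True
```

Remove the civil_society–academia link and the same initiative becomes infeasible, which is the sense in which structure shapes initiatives, and hence progress.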
The Office of Internal Oversight Services (OIOS) recently produced an Advisory Note on Follow-up and Review of the SDGs: https://oios.un.org/resources/2017/09/FcdUD2er.pdf. Some in the United Nations system have lauded the Note as useful. However, ECDG has serious concerns about certain aspects of the advice it provides. Thank goodness it is non-binding. According to the note, “. . . many countries will require substantial statistical capacity building support from the UN”. The note goes on to state that “. . . it is also important to note that – when it comes to national reporting – Member States are not obliged to take into account any data generated through mechanisms that are additional or parallel to their own national statistical services.” ECDG believes it would be important to apply principles of systems thinking to the follow-up and review of the SDGs. One principle that is relevant in this instance is the paradox of local optimization, which occurs when part of a system is optimized at the expense of other elements of the system. In this instance, the note appears to advise Member States to optimize their national statistics systems . . . and to sub-optimize other forms of M&E.
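The paradox is easy to show with numbers. In the toy model below, every curve and figure is an assumption, not drawn from the note: a fixed M&E budget is split between the national statistics system and other M&E functions, both with diminishing returns, and optimizing the statistics system alone produces less total follow-up-and-review capability than a balanced split.

```python
import math

BUDGET = 100.0  # fixed total M&E budget (illustrative units)

def stats_output(spend: float) -> float:
    """Output of the national statistics system; diminishing returns (assumed)."""
    return 12 * math.sqrt(spend)

def other_me_output(spend: float) -> float:
    """Output of other M&E functions, e.g. evaluation; also diminishing returns."""
    return 10 * math.sqrt(spend)

def total_output(stats_share: float) -> float:
    spend = BUDGET * stats_share
    return stats_output(spend) + other_me_output(BUDGET - spend)

local = total_output(1.0)  # local optimization: everything to statistics
best_share = max((s / 100 for s in range(101)), key=total_output)

print(f"all budget to statistics: total = {local:.1f}")
print(f"best split ({best_share:.0%} to statistics): total = {total_output(best_share):.1f}")
```

Under these assumed curves, the best split sends about 59% of the budget to statistics and yields roughly 156 units of total output, against 120 when statistics absorbs everything; in this sketch, local optimization costs the wider system about a quarter of its capability.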
Thanks to ECDG’s Advisory Group member, Scott Bayley, for sharing this thoughtful paper on leadership’s critical role in support of evaluation. Senior leaders have many opportunities to demonstrate their support for EVALUATION (using evidence to inform decision making at all stages of the program management cycle to drive continuous improvement):
- By making statements advocating the benefits of EVALUATION.
- By consistently modelling/demonstrating a commitment to EVALUATION in their own work: actively seeking performance feedback and using it to drive continuous improvement. Leaders need to give performance matters their constant attention, including personal attention at the operational level.
- Through their support for organizational EVALUATION policies, systems, guidelines, tools, and actual practices/culture.
- By championing an explicit EVALUATION capacity building program that is adequately resourced, implemented and monitored/evaluated.
- By initiating engagement with key external stakeholders and building a consensus about performance expectations.
- By publicly participating in EVALUATION training and coaching activities.
- By taking EVALUATION considerations into account when making resource allocation decisions.
- By supporting the creation of operational communities of practice.
- Through the questions that senior leaders ask of staff and colleagues at meetings: do conversations focus on undertaking activities/disbursing $ or on achieving/improving results?
- Through their expectations of their own staff: by emphasizing learning and improvement,
Thanks to ECDG’s Advisory Group member, Scott Bayley, for this discussion-provoking blog comparing two primary, though often incompatible, evaluation purposes: accountability and learning. Evaluation can be defined as the systematic empirical examination of a program’s design, implementation and impacts with a view to judging the program’s merit, worth and significance. Evaluation reduces uncertainty for stakeholders, aids decision making, and serves a political function. Evaluations are commonly undertaken for the following reasons: policy development; improving the administration/management of a program; demonstrating accountability; and facilitating stakeholder participation in decision making. Evaluation studies are intended to add value for program stakeholders. If the role of the private sector is to generate profits in the context of changing market forces, the role of the public sector is to create value in the context of changing political forces. Guiding questions for an evaluation department within an aid agency include: Who are our clients? What do they want/need? How can we create value for them? How will we monitor our results? How has our program responded to the lessons being learned? Ever since the evaluation of aid programs began in the 1960s, there has been tension and controversy over using evaluation
A little over a decade ago, talk was swirling around the greater nonprofit community, particularly from donors, regarding the need to develop institutional evaluation capacity. However, there seemed to be no clear vision of the process for developing that capacity. In 2004, ECDG was formed and an ECD Toolkit was created to provide substantive guidance on what organizations could do. More recently, the call for national evaluation capacity development (NECD) has become more pronounced. It is emanating from national governments and their multilateral and bilateral partners and donors. When discussing national evaluation capacity, “national” is frequently treated as synonymous with governmental evaluation capacity. Members of parliament are realizing the importance of evaluation; results-based management practices are becoming more established, along with goal-based evaluations; and staff in ministries are increasingly being trained in conducting evaluation. These are important initiatives, but they are only part of NECD. NECD is frequently initiated as an external endeavor: the UN, other multilateral and bilateral donors, and external consultants act as the impetus and provide support for strengthening evaluation within governments. Beyond government agencies, the synergy from cross-sectoral evaluation capacity spanning the public sector, private sector and civil society will strengthen
The Sustainable Development Goals (SDGs), otherwise known as the 2030 Agenda, are a UN initiative of 17 goals set for all nations, not just poorer ones, to aid their efforts to end poverty, protect the planet, and ensure prosperity for all. There are targets established for each goal and an end date of 2030 by which to achieve them. Arbitrary? Of course: development, by its very nature, is an ongoing process. But focused, development-oriented agendas with set targets and indicators can drive policies and programming towards measurable improvement. The policies and programming that countries put into place to address the SDGs should be monitored and will need to be evaluated to see whether they are playing out as first envisioned. The SDG initiative and its predecessor, the Millennium Development Goals (MDGs), call for nations to take charge of their own development and goal setting to achieve their development aims. This is a big deal. In 2011, ECDG sponsored an international workshop on evaluation capacity development, at which two recently commissioned reports from SE Asia were presented. Their findings included the following: “Evaluation practices in the studied countries were mostly donor driven. In most cases,
By popular demand, the Evaluation Capacity Development Group is adapting its New Guide, published in 2013, to the task of National Evaluation Capacity Development. The publication is scheduled for release in March 2017. During the literature review, it was found that the 2030 Agenda seems to commit the United Nations system and other multilateral institutions to actively supporting follow-up and review processes by strengthening national data systems and evaluation programmes, particularly in African countries, least developed countries, small island developing States, landlocked developing countries and middle-income countries. The harsh reality is that the evaluation offices of most UN agencies, programmes and funds do not contemplate National Evaluation Capacity Development in their mandates, and among those that do, resources are often a constraint. ECDG sees two alternatives: (1) the United Nations Evaluation Group (UNEG) should begin right away to manage expectations regarding National Evaluation Capacity Development; or (2) UNEG should support agencies, programmes and funds in re-examining their mandates with respect to National Evaluation Capacity Development.
United Nations. Transforming our World: The 2030 Agenda for Sustainable Development, A/RES/70/1, para. 74.
United Nations Evaluation Group. Evaluation in the SDG era: lessons, challenges and opportunities for UNEG, p. 61.