
Unpacking M&E with Dr Kanyamuna: building a govt M&E system: what to do and what not to do – Part I

ZAMBIA needs a strong and sustained government. But what is even more urgent are governance systems that are solidly established, credible, reliable, predictable and created to outlive both individual political regimes and individual persons. From where I stand, monitoring and evaluation (M&E) systems constitute one of the priority systems our Government of Zambia must fix, as a matter of necessity. Other systems such as financial management, skills development and management, procurement management, anti-corruption and so forth are equally important. A good M&E system is crucial because it cuts across all these other systems, providing empirical evidence on what works, what does not work, and why.

Before I share some of the key factors pertaining to ‘what to do and what not to do’ in building a stronger government M&E system, there are three (3) defining characteristics of successful M&E systems which must be appreciated from an expert perspective.

The first is intensive utilisation of the M&E information provided by the system. It may seem trite to argue that M&E information should only be collected if it is going to be used, but most evaluators in governments (and in donor agencies) have a surprisingly poor understanding of the extent to which the M&E information they produce is actually used by others. If M&E information is not being used, then it is important to discover why. Is it because the information is regarded as being of poor quality, or not timely, or because evaluations have not addressed the most relevant questions concerning programme performance? Or is it because the intended users within government, such as the finance or planning ministries, have neither the skills nor the interest to use this information in their work?

Reliable, quality information is the second feature of successful M&E systems. There are various standards of what constitutes quality monitoring data and evaluations, and these standards can be used to assess the reliability of the information that any M&E system produces. Some government agencies have quality control mechanisms of this kind in place; most, however, do not appear to conduct or commission formal reviews of the quality of their work.

The third characteristic of a successful M&E system is sustainability. This relates to the likelihood that the M&E system will survive a change in administration, in government ministers or in top officials. When the utilisation of M&E information is firmly embedded, that is, mainstreamed, in core government processes such as the planning and budget cycles, it can be said to be institutionalised and is thus likely to be sustained over time. Conversely, when M&E has only a handful of key supporters, or is little used, or is largely funded by donors rather than by the government itself, sustainability is less likely.

Today, I want to acknowledge that many developed and developing countries have accumulated substantial experience in building M&E systems. As with any form of capacity building, there are a number of hard-earned lessons about what works best and what does not, and to learn from the successes and failures of others is a success in itself. In particular, the Zambian government has no room to manoeuvre: it must decide to invest vigorously in the structured creation of functional M&E arrangements across all government institutions.

Lesson 1: Need for substantive government demand for M&E information. Such demand is necessary if a serious effort to build an M&E system is to be started and sustained. A significant effort is required to build an M&E system, including the creation or upgrading of data systems, with decisions about the types of data to be collected, data collection methods, storage, quality control and transmission. Equally important components of the M&E system include the training of statistical analysts; the choice of evaluation tools and techniques, and their adaptation to local circumstances and priorities; the training of evaluators and the development of national evaluation consultants; the creation of M&E offices in lead ministries and preferably in all sector ministries; the training of the users of M&E information, including mid-level analysts, senior officials in central and sector ministries, and possibly their ministers; and the creation of a bureaucratic infrastructure to decide which government programmes should be evaluated and what issues should be addressed in each evaluation. Frankly, my caution is that this effort is not worthwhile unless the resulting M&E information is likely to be used intensively.

Lesson 2: Incentives are a key part of the demand side. Strong incentives are needed for M&E to be conducted and for the information to be used. M&E experts often make the basic mistake of asserting that M&E information is intrinsically a good thing and that, if made available, it will automatically be used. This technocratic view that M&E has inherent merit is naïve; M&E information has value only if it is reliable and if it is used intensively, and utilisation does not usually happen by chance. There need to be incentives for M&E information to be used by programme managers in their day-to-day work, by budget and planning officials responsible for advising on policy options, and by parliament in its accountability oversight role. There are three (3) types of incentives: carrots, sticks and sermons. An example of a carrot is the provision of greater autonomy to managers who can demonstrate (through reliable M&E information) that their programmes are performing well. An example of a stick is to set challenging (but realistic) performance targets that each ministry and programme manager is required to meet. An example of a sermon is a high-level statement of support for M&E, such as from a president or influential minister. Many of these incentives have been applied successfully in building M&E systems in developed and developing countries, though Zambia is yet to make this kind of effort and establish such a record.

Lesson 3: It helps to start with a diagnosis of what M&E functions already exist in the country, in the government, academia and the consulting community. A diagnosis should identify the strengths and weaknesses of what exists on both the demand and supply sides. A diagnosis is really a type of evaluation, and the very process of conducting it provides an opportunity for key stakeholders within the government to become more familiar with M&E and its potential benefits to the government. A diagnosis naturally leads to an action plan to strengthen M&E, which can help build a coalition of support from interested sector ministries, the donor community and other key non-state actors.

Next week (in Part II), I will share additional crucial lessons which, if expertly and committedly pursued by government, will ensure that Zambia’s development path is no longer as it has been and as it currently is; we shall be on the rise in public knowledge and deed, and in our drive for a transformed society. Aluta continua for a Zambia with stronger M&E arrangements for accountability.

Dr Vincent Kanyamuna holds a Doctor of Philosophy in Monitoring and Evaluation and is a lecturer and researcher at the University of Zambia, Department of Development Studies. For comments and views, email: vkanyamuna@unza.zm
