Unpacking M&E with Dr Kanyamuna: the Politics of Monitoring and Evaluation

BECAUSE monitoring and evaluation (M&E) is a candid bearer of both good news and bad news, many leaders and authorities at different organisational levels tend to be sceptical about investing in stronger M&E systems. The problem is even sharper when a government or organisational regime knows it is running affairs in unscrupulous ways. Such leaders will do whatever it takes to block the strengthening of M&E systems. They would rather have weak M&E than strong M&E; indeed, they would be comfortable working with very weak or no M&E at all, the better to hide their development sins. In many cases they will agree that M&E is critical for the growth and development of a country or organisation, yet when it comes to supporting the M&E function they are the first to curtail such efforts at all costs. They cut M&E budgets, they ensure that budgets for M&E activities are not released for implementation, and they justify why unspecialised staff should continue to serve as M&E personnel. They commit these acts deliberately and at will, protecting their selfish interests and corrupt tendencies at the expense of country and organisational growth and development.

Only a well-functioning M&E system can uphold the principles of results-orientation, iterative learning, evidence-based policy-making and accountability. Learning and feedback form one set of M&E functions. For learning to be effective (improving policy-making, but also feeding into a broader change of opinion in society), M&E is necessarily 'uncomfortable', highlighting both negative and positive experiences.

Apart from learning, M&E also has an accountability function. When results are not met or policies are not implemented as promised, this may have implications for the politicians who were engaged to deliver them. Holding leaders to account is one of the cornerstones of democratic societies. An adequate M&E system has to provide the elements needed to check whether promises were lived up to. When they were not, political elites can be confronted with the evidence. In the next period, leaders will have to strengthen or reorient policies and their implementation, or relinquish power because of their failure to deliver. It must be clear, though, that M&E cannot be about accountability alone. Little progress will be made through accountability without learning. A change in power, the strongest application of accountability, will not automatically lead to policy improvement. Learning from failures, but also from successes, from what works and what does not, is crucial to progress.

Those involved in or subjected to M&E often experience the exercise as punitive: people and institutions fear M&E because of the negative implications it may have for their personal careers or the survival of their institutions. This limited, accountability-oriented perception narrows the potential contributions of M&E. While personal fears about accountability are understandable, one would think the learning aspect offers nothing but positive returns for a government, since it can only help strengthen governmental policies. Yet certain governments may not be eager to learn from evidence. When less is known about what works and what does not, political leaders retain more room for manoeuvre to pick and choose strategic decisions.

Whereas a well-committed government may welcome constructive learning, less committed governments may have an interest in shielding information on certain policies, projects and programmes. A sound M&E system counters this shielding: it unveils all sorts of information, wanted and unwanted, beautiful and ugly. In spite of the inherent tensions between 'political elites' and 'M&E information', most attention in developing a water-tight M&E system goes to technical matters: creating a strong statistical office, installing good data collection and management information systems, strengthening mechanisms for sound analysis, checking on budgets, creating tracking systems, et cetera. The hypothesis of my article today challenges this purely technocratic characterisation of M&E and postulates that politics intrudes heavily into M&E systems. Many decisions have to be taken in developing an M&E system, and decision-making is by definition a political matter; it is therefore interesting to see who takes what type of decision or attitude in the M&E framework of a country, and how these are intermingled with interests and power positions. Common M&E conceptual frameworks distinguish four fields of M&E decision-making: i) the institutional set-up; ii) capacity; iii) identification of indicators and targets; and iv) feedback.

Firstly, the institutional set-up is determined by many sub-questions: the legal M&E framework, the mandate, the flow of information, coordination, support and central oversight, the place of the statistical office, the role of non-state actors, and the relationships between the executive and the legislature, between line and central ministries, and between central and decentralised levels. The second field is M&E capacity: are capacities present, who receives training, and on what issues? A lack of capacity, left unremedied, naturally has implications for the M&E system and its functioning. The third field is determining indicators and targets: what are the types, number and quality of surveys; which indicators are chosen; what are the levels of disaggregation; and which targets are set? Finally, deciding on the feedback of M&E is important for its functioning and for fulfilling its two objectives, accountability and learning. Feedback has two dimensions: not only is the distribution of M&E results important, but their usage and integration also deserve specific attention. What is the linkage between policy-making, planning, budgeting and M&E? Which stakeholders do what with which results? Who has access to what type of results? What are the barriers and incentives to distribute and use results?

While ideal answers can be imagined for each of the questions above, each country has to formulate and define the grid and principles of its own system. No country has the perfect M&E system that may serve as a blueprint for all others. Nevertheless, some basic principles have to be upheld in order to avoid inhibiting the system. One such important principle is independence. Data and analysis should not be biased; if biased, they lose their value for learning and accountability. The implementing institution should thus not be the one evaluating; no one can be forced into self-incrimination. Some mechanisms can increase the likelihood of independence, such as locating the evaluation function at a different level from implementation (e.g. more centralised, with an immediate feedback loop to parliament) and installing additional checks and balances (e.g. additional surveys by external actors).

While these elements could be grouped under sound technocratic governance, my aim here is to demonstrate that technocratic and political governance are, in practice, not as neatly divided as they are in theory. When the various decisions in the four M&E fields are taken, politics, interests and power all play a role. Aluta continua in the struggle towards better M&E systems, devoid of institutional politics and manipulation by politically inclined elites.

Dr. Vincent Kanyamuna holds a Doctor of Philosophy in Monitoring and Evaluation and is lecturer and researcher at the University of Zambia, Department of Development Studies. For comments and views, email: vkanyamuna@unza.zm
