
Indicators are required for all levels of results-based monitoring and evaluation systems

Today, I continue to elaborate on the need for performance indicators in development efforts, be they government or non-government. Every development agency requires a well-formulated results framework supported by a robust indicator system.

In the absence of performance indicators, it would be impossible to determine what works, what does not work, and why. Thus, setting indicators to measure progress in inputs, activities, outputs, outcomes, impacts and ultimate goals is important in providing necessary feedback to the management system. It helps managers identify those parts of an organisation or government that may, or may not, be achieving results as planned. By measuring performance indicators on a regular, determined basis, managers and decision makers can find out whether projects, programmes, and policies are on track, off track, or even doing better than expected against the targets set for performance. This provides an opportunity to make adjustments, correct course, and gain valuable institutional and project, programme, or policy experience and knowledge. Ultimately, of course, it increases the likelihood of achieving the desired outcomes.

Translating outcomes into outcome indicators: When we consider measuring “results”, we mean measuring outcomes, rather than only inputs and outputs. However, we must translate these outcomes into a set of measurable performance indicators. It is through the regular measurement of key performance indicators that we can determine if outcomes are being achieved. For example, in the case of the outcome “to improve student learning”, an outcome indicator regarding students might be the change in student scores on school achievement tests. If students are continually improving scores on achievement tests, it is assumed that their overall learning outcomes have also improved. Another example is the outcome “reduce at-risk behaviour of those at high risk of contracting HIV/AIDS”. Several direct indicators might be the measurement of different risky behaviours for those individuals most at risk.
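To make the student-learning example concrete, the following is a minimal sketch of how such an outcome indicator might be computed. The scores and names below are hypothetical illustrations, not drawn from any actual school system.

```python
# Minimal sketch of an outcome indicator: the change in mean student
# achievement-test scores between a baseline and a follow-up year.
# All scores below are hypothetical, for illustration only.

baseline_scores = [52, 61, 47, 58, 66, 49]   # baseline-year test scores
followup_scores = [57, 64, 55, 60, 71, 54]   # follow-up-year test scores

def mean(scores):
    """Arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)

# The indicator value: change in mean score since baseline.
# A sustained positive change suggests learning outcomes are improving.
change_in_mean = mean(followup_scores) - mean(baseline_scores)
print(f"Change in mean achievement score: {change_in_mean:+.1f} points")
```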

As with agreeing on outcomes, the interests of multiple stakeholders should also be taken into account when selecting indicators. I previously pointed out that outcomes need to be translated into a set of measurable performance indicators. Yet how do we know which indicators to select? The selection process should be guided by the knowledge that the concerns of interested stakeholders must be considered and included. It is up to managers to distil stakeholder interests into good, usable performance indicators. Thus, outcomes should be disaggregated to make sure that indicators are relevant across the concerns of multiple stakeholder groups—and not just a single stakeholder group. Just as important, the indicators have to be relevant to the managers, because the focus of such a system is on performance and its improvement. If the outcome is to improve student learning, then one direct stakeholder group is, of course, students.

However, in setting up a results system to measure learning, education officials and governments might also be interested in measuring indicators relevant to the concerns of teachers and parents, as well as student access to schools and learning materials. Thus, additional indicators might be the number of qualified teachers, awareness by parents of the importance of enrolling girls in school, or access to appropriate curriculum materials. This is not to suggest that there must be an indicator for every stakeholder group. Indicator selection is a complicated process in which the interests of several relevant stakeholders need to be considered and reconciled. At a minimum, there should be indicators that directly measure the outcome desired. In the case of improving student learning, there must be an indicator for students. Scores on achievement tests could be that particular indicator.

The “CREAM” of good performance indicators: The “CREAM” of selecting good performance indicators is essentially a set of criteria to aid in developing indicators for a specific project, programme, or policy. Performance indicators should be clear, relevant, economic, adequate, and monitorable (CREAM). CREAM amounts to an insurance policy, because the more precise and coherent the indicators, the better focused the measurement strategies will be. More precisely: Clear (precise and unambiguous); Relevant (appropriate to the subject at hand); Economic (available at a reasonable cost); Adequate (provide a sufficient basis to assess performance); and Monitorable (amenable to independent validation).
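One possible way to operationalise a CREAM screen is a simple checklist, as in the sketch below. The five criteria come from CREAM itself; the checklist structure and the yes/no judgements are hypothetical, only one of many ways an agency might record such an assessment.

```python
# Illustrative CREAM screening checklist for a proposed indicator.
# The five criteria are from the CREAM framework; the recording format
# (a dictionary of yes/no judgements) is a hypothetical design choice.

cream_assessment = {
    "clear":       True,   # precise and unambiguous
    "relevant":    True,   # appropriate to the subject at hand
    "economic":    False,  # available at a reasonable cost
    "adequate":    True,   # provides a sufficient basis to assess performance
    "monitorable": True,   # amenable to independent validation
}

# If any one criterion is unmet, the indicator needs rework.
failed = [criterion for criterion, met in cream_assessment.items() if not met]
if failed:
    print("Indicator needs rework; unmet criteria:", ", ".join(failed))
else:
    print("Indicator passes all five CREAM criteria.")
```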

If any one of these five criteria is not met, formal performance indicators will suffer and be less useful. Performance indicators should be as clear, direct, and unambiguous as possible. Indicators may be qualitative or quantitative. In establishing results-based M&E systems, however, I advocate beginning with a simple and quantitatively measurable system rather than inserting qualitatively measured indicators upfront. Quantitative indicators should be reported in terms of a specific number (number, mean, or median) or percentage. “Percents can also be expressed in a variety of ways, e.g., percent that fell into a particular outcome category . . . percent that fell above or below some targeted value . . . and percent that fell into particular outcome intervals . . .”. “Qualitative indicators/targets imply qualitative assessments . . . [that is], compliance with, quality of, extent of and level of . . . . Qualitative indicators provide insights into changes in institutional processes, attitudes, beliefs, motives and behaviours of individuals”. A qualitative indicator might measure perception, such as the level of empowerment that local government officials feel to adequately do their jobs.
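The quoted reporting options might be computed as in the following sketch, where the scores, the 60-point target, and the interval boundaries are all invented for illustration.

```python
# Sketch of the quantitative reporting options quoted above: expressing an
# indicator as a percent falling into an outcome category, above or below
# a target, or within particular intervals. All values are hypothetical.

scores = [45, 52, 58, 61, 63, 67, 70, 74, 78, 85]
target = 60  # hypothetical performance target

# Percent that fell at or above the targeted value.
pct_at_or_above = 100 * sum(s >= target for s in scores) / len(scores)
print(f"Percent at or above target: {pct_at_or_above:.0f}%")

# Percent that fell into particular outcome intervals.
intervals = [(0, 50), (50, 70), (70, 101)]
for low, high in intervals:
    share = 100 * sum(low <= s < high for s in scores) / len(scores)
    print(f"Percent scoring in [{low}, {high}): {share:.0f}%")
```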

Further, I wish to caution that performance indicators should be relevant to the desired outcome, and not affected by other issues tangential to the outcome. The economic cost of setting indicators should also be considered: indicators should be set with an understanding of the likely expense of collecting and analysing the data. For example, the Seventh National Development Plan (7NDP 2017-2021) for Zambia contains hundreds of national and subnational level indicators spanning more than a dozen policy reform areas. Because every indicator involves data collection, reporting, and analysis, our government needs to design and build a robust M&E system just to assess progress toward its poverty reduction strategy. For a poor country with limited resources, this will take some doing. Likewise, in Bolivia the Poverty Reduction Strategy Paper (PRSP) initially contained 157 national-level indicators. It soon became apparent that building an M&E system to track so many indicators could not be sustained in terms of cost, time and manpower. To remedy this, the refined PRSPs for Bolivia contain between 17 and 25 national-level indicators.

Indicators also ought to be adequate: they should not be too indirect, too much of a proxy, or so abstract that assessing performance becomes complicated and problematic. Indicators should be monitorable, meaning that they can be independently validated or verified, which is another argument in favour of starting with quantitative indicators as opposed to qualitative ones. Indicators should be reliable and valid, to ensure that what is measured at one time is also what is measured at a later time, and that what is measured is actually what is intended. Caution should also be exercised in setting indicators according to the ease with which data can be collected. Too often, agencies base their selection of indicators on how readily available the data are, not on how important the outcome indicator is in measuring the extent to which the outcomes sought are being achieved. Aluta continua for a better Zambia with stronger performance measurement frameworks across public and private sectors.

Vincent Kanyamuna holds a Doctor of Philosophy in Monitoring and Evaluation and is a lecturer and researcher at the University of Zambia, Department of Development Studies. For comments and views, email: vkanyamuna@unza.zm
