(1999; 250 pages)
The development and selection of indicators followed 10 logical steps (see box), which are described more fully below.
1. Selection of a conceptual framework for indicator development.
2. Literature review to identify potential key issues and strategies/components of pharmaceutical policy in developing countries.
3. Delphi survey to develop consensus on key issues and strategies/components of pharmaceutical policy in developing countries.
4. Experts' consultation to review general difficulties in indicator development and to define criteria for selection of indicators.
5. Sets of indicators for monitoring implementation of drug policy in developing countries proposed by the working group (background, structural and process indicators).
6. Field testing of proposed indicators in six countries to assess the clarity, applicability and usefulness of the indicators selected.
7. Review of the first draft manual by experts within and outside WHO to assess the methodology used for indicator development and the categories of indicators.
8. Set of outcome indicators to measure progress towards the overall objectives proposed by the working group.
9. Review of methodology for indicator calculation by epidemiologists and statisticians within and outside WHO to assess the relevance of the proposed methodologies and the appropriateness of the sampling procedures.
10. Finalization of the manual based on a review of all comments received and incorporation of appropriate revisions.
Definition of a conceptual framework:
A working group including people with extensive field experience, academics and a WHO/EDM staff member was set up at the Harvard School of Public Health to support WHO/EDM in developing indicators for NDPs. The first task of the group was to define a logical approach to indicator development which would serve as the conceptual framework for the subsequent activities. The various steps of this logical approach are outlined below (see box).
Conceptual framework for indicator development
• What are the key issues in the pharmaceutical sector? (diagnosis of problems)
A literature review to identify the main issues currently faced by developing countries in the pharmaceutical field was carried out by the working group at the Harvard School of Public Health. The review included both published and unpublished documents related to the pharmaceutical sector in more than 50 countries. On the basis of this review, the working group compiled a comprehensive listing of major problems faced by developing countries in the pharmaceutical sector, which were called "key issues". The working group then identified for each key issue those elements of the pharmaceutical system that have a major impact on performance; these were called "key components".
The next step was to achieve general agreement on the ranking of both the key issues and the key components in terms of importance for intervention, as a way to establish priorities. To achieve this general agreement, a Delphi survey was carried out.5
5 The Delphi technique is a method for structuring group communication so that the process allows a group of individuals, as a whole, to deal with a complex problem and reach group consensus. The process involves the use of a series of questionnaires designed by a monitor group and then sent by mail in several rounds to a respondent group of experts who remain anonymous. After each round, the results are summarized by the monitor team and used to develop a questionnaire for the next round. The summary and new questionnaire are then sent to all members who responded. A Delphi survey is considered complete when a convergence of opinion occurs or when a point of diminishing returns is reached.
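The stopping rule described in the footnote — stop when opinions converge or when returns diminish — can be sketched in code. This is only an illustration: the rating scale, the interquartile-range convergence measure and both thresholds below are assumptions for the example, not details of the actual survey.

```python
# Illustrative sketch of a Delphi stopping rule (hypothetical thresholds):
# rounds continue until expert ratings converge (small spread) or the
# group opinion barely shifts between rounds (diminishing returns).
from statistics import median, quantiles

def iqr(ratings):
    """Interquartile range of a list of expert ratings."""
    q1, _, q3 = quantiles(ratings, n=4)
    return q3 - q1

def delphi_complete(prev_round, curr_round,
                    convergence_iqr=1.0, min_shift=0.25):
    """Stop when ratings have converged, or when the group median
    moved less than min_shift since the last round."""
    converged = iqr(curr_round) <= convergence_iqr
    diminishing = abs(median(curr_round) - median(prev_round)) < min_shift
    return converged or diminishing

# Example: one key issue rated by ten experts on a 1-9 importance scale.
round_1 = [3, 5, 9, 2, 7, 8, 4, 6, 5, 7]   # opinions widely spread
round_2 = [6, 6, 7, 6, 6, 7, 6, 6, 6, 7]   # opinions have converged
print(delphi_complete(round_1, round_2))    # → True
```

In practice the monitor team would apply such a check per question and close the survey only when most items satisfy it.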
The study was designed by a monitor group set up at the Harvard School of Public Health. The Delphi group consisted of 54 people with substantial expertise in pharmaceutical policy in developing countries. It included people from different types of institutions: multilateral donors, such as the World Bank and the European Union; the UN system, such as WHO and UNICEF; nongovernmental organizations; research and consulting groups; pharmaceutical companies; universities; and individual consultants specializing in drug policy implementation. Half were pharmacists or physicians, and half were economists, managers, policy analysts, anthropologists or statisticians. The group included people from 12 countries on four continents.
Through the Delphi technique, a high rate of agreement was obtained on key issues and key components. Seven key components/strategies were mentioned as priorities for action by a large majority of the Delphi respondents (see box).
• The establishment of appropriate drug legislation and regulation.
• The selection of essential drugs and the registration process.
• The importance of maintaining a significant drug allocation in the health budget and developing a relevant financing policy in the public sector.
• The improvement of drug procurement procedures in the public sector.
• The strengthening of drug distribution and logistics in the public sector.
• The establishment of a drug pricing policy in both public and private sectors.
• The role of information and continuing education programmes to improve drug use.
The Delphi technique established these seven key components as particularly important for achieving the objectives of a national drug policy. They were therefore adopted as the basis for selecting indicators to monitor the process of implementation of pharmaceutical policy.
A major issue in indicator development is defining selection criteria. During an informal consultation in Geneva, a set of guiding principles and criteria was discussed and prepared to provide a common approach for indicator development in the field of pharmaceuticals.6 It was agreed that indicators should be developed according to the following principles:
• Usefulness for action: The data provided in the indicator should primarily help strengthen national drug policy and programme management, and should secondarily help to promote goals and targets set up at the international level. The indicator should be useful for decision-making and action at the level where the data are collected, which can increase the reliability of data collected.
• Clarity: The indicator should express a single idea that is generally agreed to be important.
• Ease of generation and measurement: The data should, as far as possible, result from the regular data collection system. If the indicator requires an additional survey, this should be within the capability and responsibility of staff at the level it is performed.
• Consistency and validity: The indicator should be proven capable of being recorded throughout the system with an acceptable degree of validity and reliability.
• National relevance: The indicator should serve to measure progress towards the goals, objectives and targets stated in national policy.
• Ease of comparison: The indicator should, when feasible, provide quantitative data that can be compared with specific norms and objectives.
6 Development of indicators for monitoring national drug policies, Department of Essential Drugs and Medicines Policy (WHO/DAP/92.6).
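The six principles above amount to a screening filter for candidate indicators. A minimal sketch of such a screen is shown below — the indicator names, the 0-2 rating scale and the pass threshold are invented for the example; the manual's own screening was a qualitative review, not a scored one.

```python
# Hypothetical screen of candidate indicators against the six criteria.
# Criterion names paraphrase the principles in the text; scores (0-2)
# and the candidate indicators are illustrative only.
CRITERIA = [
    "usefulness_for_action", "clarity", "ease_of_measurement",
    "consistency_and_validity", "national_relevance", "ease_of_comparison",
]

def meets_all_criteria(ratings):
    """Retain an indicator only if it was judged acceptable
    (score >= 2) on every one of the six criteria."""
    return all(ratings.get(c, 0) >= 2 for c in CRITERIA)

candidates = {
    "essential drugs list updated in last 5 years":
        dict.fromkeys(CRITERIA, 2),                      # passes all six
    "average staff satisfaction with supply chain":
        {**dict.fromkeys(CRITERIA, 2), "clarity": 1},    # fails on clarity
}
retained = [name for name, ratings in candidates.items()
            if meets_all_criteria(ratings)]
print(retained)  # → ['essential drugs list updated in last 5 years']
```

An all-criteria rule like this mirrors the step described below, in which the initial list was reviewed against the criteria and some proposed indicators were eliminated.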
Sets of indicators proposed
The working group at the Harvard School of Public Health next proposed structural and process indicators to measure the most important activities in each key component, as well as background information indicators to capture the implementation context. The initial list was reviewed against the six criteria above, and a number of proposed indicators were eliminated. This work produced three provisional lists of indicators: background information, structural and process indicators.
Six countries were selected for field testing the three lists of provisional indicators, with the following objectives:
• to validate the selection of indicators in various situations;
• to assess for each indicator the clarity, ease of collection, validity and usefulness for action;
• to identify other indicators that should be added, and existing indicators that were unnecessary and should be removed;
• to assess the usefulness of an indicator-based monitoring system for the implementation of national drug policy.
The six countries selected for field testing (Central African Republic, Guinea, Malawi, Nepal, Philippines and Tunisia) provided a range of national contexts and drug policies. The results of the field tests showed that policy-makers and country managers were interested in having effective, accepted tools for monitoring the implementation of their national drug policy, and were eager to implement such an indicator-based system in their own countries. The indicators selected were considered appropriate: they were easily understood, simple to apply and relatively easy to collect, although some required special surveys. These results confirmed the relevance of the project and of the approach taken.
A preliminary analysis of the field tests took place at a two-day meeting to discuss the relevance of the indicators to each country. The meeting allowed participants to review the full lists of indicators to determine whether any should be removed or revised and to discuss whether new indicators should be added. Additional analysis of the field tests was also done by the working group, which reviewed all comments and incorporated many suggestions from the field tests.
Review of the first draft manual by experts within and outside WHO: The draft manual was sent to 60 reviewers within and outside WHO for comments on:
• the methodology used for indicator development;
• the usefulness of the categories of indicators proposed;
• the lists of indicators selected;
• the usefulness of the manual at country level.
Suggestions for outcome indicators were also requested. More than 30 reviewers sent comments, which were classified into broad categories and systematically analysed by the working group along with the results of the field tests. There was general agreement on the usefulness of the manual, on the relevance of the methodology used and on the lists of indicators. Most of the comments dealt with specific indicators and with the need for a methodology to collect and calculate the indicators. A few reviewers proposed new indicators, which were added provided that they fitted the six criteria previously defined. Some reviewers commented that the indicators could also be used in developed countries.
Set of outcome indicators proposed
The working group identified a small number of outcome indicators that could measure the impact of various components on the overall objectives of national drug policy: availability and affordability of essential drugs, good quality of drugs and rational use of drugs. This included the impact of policy on the private as well as the public sector, since national policies affect both. Some indicators previously developed by WHO to assess drug use were incorporated into this list.
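To illustrate how an outcome indicator of this kind might be computed, the sketch below calculates availability of essential drugs as the mean percentage of a tracer basket found in stock across surveyed facilities. The basket, facility names and stock data are invented for the example; the manual itself defines the actual indicators and their calculation.

```python
# Illustrative calculation of a drug-availability outcome indicator:
# for each surveyed facility, the share of a tracer basket in stock on
# the survey day, averaged across facilities. All data are invented.
def percent_available(stock_surveys, basket):
    """stock_surveys maps facility -> set of drugs in stock on survey day.
    Returns the mean percentage of the basket available per facility."""
    shares = [
        sum(drug in stock for drug in basket) / len(basket)
        for stock in stock_surveys.values()
    ]
    return 100 * sum(shares) / len(shares)

basket = {"amoxicillin", "paracetamol", "ORS", "cotrimoxazole"}
surveys = {
    "HC-1": {"amoxicillin", "paracetamol", "ORS", "cotrimoxazole"},  # 4/4
    "HC-2": {"amoxicillin", "ORS"},                                  # 2/4
}
print(percent_available(surveys, basket))  # → 75.0
```

Averaging per-facility shares, rather than pooling all stock observations, keeps each facility's weight equal regardless of basket size — one plausible design choice among several.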
Review of methodology for indicator calculation
In order to assist countries in implementing the indicators, the working group prepared guidelines for the collection and analysis of the data through record reviews, interviews and surveys. These guidelines include a detailed discussion of the procedures for conducting surveys, as well as data collection forms for central and field levels. After careful review by experts in epidemiology and statistics, the working group revised the guidelines to ensure the validity of the methodologies and the appropriateness of the sampling procedures.
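One common procedure of the kind such guidelines cover is a systematic random sample of facilities for the field-level surveys. The sketch below shows the general technique only; the frame size, sample size and seed are illustrative assumptions, not values from the manual.

```python
# Illustrative systematic random sampling of health facilities for a
# field-level indicator survey: random start, then every k-th unit.
# Frame and sample sizes are invented for the example.
import random

def systematic_sample(frame, n, seed=None):
    """Draw a systematic sample of n units from a sampling frame:
    pick a random start in [0, k), then take every k-th unit,
    where k = len(frame) // n."""
    k = len(frame) // n
    start = random.Random(seed).randrange(k)
    return [frame[start + i * k] for i in range(n)]

facilities = [f"facility_{i:03d}" for i in range(120)]  # invented frame
sample = systematic_sample(facilities, n=20, seed=42)
print(len(sample))  # → 20
```

Systematic sampling is easy to execute in the field from a simple facility register, which is one reason sampling reviews often consider it alongside simple random sampling.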
Finalization of the first and second editions of the manual
The first version of the manual was prepared after further review inside and outside WHO. The indicators were then used in more than 12 countries in 1996, and the current version of the manual has been slightly modified to take into account the experience gained in these countries (footnote 1). In addition, WHO/EDM is preparing a manual on issues related to monitoring systems in the drug field, which will complement this one and integrate countries' practical experiences.