How to Investigate the Use of Medicines by Consumers
(2004; 98 pages)

7.3 Evaluation

In making a good evaluation plan you should decide what to evaluate, which evaluation design to use, which data collection methods to apply, which key outcome measures to develop, and how to involve the target audience. These decisions are discussed in the sections below.

7.3.1 What to evaluate: process and/or effect?

When defining your evaluation questions you should first review the communication objectives. What does the intervention aim to achieve? At the end of an intervention you can measure the effects of a programme against its objectives (effect evaluation). To understand why an intervention succeeds or fails, you also need to collect information about the way the intervention was conducted (process evaluation). If an intervention was not implemented well, you cannot expect an effect in terms of behaviour change. It is important to find out where in the process the communication activity failed, so that improvements can be made. Below is a list of process evaluation questions by stage of the intervention, followed by the most commonly asked effect questions.

Process evaluation:

Preparation

1. Who conducted the intervention?

2. Why was the intervention selected? (Was the intervention based on research that identified the drug use problem confronted? Was the target audience involved in defining the solution?)

3. Was a needs assessment done?


Planning

4. What objectives were set?
5. What activities were planned?
6. What target audiences were identified?
7. Were the interventions pre-tested?
8. Was a plan made for monitoring/evaluation?


Implementation

9. Which of the planned activities were actually carried out?
10. What messages were disseminated?
11. How many people did the message reach (coverage)?
12. Did the intended audience pay attention to the message?
13. Did the intended audience understand the message, and did it convince them?
14. What problems were encountered in implementing the intervention?


Effect evaluation

15. Did the intervention result in changes in knowledge?
16. Did it result in a change in behaviour?
17. Did it lead to improvements in health?
18. Did it have any negative and/or unexpected impact?


7.3.2 Evaluation methodology

It is not so difficult to document changes in knowledge, behaviour or health. It is much more difficult to prove that the changes are caused by your intervention, and not by another factor. In selecting an evaluation design you need to consider how best you can prove the effects of your intervention.

The best way to prove change is by comparing changes in your intervention communities with changes in control communities. The controls should be similar to the intervention communities in terms of economic status, ethnicity, education, disease and medicines provision profile, and age. There are two evaluation methodologies which involve controls:

• a randomized control design: you study a population over time, assigning randomly who is exposed to the intervention and who is not

• a quasi-experimental design: you specifically select an intervention group, and identify a comparable control group.


If you cannot include controls in your study design, because of lack of resources, or for other reasons, you can evaluate by using a:

• time-series design: you collect information on your outcome measure and on factors which influence it at least three times: before the intervention, and twice after the intervention (for example, one month and six months after it). More frequent data collection both before and after the intervention improves the accuracy of such a method

• pre-post design: you collect data only twice, before and after the intervention. Both of these designs lack controls; they are weaker and may not give clear results.


Figure 7 summarizes these four study designs. In all study designs it is crucial that you measure change using key outcome measures. You need to:

• review the intervention’s communication objectives

• identify in advance what behaviours are likely to change because of the intervention; and what changes in knowledge and attitudes you expect

• limit the number of outcome measures: don’t try to measure all possible changes

• measure more than one dimension. Decide whether you want to measure changes in attitudes, and/or changes in knowledge and/or changes in drug use behaviour

• choose outcome measures that can be clearly defined and reliably measured.


These designs are discussed below; see also Figure 7.


Figure 7. Four evaluation designs

Randomized control design

In a randomized trial one group receives the intervention, while another group acts as a control. Random assignment is a statistical technique that helps to ensure that the intervention group and the control group are equivalent. If the group that received the educational programme performs better than the control group, a statistical test of the difference provides strong scientific evidence for the success of your communication activities. The case-study from Indonesia is an example of a randomized control study.
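As a minimal sketch of such a test (not part of the manual's own methods), the snippet below compares the proportion of carers using a medicine correctly in a hypothetical intervention group and a hypothetical control group, using a chi-square test. It assumes Python with the scipy library; all counts are invented for illustration.

    from scipy.stats import chi2_contingency

    # Hypothetical counts of carers using the medicine correctly after the programme
    #                  correct  incorrect
    table = [[62, 38],   # intervention group (n = 100)
             [41, 59]]   # control group      (n = 100)

    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

A small p-value indicates that a difference of this size is unlikely to be due to chance alone, which is the kind of evidence referred to above.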

BOX 12. SELF-LEARNING FOR SELF-MEDICATION, A CASE STUDY FROM INDONESIA

An Indonesian case-study used a randomized control design to evaluate a problem-based self-learning process in which people were taught how to extract information from package inserts of over-the-counter (OTC) medicines.

Type of intervention and its objectives: The aim of the intervention was to empower mothers to seek and critically assess information about the drugs they commonly use. Two different intervention methods were compared. The first method was to organize a large seminar on the appropriate use of OTC medicines. The second method was to organize small group discussions (6-8 people per group), facilitated by a tutor. An activity guide, worksheets and a reusable set of OTC drugs were used in the small group sessions. The specific objectives of both interventions were to help participants understand the package inserts, help them understand that several brand names have the same or similar active ingredients, and help them assess the quality of the drug information.

Evaluation methodology: The researcher recruited 112 mothers of low to moderate levels of education, and randomly assigned them to three groups. Group A received the intensive training in small groups. Group B attended the large seminar, and Group C served as control. The study aimed to measure changes in knowledge by means of a questionnaire, which was administered pre- and post-intervention; and changes in actual use of OTC medicines in a one-month period after the intervention.

Results: The study found that the knowledge score was significantly higher, and the number of brand name products consumed in the previous month significantly lower, in the intervention group that followed the small group discussions. The researchers concluded that the problem-based self-learning approach was effective; in addition, all the mothers reported that they found the method enjoyable.

See: Suryawati S (2003).

Randomization is rare in studies that evaluate communication activities. One researcher reviewed 67 scientific articles that describe health education programmes in developing countries. He found that only four of these studies had used a randomized design (Loevinsohn, 1990). Partly this is due to lack of resources but it is also related to the way in which communication activities take place under field conditions.

A problem when opting for a randomized control design is that usually the organization implementing the intervention wants to select the groups/communities in which they pilot the intervention. The selection of communities is based on programmatic considerations; for example, communities are selected where community health workers are active, or where there is active community participation.

Quasi-experimental design

If for operational reasons you cannot choose your intervention and control groups randomly, you can use a ‘quasi-experimental’ design. For this you specifically select a control group/community which is comparable in a number of key ways to the community/group where the intervention is conducted, as in the example of Peru below.

BOX 13. AN INTERVENTION TRIAL TO DECREASE THE INAPPROPRIATE USE OF DRUGS FOR CHILDHOOD DIARRHOEA IN PERU

In Peru a study evaluated an intervention aimed at empowering carers of children to treat children with diarrhoea more appropriately.

Type of intervention and its objectives: The intervention’s objectives were to discourage the use of antidiarrhoeals and promote oral rehydration therapy (ORT) in childhood diarrhoea cases.

The interventions were developed based on the results of formative research on people's treatment of diarrhoea. This research revealed that people want a quick cure for diarrhoea. Although they were aware of the need for ORT, they did not know that most diarrhoea cases do not need drugs. The intervention aimed to reinforce the fluid replacement strategies already practised by people in the communities, to increase awareness of the normal duration of a watery diarrhoea episode, and to increase awareness of the possibly hazardous effects of drugs.

A 15-minute “motivational” video was developed to provide information in an entertaining and persuasive way, to change widespread and deep-rooted habits, and to increase participation through subsequent talks. The video featured a Mrs. Druguser, who expressed the beliefs and perceptions that previously prevailed in the community and challenged all the appropriate treatment messages that she received. The video was used to generate debate during community meetings, and was found to do this well. The messages given in the video were reinforced by radio and printed materials. The evaluation was therefore designed to measure the effect of a mix of health education methods; it did not provide evidence on the relative contribution of each of the methods used.

Evaluation methodology: The effects of the intervention were measured by conducting a pre- and post-intervention survey of actual treatment practices in diarrhoea cases in the intervention community and in a control community. The selection of the control community was based on a number of criteria relevant to the study:

• similar diarrhoea prevalence
• similar socio-economic and ethnic characteristics
• availability of national health service and of NGOs.


Data on the process of the intervention were collected during the implementation phase; the effect measurement was done in a three-month period immediately after the intervention phase. Change in health-seeking behaviour was measured by a household survey, among families with pre-school children, of the actual treatment of diarrhoea episodes within a 15-day recall period. Changes in knowledge and attitudes were measured by means of a structured questionnaire.

Results: Knowledge levels increased significantly in the intervention communities. Results of the household survey revealed that the overall use of medicines in childhood diarrhoea cases dropped from 43% to 32% in the intervention community and from 49% to 42% in the control community. The percentage of episodes in which carers reported giving larger amounts of liquids every day of the episode increased significantly, from 51% to 59% in the intervention community; the control community showed a slight increase, but this was not significant.

See: Paredes P et al. (1997). An intervention trial to decrease the unnecessary use of drugs during childhood diarrhea. Paper presented at the International Conference on Improving Use of Medicines, Chiang Mai, Thailand, 1997. http://www.who.int/dap-icium/group3pres.html
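To illustrate the logic of comparing changes against a control, the short calculation below uses the percentages reported in Box 13. It is only a rough illustration of the comparison ("difference in differences"); the study itself tested the changes for statistical significance on the underlying case data. The Python snippet is a sketch, not part of the original study.

    # Percentages of childhood diarrhoea cases treated with medicines (Box 13)
    intervention_before, intervention_after = 43, 32
    control_before, control_after = 49, 42

    change_intervention = intervention_after - intervention_before   # -11 percentage points
    change_control = control_after - control_before                  # -7 percentage points

    # Net change attributable to the intervention, ignoring sampling error
    net_effect = change_intervention - change_control                # -4 percentage points
    print(net_effect)

The net change of -4 percentage points is what the comparison with the control community adds beyond a simple before-after difference in the intervention community alone.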

Time-series design

In some cases a study design using controls is not possible. This is the case, for example, when you implement a mass media campaign: the whole population is then reached by the intervention. Or you may lack the resources to include a control group in your study. You can then evaluate your intervention using a time-series design (although it is preferable that this type of design also incorporates controls). When not using a control group you collect information on your outcome measure at least six times before and six times after the intervention. This method is descriptive and does not provide strong scientific evidence on the effectiveness of your intervention.

When you have no control groups, it is especially important to look carefully at what changes have occurred, in part by increasing the number of data points, to examine trends and provide possible alternative explanations for observed changes in outcome measures. For this you need to develop a conceptual framework which lists the factors affecting your outcome measurement. By means of multivariate analysis (ask a statistician for advice) you can determine which factors (including your intervention) are correlated with the changes observed. You can also assess the effect of interventions qualitatively by interviewing the target audience on why they changed their behaviour - was it because of the interventions or were there other reasons?
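One common form of such an analysis is a segmented (interrupted time-series) regression, in which the outcome is modelled with a pre-existing trend, a level change at the intervention, and a change in trend afterwards. The sketch below assumes Python with the pandas and statsmodels libraries; the monthly figures and variable names are invented for illustration. Ask a statistician before relying on such a model.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical monthly outcome: proportion of diarrhoea cases treated with drugs
    df = pd.DataFrame({
        "month": range(1, 13),              # six data points before and six after
        "after": [0] * 6 + [1] * 6,         # 1 = post-intervention period
        "outcome": [0.45, 0.47, 0.44, 0.46, 0.45, 0.44,
                    0.38, 0.36, 0.35, 0.33, 0.34, 0.32],
    })
    df["months_after"] = (df["month"] - 6).clip(lower=0)  # time since the intervention

    # Pre-existing trend (month), level change (after) and trend change (months_after)
    model = smf.ols("outcome ~ month + after + months_after", data=df).fit()
    print(model.summary())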

The examples from Kenya (box 14 and box 15) give the results of two intervention studies using time-series designs.

Pre-post design

The pre-post design shares the limitations of the time-series design and is the weakest of these designs in scientific terms, although it is commonly used in development settings. A pre-post evaluation is better than nothing, but be very clear about the limits of what it can tell you. If it is to be of any value, you will need much more than numerical data to form a real idea of your intervention’s success or lack of impact. You will need to include detailed qualitative investigation of awareness and knowledge of the message, and of the underlying reasons for behavioural change.

7.3.3 Problems in proving effects of interventions

It may be difficult to decide whether a change observed in outcome measures is caused by the intervention, or to determine the real strength of the impact. This is due to a number of methodological problems:

The communication messages targeted at the intervention communities ‘contaminate’ the control groups. For example, people who live in the intervention community may be related to those living in the control community and spread the key intervention messages. Or a local radio station may decide to do a programme on the innovative community intervention, thus spreading key messages to the control groups. In that case you may observe changes in knowledge and behaviour in both control and intervention communities.


BOX 14. CHANGING HOME TREATMENT OF CHILDHOOD FEVERS BY TRAINING SHOPKEEPERS IN RURAL KENYA

This intervention, aimed at improving the treatment of childhood fevers, took place in a malaria-endemic area of Kenya. Research has shown that the majority of early treatments of childhood fevers are self-medicated with shop-bought, brand name drugs. These treatments are usually incorrect or sub-optimal.

The intervention and its objectives: The aim of the intervention was to train shopkeepers who sell drugs in Kenyan communities in giving advice on the type and quantity of drugs to buy for childhood fevers, and on how to use them. The ultimate objective was to improve the use of antipyretic and antimalarial drugs in childhood fevers. Shopkeepers were trained at a series of three workshops, each lasting three days. The methods used encouraged active participation, practical training and skill development. Shopkeepers were provided with dosage charts for chloroquine and aspirin/paracetamol-based drugs, and sets of rubber stamps depicting the correct way of using chloroquine in children of different ages.

Evaluation methodology: The impact of the training programme was evaluated in two rounds of observational studies and home interviews during peak malaria seasons.

Results: Before the training workshops, 32% of antimalarial sales included an adequate dose of antimalarials. This increased to 83% three months after the intervention and to 90% seven months post-intervention. Before the training, advice was given in only 2% of antimalarial sales; this increased to 94% and 98% in the two subsequent observation rounds post-intervention. The home interviews revealed that, before the training, only 4% of childhood fevers treated with chloroquine were given an adequate dose of chloroquine. This increased to 65% three months after the intervention and 75% seven months later. Appropriate dispensing and safe use of aspirin also increased after the intervention. The researchers evaluated the process and found major changes in the way the shopkeepers sold their drugs, and that the community viewed the changes positively.

See Marsh et al. (1999)

BOX 15. INTEGRATING RESEARCH AND EVALUATION IN KENYA

The Youth Variety Show (YVS) in Kenya, a radio call-in programme for young people on the subject of sexual behaviour, was guided by intensive formative and evaluative research. This included: a national baseline survey of youth and parents (6,300 interviews); focus group discussions with more than 350 adolescents and parents in 5 districts; in-depth interviews with leaders and gatekeepers; a review of legislation and the policy environment; content analysis of newspaper coverage of youth issues; and, once the programme started, content analysis of letters from young people. During the radio broadcasts, a panel of young people and a separate panel of parents listening to the show carried out monitoring. Their critiques were used to improve the content of the next programme.

The intervention and its objectives: The intervention aimed to increase adolescent knowledge on sexual health matters, and encourage adolescents to go to reproductive health clinics for their sexual health needs.

Evaluation methodology: Evaluation was done through a follow-up household survey conducted among adults and adolescents to assess audience exposure to the YVS. This was conducted by a market research firm that carries out omnibus surveys in the commercial sector several times a year; the Johns Hopkins University Center for Communication Programs bought some questions as part of this ongoing survey.

Results: Results showed that 38% of all respondents listened to YVS; among 15-24 year olds, 55% listened. Sentinel site surveys at clinics showed that increasing numbers of adolescents attending the clinics had listened to YVS and that, along with friends, YVS was the most important source of referral. Content analysis of letters and radio listener panel studies corroborated this.

Marsh VM et al. (1999) Changing home treatment of childhood fevers by training shopkeepers in rural Kenya. Tropical Medicine and International Health, 4(5):383-389.

 

The intervention changes over time. Under field conditions problems often occur in the implementation of interventions, and the key messages change over time. For example, in an evaluation of a programme on the appropriate treatment of malaria, the treatment guidelines issued by the ministry of health may change during the intervention. If that is the case, your outcome measures will have to change. This makes it difficult to describe changes in outcome measures, as the outcome measures which you used in the baseline are no longer appropriate.

The communication programme includes a mix of methods, and it is very difficult to measure the effects of each separately as they are actually designed to reinforce each other.

Other agencies start implementing interventions in the research areas. Evaluators do not own the communities they work in, and other actors can unexpectedly decide to conduct interventions in the community. These interventions may diminish the measured impact of your intervention. If an intervention starts in your control communities, it may distort the intervention-control comparison that you intend to make.


When trying to assess effects of your intervention, you should be realistic about what changes to look for in your evaluation. Changes in knowledge and understanding might take place soon after the education input. However, changes in behaviour and health usually take longer to achieve. The Kenyan case shows that over time the effect of the intervention increased. It is a good idea to carry out a short-term evaluation fairly soon after the activity and a follow-up afterwards to look for long-term changes, as was done in the case-study.

Confounding factors need to be considered in your design

When drawing conclusions on the effects of interventions it is important to consider other factors that may be responsible for the changes observed. These are known as confounding factors. The example in Box 16 helps to explain why.

BOX 16. CONFOUNDING FACTORS

Let’s assume that evaluators are assessing the effects of intensive training on the use of pre-packaged oral rehydration solution (ORS) by comparing ORS use in community A, which received the intervention, with use in a control community, community B. Evaluation results reveal that ORS is often unavailable in the government health centre that serves community B (where the intensive education on the preparation of ORS did not take place), while in community A the health workers of the NGO primary health programme ensure a regular supply of ORS to the community.

In the analysis of the drug use patterns, the evaluators find that people in community A use ORS more often in the treatment of pre-school diarrhoea than people in community B after the intervention. Is this the result of the intensive health education or is it related to changes in availability of ORS? A more qualitative evaluation of the intervention process in community A can help in assessing its effects. The evaluators, realizing that ORS availability is a confounding factor, should collect data on ORS availability before, during and after the intervention in both communities. The evaluators can further compare ORS use in the families of women who attended the health education sessions with those who did not attend. If there is a difference in use of ORS between these groups, then clearly the health education intervention makes a difference. Also the evaluators can use qualitative information collected among women who attend the health education sessions. If the messages given are understood by them, and if they themselves indicate that the training in ORS use has encouraged them to use ORS more often in childhood diarrhoea, then we can suggest that the health education played an important role. If the results of the study further indicate that women in community B have less knowledge on the use of ORS, then this conclusion is strengthened.*

* This case deals with health education on the use of ORS packages. If the health education input explains how people can make their own oral rehydration solution with sugar and salt, then the supply of ORS is of course less important as a contextual factor.

It is important to think about possible confounding factors before you conduct the intervention, so that you collect information on these variables in your baseline study. If you fail to do so, it may be very difficult to assess the effects of your intervention.
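If data on a suspected confounder such as ORS availability have been collected, one way to allow for it in the analysis is to include it alongside the intervention in a regression model. The sketch below is only an illustration of that idea, assuming Python with pandas and statsmodels; the tiny dataset and the variable names are invented and far too small for a real analysis.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Invented household records from the intervention and control communities
    df = pd.DataFrame({
        "used_ors":      [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],  # ORS given in the episode
        "intervention":  [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],  # community A = 1, community B = 0
        "ors_available": [1, 1, 1, 0, 1, 0, 1, 0, 0, 1],  # ORS in stock at nearest facility
    })

    # The intervention effect is estimated while holding ORS availability constant
    model = smf.logit("used_ors ~ intervention + ors_available", data=df).fit()
    print(model.summary())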

What data collection methods will you use?

In addressing the main objectives and the specific evaluation questions of the evaluation phase, evaluators can use a combination of research methods, similar to the approach chosen in a rapid assessment exercise. The methods you choose will depend on your evaluation question and design. The following methods are useful:

review of project documents: records of monitoring activities can be very helpful for many of the process questions given above. These documents include workplans, minutes of meetings, workshop reports, notes from discussions with target audiences in pre-testing activities, interview guides, training and other printed materials, etc.

semi-structured interviews with staff and those responsible for managing and conducting the intervention. These interviews give you an insider’s view of the intervention process.

semi-structured interviews and focus group discussions with representatives of the target audiences. These interviews can answer questions such as:

- whether respondents are aware of the intervention
- whether they can recall the messages and information promoted
- whether they like or approve of the messages and activities
- whether they believe the messages
- whether they follow the advice given.


short quantitative surveys on awareness of the information campaign. Such a survey can give quantitative data on the same questions used in the semi-structured interviews (see above).

focused weekly illness recalls to measure changes in drug use patterns. In interventions oriented towards the appropriate treatment of illnesses, quantitative data on drug use patterns can be collected by means of focused illness recalls. This involves a short questionnaire administered to all people in the target audience who, in the previous week, suffered the illness that is the focus of the intervention. An example is the survey done in the case-study from Peru (note that this survey used a 15-day recall period, which is relatively long; a one-week recall period is better).

structured observations can be used to evaluate the conduct of interventions. Observers can check if key messages are covered in training sessions, if the target audience listened attentively, and how many participants attended. Structured observations can also be used to evaluate changes in behaviour, as was done in the shopkeeper intervention discussed above.


7.3.4 Developing key outcome measures

One of the most challenging steps in an evaluation is the development of key outcome measures. These need to be directly related to your communication objectives. You need to do this in the planning stage of your intervention, as that is when you will collect baseline data. This will be explained in more detail in the manual How to improve medicine use by consumers. Try to limit the number of measures to those which capture key aspects of your intervention. They should measure effects which are achievable, and collecting data to measure them should be feasible. Examples are given in the case studies above.

For the Peruvian evaluation a key outcome measure was:

the percentage of childhood diarrhoea cases treated with antidiarrhoeal medicines. This was a key measure, as the intervention aimed at reducing the use of medicines in the treatment of diarrhoea.


In the shopkeeper intervention in Kenya, key measures were:

the percentage of total antimalarial sales which included an adequate dose of antimalarial drugs, and

the percentage of childhood fever cases treated with chloroquine in which a full dose of chloroquine was given.


These measures are directly related to the key communication objectives of the interventions.

When describing the key outcome measures in your evaluation plan, you should describe for each:

• its purpose: why you are measuring this, in relation to the intervention’s communication objectives

• the method that will be used to collect data for it

• the way in which the indicator is calculated.


For example:

The percentage of total antimalarial sales which included an adequate dose of antimalarial drugs


Purpose: One of the main aims of the shopkeeper intervention is to teach shopkeepers to inform clients of the need for an adequate dose of antimalarial drugs. This measure assesses the extent to which clients actually buy such a full dose.

Data-collection method: Data are collected by means of observation in the shops, three months and seven months post-intervention. Observations are made in all 23 shops whose shopkeepers received training; in each shop, 10 drug purchases are observed. The observation forms include information on the type of medicine sold, the patient’s age and the dosage of the medicine given.

Calculation: A percentage is calculated by dividing the total number of purchases in which an adequate dose of antimalarial was given by the total number of antimalarial transactions.
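As a small sketch of this calculation, the snippet below tallies the indicator from a list of observation records. It assumes Python; the record structure and the example entries are hypothetical, not actual study data.

    # Hypothetical observation records, one per observed drug purchase
    records = [
        {"medicine": "chloroquine", "antimalarial": True, "adequate_dose": True},
        {"medicine": "paracetamol", "antimalarial": False, "adequate_dose": True},
        {"medicine": "chloroquine", "antimalarial": True, "adequate_dose": False},
        # ... in the Kenyan study, 10 purchases in each of 23 shops were observed
    ]

    antimalarial_sales = [r for r in records if r["antimalarial"]]
    adequate = sum(1 for r in antimalarial_sales if r["adequate_dose"])

    indicator = 100 * adequate / len(antimalarial_sales)
    print(f"{indicator:.0f}% of antimalarial sales included an adequate dose")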

7.3.5 Enhancing participation of the target audiences

Evaluations are often done by outside experts, as they are considered to be objective and have the necessary expertise to assess the effect of an intervention. An argument for conducting the evaluation in a participatory manner is that local staff and beneficiaries of programmes are more likely to increase their commitment to the programme’s success if they are involved in the evaluation process. Moreover, they have significant knowledge about programme implementation, relevant views on the strengths and weaknesses of the interventions, and insights on the contextual factors that affect the interventions. By involving local staff and beneficiaries in the evaluation process the evaluation is, therefore, likely to be more appropriate and the results more valid.

However, in developing the plan for the evaluation phase, the evaluators should realize that not all aspects can be conducted in a participatory fashion. It is best to involve the local actors in evaluating interventions that they themselves are actively involved in. For example, mothers can be asked to participate in the evaluation of health education sessions that they regularly attend; and community health workers can be asked to participate in the evaluation of the training that they receive.
