Drugs and Money - Prices, Affordability and Cost Containment
(2003; 158 pages)
Table of Contents
Introduction
Part I: Problems and approaches to a solution
Chapter 1: Scope of the problem
Chapter 2: Data needed for developing and monitoring policies
Chapter 3: Policy options for cost containment of pharmaceuticals
Chapter 4: Methods for monitoring and evaluating processes and outcomes
Chapter 5: Making use of economic evaluation
Part II: Selected experiences with policy options
List of Contributors
Back cover
 

Chapter 5: Making use of economic evaluation

David McDaid, Elias Mossialos and Monique F. Mrazek

1. Introduction

Although awareness of pharmaco-economics has increased greatly, its practical use in decision making remains, as we saw in Chapter 4, at best opaque. Some of these issues will be considered in Chapter 6 in the context of the Australian experience of assessing subsidy and reimbursement. The present chapter focuses on identifying barriers to the use of economic evidence in the decision making process, and on potential solutions for increasing that use.

Increasingly, the pharmaceutical (and device) industries are using economic evidence as part of their submissions to the authorities for determining the reimbursement price of a pharmaceutical or its inclusion in a drug formulary. In part this has been a selective marketing strategy to promote the added value of a specific intervention, but more recently several countries, including Australia, Canada, England, Finland, The Netherlands and Portugal, have begun to introduce systems which formally link cost effectiveness to reimbursement decisions for new pharmaceuticals and, in some cases, other clinical technologies. Systems of this kind are known in the pharmacoeconomics literature as fourth hurdles or cost-effectiveness hurdles, because in effect they require pharmaceutical firms to demonstrate cost effectiveness before launch, in addition to quality, safety and efficacy, the first three hurdles ordinarily imposed by licensing authorities. Furthermore, health technology assessment agencies have been established in most developed countries to provide further information on the clinical effectiveness and, in many (but not all) instances, on the economic impact of a technology [18]. Table 1 provides an overview of the situations in which economic evaluation can be expected to be helpful to decision makers.

Welcome though these developments are, evidence of the actual systematic impact of economic evaluation data on decision making remains limited [8,24]. More recently, the EUROMET study examined the use of economic evaluation in Europe and found that few decision makers made use of economic evidence [13]. A similar lack of evidence was reported in a recent European study of evaluations of health care interventions, although some ad hoc evidence of impact was observed [18]. A number of barriers to the increased use of economic evidence by decision makers and practitioners are outlined in Section 2, together with some possible ways of promoting greater use of economic knowledge and tools in this field.

Table 1
Economic evaluation as an aid to decision making

- Development of treatment guidelines
- Decision making in health care organisations
- Approval decisions
- Reimbursement decisions
- Pricing decisions

 

Adapted from Johannesson [14].


A second important reason for the apparent lack of impact of economic evidence on the decision making process in health lies in the methodological difficulty of measuring impact. This is beyond the scope of this chapter, but it should be remembered that even where economic evaluations do influence a decision making process, this is notoriously difficult to confirm; one consequence of using economic evidence may, for instance, be to do nothing, i.e. not to change current policies or practice. Further research into impact assessment is required.

2. Barriers to the use of economic evaluation data

2.1. Inadequate links between knowledge producers and decision makers

Links between the various bodies that may produce economic knowledge and those involved in decision making may be weak. An additional impediment seems to be a fragmented decision making process. Guidelines developed at the macro level may fail to play a role in decision-making at the lower levels for many reasons including inadequate dissemination, lack of professional support, lack of financial incentives or failure of political will [1]. Regardless of the management hierarchy in any country’s health care system, using economic knowledge better in decision making demands a multi-dimensional approach tackling a number of different obstacles.

Increasing the sense of ownership that decision makers have over knowledge has been shown to increase the use of such knowledge in the decision making process [15]. Elsinga and Rutten [10] demonstrated that close co-operation between researchers and policy makers in The Netherlands had promoted the use of economic appraisal in health care decision making at both the micro and macro levels. Involving decision makers in a study means that it is more likely to be relevant to their needs. A decision maker who has commissioned a pharmaco-economic study or served on its advisory committee is less likely to ignore its conclusions than if he has merely experienced it from the outside. In addition, involving these decision makers may help to ensure that the results are more widely disseminated.

Ideally all stakeholders should be involved from the outset, i.e. not just in the process of conducting the study, but also in framing the study. Researchers can be guilty of framing studies to answer research questions of limited policy relevance, whereas decision makers in commissioning research may seek answers to questions which are unlikely to be delivered within the time frame of the study; there is therefore every reason for them to work together from the planning phase onwards. One approach to that, which also builds a joint sense of ownership in the work, is to be seen in the Policy Synthesis Programme developed by the Canadian Health Services Research Foundation [4]. This programme brings researchers and policy makers together from the outset, to develop a common approach to a research question. One way in which this process seeks to overcome barriers is including both researchers and policy makers in mixed groups from the outset, rather than have both groups naturally clustering together and adopting more entrenched positions.

2.2. Lack of receptor capacity

Evidence from economic evaluation is often presented to decision makers and practitioners in a form which may be impenetrable to anyone without a background in health economic appraisal. Reports may be very long, highly technical, and may fail to set out clearly the policy implications of a technology or procedure. Decision makers have neither the time nor, in many instances, the technical expertise to digest such reports. This barrier may be overcome in part by producing short reports, e.g. with a single page for the main message, three pages for the executive summary and 25 pages for the report itself [3]. One should also try to create a cadre of knowledge brokers who would act as a conduit between the worlds of research and policy making. Such knowledge brokers would possess skills in economic evaluation and communication, and would also be comfortable in a policy making environment. Their job would be to interpret economic knowledge and present it in an appropriate form to policy makers and practitioners, so facilitating the translation of evidence into practice. The knowledge broking process works in both directions, and it places much emphasis on the reinforcement of messages to counter the non-linear nature of knowledge transfer and assimilation. It must be remembered that economic evidence is only one of the various forms of knowledge that reach policy makers, who will also be confronted by myths, anecdotes and truths through other media; here again, knowledge brokers can filter such information and help to put it into perspective.

This is not mere theory. As well as being used by the CHSRF to help develop receptor capacity in Canada, the Swedish Council on Technology Assessment in Health Care (SBU) [26] has for many years employed such knowledge brokers as roving ambassadors to take the Council's message - and other information - to practitioners throughout the country. These knowledge brokers can also help play a role in tackling some of the myths commonly held in the medical community about health economics, notably that it is a tool for denying individuals access to effective treatment interventions. Providing medical undergraduates with some teaching in health economics has also been advanced as a means of creating more comprehension of its relevance [17], and in the UK a leading charity, the PPP Healthcare Trust, is now funding a chair in health economics with the express remit that health economics training be provided within a medical school.

2.3. Limited acceptance of external data

Within a given country or region, there may be a shortage of researchers or funds, making it necessary to rely on economic data produced elsewhere. Such data naturally relate to the place where they were produced, and they may or may not be applicable in an area where the situation differs, for example as regards clinical practice, local health service delivery or relative prices. Judgements about whether or not they are likely to hold will be based on what are thought to be underlying similarities or differences in both biological factors and clinical treatment patterns. More often than not, it will be impossible to generalize from them, because of the marked differences between countries in their health care systems and treatment costs. Economic evaluations therefore require adjustment to take account of local treatment costs and practices; this is more valid than using multinational average cost data. To some extent economic cost data can indeed be adjusted, provided that the economic analysis has clearly distinguished between treatment costs and resources used. However this requires access to accurate local costing data, which itself may be problematical. In a recent analysis, the transferability to the French health system of foreign economic evaluations of adjuvant therapy for breast cancer was examined. None of the studies identified in the systematic review could be transferred, as costing data were not reported in a transparent manner [25]. As a result the authors recommended the international standardisation of data requirements in published economic evaluations.

Adjusting for treatment patterns is perhaps more complex. For example, a new intervention may have less clinical impact in a country where the condition is already treated intensively and greater impact in a country where no alternative therapies are available. One way of dealing with uncertainty over the effectiveness of a treatment, particularly if research capacity and resources are limited, is to use a meta-analytical approach. Meta-analysis has been used in an attempt to overcome the problems in generalizing from a single Randomised Clinical Trial (RCT), especially because of a small sample size or other features of the protocol. Statistically pooling the results of independent RCTs through meta-analysis can provide a single estimate of effect for a treatment. An effective meta-analysis demands a strictly systematic review of the literature, ideally avoiding any language biases which could affect conclusions.
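The pooling step described above can be sketched in code. The following illustration is not taken from the chapter: it shows an inverse-variance fixed-effect meta-analysis, combining invented treatment-effect estimates from three hypothetical RCTs into a single pooled estimate with a confidence interval.

```python
# Illustrative sketch only: inverse-variance fixed-effect meta-analysis.
# Effect sizes and standard errors below are invented numbers.
import math

def pool_fixed_effect(effects, std_errors):
    """Return the pooled effect and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]   # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials reporting, e.g., log odds ratios
effects = [-0.30, -0.10, -0.25]
ses = [0.15, 0.20, 0.10]

pooled, se = pool_fixed_effect(effects, ses)
low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
```

Note that the larger, more precise trials (smaller standard errors) dominate the pooled estimate, which is the statistical counterpart of the sample-size concern raised in the text.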

2.4. Methodological barriers

Most economic investigation in this area takes the form of cost effectiveness analysis, in which the costs and clinical outcomes of an intervention are compared. The incremental costs and effectiveness of one intervention compared with those of the next best alternative can show decision makers where scarce resources can be allocated most efficiently between different health care interventions. Decision makers of course also need to consider a range of other factors, including equity, budget impact and political preferences. However, even if efficiency alone were considered, it may not be reliable to rank therapies according to their incremental cost-effectiveness ratios. The dimensions of cost effectiveness analysis are sensitive to change, making comparisons of the ratio between models difficult. Where the numerator and denominator in the ratio cannot be assumed to be independent of one another, testing for statistical differences is difficult and requires methods such as bootstrapping. When two treatments have equal medical costs, problems of interpretation arise, particularly when outcomes show only marginal differences or a mix of positive and negative benefits.

Another problem is that most economic evaluations are based on efficacy studies carried out prior to marketing rather than on real post-marketing work in the field. An intervention may be shown to be cost effective in a trial, but if actual practice differs from the assumptions made, the product may not achieve the anticipated levels of cost effectiveness in practice. (It could of course be even more cost effective.) When an economic evaluation deals with a particular population, it may fail to provide a breakdown for each sub-population: even where the cost-effectiveness ratio is poor for the total population, it may nevertheless prove favourable for a particular sub-group. Patient preferences for different interventions may also need to be addressed. Inconsistency in the inclusion or exclusion of indirect costs (such as those for informal caregivers) and societal costs will also affect these ratios.
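To make the bootstrapping point concrete: because the cost and effect differences that form an incremental cost-effectiveness ratio are estimated from the same patients, they are not independent, so a non-parametric bootstrap resamples patient-level (cost, effect) pairs together rather than separately. The sketch below uses entirely invented data; it is an illustration of the general technique, not a method prescribed by the chapter.

```python
# Hypothetical sketch: bootstrap uncertainty for an incremental
# cost-effectiveness ratio (ICER). All patient-level data are invented.
import random

def icer(costs_new, eff_new, costs_old, eff_old):
    """Incremental cost per extra unit of effect."""
    d_cost = sum(costs_new) / len(costs_new) - sum(costs_old) / len(costs_old)
    d_eff = sum(eff_new) / len(eff_new) - sum(eff_old) / len(eff_old)
    return d_cost / d_eff

def bootstrap_icers(new, old, n_reps=1000, seed=1):
    """Resample (cost, effect) pairs with replacement and recompute the ICER."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_reps):
        ns = [rng.choice(new) for _ in new]   # keep each cost/effect pair together
        os = [rng.choice(old) for _ in old]
        reps.append(icer([c for c, _ in ns], [e for _, e in ns],
                         [c for c, _ in os], [e for _, e in os]))
    return sorted(reps)

# Invented patient-level (cost, effect) data for two treatments
new_treatment = [(12000, 1.9), (15000, 2.1), (13000, 2.0), (14000, 2.2)]
old_treatment = [(8000, 1.6), (9000, 1.7), (8500, 1.5), (9500, 1.8)]

point = icer([c for c, _ in new_treatment], [e for _, e in new_treatment],
             [c for c, _ in old_treatment], [e for _, e in old_treatment])
reps = bootstrap_icers(new_treatment, old_treatment)
print("Point ICER:", point)
print("Bootstrap 95% interval:", reps[24], "to", reps[974])
```

The ordinary confidence-interval formulas fail here precisely because the ratio's numerator and denominator are correlated, which is why resampling whole patients is the standard workaround.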

Cost effectiveness analysis (CEA) is therefore readily accepted as logical by clinicians, but it remains a tool of limited usefulness for policy decision making, particularly since only interventions which share common clinical outcome measures and similar study designs can meaningfully be compared. Cost utility analysis (CUA) overcomes this problem by expressing outcomes in a single measure, quality of life, using one of a number of disease specific or generic instruments such as the EuroQOL [12] or the Health Utilities Index [27]. All the same, it remains difficult to transfer quality of life estimates from one specific context or population to another. Cost benefit analysis (CBA) measures both outcomes and costs in monetary terms, allowing an intervention to be compared not only with another health care intervention but also with any other publicly funded project. Theoretically this approach considers all costs and benefits to society as a whole, and is the most appropriate for resource allocation. However, the methods used in practice to elicit monetary outcomes, such as willingness to pay or willingness to accept, remain controversial. The validity of the resulting estimates has been questioned, and clinicians are in any case particularly reluctant to accept evaluations in which health outcomes are expressed in financial terms. For such reasons, CBA is not currently recommended in various national guidelines for economic appraisal (see next section).
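The single outcome measure used in cost utility analysis is commonly the quality-adjusted life year (QALY): time spent in each health state is weighted by a utility score between 0 (dead) and 1 (full health), as elicited by instruments such as those mentioned above. The minimal sketch below uses invented utility weights and costs purely to illustrate the arithmetic.

```python
# Illustrative sketch only: computing QALYs and cost per QALY gained.
# Utility weights, durations and costs are invented for illustration.

def qalys(states):
    """states: list of (years_in_state, utility_weight) pairs."""
    return sum(years * utility for years, utility in states)

# Hypothetical profiles: treatment extends survival and improves quality of life
with_treatment = [(2.0, 0.85), (3.0, 0.70)]     # 2 yrs at utility 0.85, then 3 yrs at 0.70
without_treatment = [(1.5, 0.75), (2.0, 0.55)]  # 1.5 yrs at 0.75, then 2 yrs at 0.55

gain = qalys(with_treatment) - qalys(without_treatment)
extra_cost = 24000  # invented incremental cost of treatment
print(f"QALY gain: {gain:.3f}")
print(f"Cost per QALY gained: {extra_cost / gain:,.0f}")
```

Because both survival time and quality of life collapse into one number, interventions with quite different clinical endpoints become comparable, which is exactly the advantage CUA has over CEA noted in the text.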

2.5. Limitations of economic guidelines

The variety of economic evaluation techniques available, and the inconsistency in the collection and use of both cost and outcome data, can readily confuse both decision makers and practitioners, especially when they are presented with apparently contradictory conclusions from different studies of the same intervention. Standardising the methods of economic evaluation through well accepted guidelines is one way of ensuring greater harmonisation in economic appraisal, making it easier to determine whether differences between studies are due to real factors or to methodological differences. This is particularly important where studies are carried out in different settings or in different countries. Standardisation may also help non-specialist decision makers to judge the quality and correctness of a published study and to draw valid implications for their own environment.

The evidence on the effectiveness of guidelines is, however, at best weak, and international harmonisation of economic guidelines has some way to go. In recent years there has been an explosion in the number of guidelines available, but differences remain in the approaches they recommend. This is evident from Table 2, which highlights key recommendations from Australia, Canada and England.

The guidelines differ particularly in their choice of analytical technique and outcome measures. Most strikingly, cost benefit analysis is a preferred technique in Canada, whereas it is explicitly excluded in England and positively discouraged in Australia.

Regardless of the differences between guidelines, their development has been an important step in reducing the potential for bias in industry-sponsored trials [9]. The credibility of groups performing economic evaluation may sometimes be undermined by a lack of transparency in the techniques used, or by their organisational structures, i.e. they may be seen as too close to either government or industry [6]. There is often little or no accountability or quality control for economic evaluators other than academic peer review. It has been argued that introducing some form of quality auditing may help to improve the credibility and consistency of these evaluations [16,22]. Monitoring would include an assessment of methodological competence and of the appropriate choice of data. It would consider the choice of assumptions, data and analytical techniques, since these can bias and pervert resource allocation decisions. Simply relying on passive dissemination and uptake of guidelines, without policing, may result in poor methodological quality and bias.

2.6. Timing of economic evaluations

Another significant barrier to the use of economic evaluations by decision makers has been the difficulty of gaining access to relevant studies in a timely manner. When decisions about the introduction of a technology are being made, it is better to provide timely data on costs and benefits than to disseminate them after the event [8]. This can be helped by international co-operation. In an effort to help meet the needs of decision makers, the Cochrane Collaboration was established as an international network committed to preparing, maintaining and disseminating systematic reviews of research on the effects of health care. The reviews are available electronically in the Cochrane Library. A useful website for identifying other such resources is Netting the Evidence: A ScHARR Introduction to Evidence Based Practice on the Internet, available at http://www.shef.ac.uk/~scharr/ir/netting/. Despite these initiatives to prepare, maintain and disseminate systematic reviews, many trials completed by pharmaceutical companies are not published and are therefore not included in systematic reviews of the evidence.

As well as timely data collection, the timely dissemination of economic evidence is also crucial. Results of economic appraisals should be disseminated sufficiently early to be capable of influencing decisions. The results of a study are likely to have greater effect if dissemination is attuned to the budgetary planning cycle. In addition, those producing evaluations should be aware of current policy issues with relevance to a specific technology [11].

Table 2

Comparative treatment of key methodological issues in national guidelines: Australia (Commonwealth of Australia 1999) [5], Canada (CCOHTA 1997) [2] and England (NICE 2001) [20]

Viewpoint of analysis
- Australia: Societal; show impact on the drugs budget
- Canada: Societal; disaggregate by other relevant viewpoints. Can also undertake financial impact analysis
- England: NHS and Personal Social Services. Also undertake financial impact analysis

Comparator
- Australia: Most frequently used alternative
- Canada: Existing best practice and minimum practice
- England: Most frequently used alternative

Source of medical evidence
- Australia: Effectiveness rather than efficacy
- Canada: Effectiveness rather than efficacy
- England: Any source, but must be justified

Analytic technique
- Australia: CEA encouraged, CBA discouraged
- Canada: CUA or CBA preferred, although CEA acceptable
- England: CEA or CUA only

Outcomes
- Australia: Can be intermediate or long-term
- Canada: For CUA, include one instrument from each of three types: disease specific, generic, or preference based measurement. For CBA, must use a contingent valuation method, e.g. willingness to pay
- England: Long-term clinical effectiveness measured in mortality and morbidity

Incremental analysis
- Australia: Required
- Canada: Required
- England: Required

Allowing for uncertainty
- Australia: Sensitivity analysis required
- Canada: Statistical analysis if applicable, multivariate analysis encouraged
- England: Sensitivity analysis required

Discounting
- Australia: 5% per annum for all costs and outcomes
- Canada: 5% per annum for all costs and outcomes
- England: 6% per annum for all costs and 1.5% per annum for benefits

Presentation of results
- Australia: Structured format
- Canada: Report results as disaggregated as well as aggregated data
- England: Use International Conference on Harmonisation guidelines; include risk estimates and sub-group analysis where appropriate; report costs and resource use separately

Equity
- Australia: Not addressed
- Canada: No equity weights should be used, but results should transparently indicate any equity issues
- England: Provide information on the clinical and social status of patients most likely to benefit
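The discounting rules in Table 2 have a practical consequence that a short sketch can make concrete. With invented cost and benefit streams, the same intervention yields different cost-effectiveness ratios under a uniform 5% rate (the Australian and Canadian rule shown) and under differential 6%/1.5% rates (the English rule shown), because benefits arriving in later years are discounted less under the differential scheme. The cash flows below are illustrative assumptions only.

```python
# Illustrative sketch: effect of the discount rates listed in Table 2 on a
# cost-effectiveness ratio. Cost and benefit streams are invented.

def present_value(stream, rate):
    """stream[t] is the amount occurring in year t (year 0 undiscounted)."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

costs = [10000, 2000, 2000, 2000, 2000]   # invented: large upfront cost
benefits = [0.0, 0.3, 0.3, 0.3, 0.3]      # invented QALYs, all in later years

# Uniform 5% for costs and outcomes (as in the Australian and Canadian rows)
icer_5 = present_value(costs, 0.05) / present_value(benefits, 0.05)
# Differential rates (as in the England row): 6% for costs, 1.5% for benefits
icer_diff = present_value(costs, 0.06) / present_value(benefits, 0.015)

print(f"ICER at 5%/5%:   {icer_5:,.0f} per QALY")
print(f"ICER at 6%/1.5%: {icer_diff:,.0f} per QALY")
```

For an intervention whose benefits come late, the differential scheme produces the lower (more favourable) ratio, which is one reason the choice of discount rates in national guidelines is not a mere technicality.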

2.7. Providing incentives for use of economic evaluation

As well as tackling some of the barriers to the use of economic evidence, positive incentives and mechanisms can be used. Most notable at the policy making level has been the increased use of explicit fourth hurdles in a number of countries, which formally link reimbursement of and access to health technologies to cost effectiveness. Even where guidance is theoretically voluntary but strongly encouraged, as with NICE in England [21], monitoring bodies can be used to encourage implementation. The Commission for Health Improvement now assesses whether NICE guidance is being implemented locally; this body indeed has the power to take over the administration of local bodies whose performance is unacceptable.

Financial incentives may also be used to increase the uptake of evidence at the local level; rates of influenza vaccination in Denmark were for instance increased following the introduction of targeted payments to patients, whilst in the UK cervical cytology screening became more widely employed following the introduction of additional performance related payments based on the level of uptake achieved [19].

3. Conclusion

Facilitating the use of economic evaluation knowledge in the policy making process is complex, and as yet there is little evidence to demonstrate that economic evaluation is used systematically in any decision making arena. There is a need to address the imbalance between research and development, but also to concentrate more resources on the active dissemination and implementation of knowledge.

There are enormous interests at stake: pharmaceutical companies are increasingly expected to fund analyses of cost effectiveness as part of the reimbursement process, yet in many instances decision makers do not have the skills, or access to researchers with the skills, to assess this economic evidence objectively. This gap between knowledge producers and knowledge consumers might be bridged through the development of local receptor capacity, e.g. knowledge broking. The ability to assess evidence objectively and to inform the policy making process is of particular relevance given the inconsistencies in the use of economic evaluation within and between countries. Such inconsistencies extend to the guidelines themselves, with several well known guidelines taking very different positions on the roles of cost effectiveness and cost benefit analysis. One possible vehicle for overcoming these difficulties may be the establishment of an international clearing house for economic evaluation, which would identify methodological differences and help to facilitate the transfer of such knowledge between different settings.

These methodological issues are far from resolved; the debate and disagreement may indeed become even more intense [23]. Furthermore, there is a need to invest more resources in research and the collection of data on impact assessment, since it may well be that economic evaluation has already had a significant impact on decision making. Again, increasing the involvement of decision makers in the knowledge production process may increase their willingness to provide data on how decisions are made in practice. Without evidence that investment in such evaluation does help to facilitate change, there remains a danger that enthusiasm for conducting evaluations may wane, increasing inequities and inefficiencies in the allocation of health care resources.

References

[1] R. Busse, J.M. Graf von der Schulenburg and M. Drummond, Evaluation of cost effectiveness in primary health care (German), Zeitschrift für Ärztliche Fortbildung und Qualitatssicherung 91 (1997), 447-455.

[2] Canadian Co-ordinating Office of Health Technology Assessment, Guidelines for the Economic Evaluation of Pharmaceuticals, 2nd edn, Canada, Ottawa, 1997 November.

[3] Canadian Health Services Research Foundation, Reader-friendly writing - 1:3:25, Communication Notes, 2001.

[4] Canadian Health Services Research Foundation, Progress through partnerships, Annual Report, Ottawa, 2000.

[5] Commonwealth of Australia, Department of Health and Aged Care, Guidelines for the pharmaceutical industry on preparation of submissions to the pharmaceutical benefits advisory committee: Including major submissions involving economic analyses, Revised 1999. Available from http://www.health.gov.au/pbs/pubs/pharmpac/gusubpac.htm.

[6] R. Cookson, D. McDaid and A. Maynard, Wrong SIGN, NICE Mess, British Medical Journal 323 (2001), 743-745.

[7] M.F. Drummond and L. Davies, Economic analysis alongside clinical trials: Revisiting the methodological issues, International Journal of Technology Assessment in Health Care 7(4) (1991), 561-573.

[8] M.F. Drummond, Evaluation of health technology: Economic issues for health policy and policy issues for economic appraisal, Social Science and Medicine 38 (1994), 1593-1600.

[9] M.F. Drummond, A reappraisal of economic evaluations of pharmaceuticals, PharmacoEconomics 14(1) (1998), 1-9.

[10] E. Elsinga and F.F.H. Rutten, Economic evaluation in Support of National Health Policy: The case of the Netherlands, Social Science and Medicine 45 (1997), 605-620.

[11] EUR-ASSESS group, EUR-ASSESS project subgroup report on dissemination and impact, International Journal of Technology Assessment in Health Care 13(2) (1997), 220-286.

[12] EUROQOL Group, Euro-QOL: A new facility for the measurement of health related quality of life, Health Policy 16 (1990), 199-208.

[13] C. Hoffmann and J.M. Graf von der Schulenberg, The influence of economic evaluation studies on decision making. A European survey, Health Policy 52 (2000), 179-192.

[14] M. Johannesson, Economic evaluation of drugs and its potential uses in policy making, PharmacoEconomics 8(3) (1995), 190-198.

[15] J. Lomas, Using linkage and exchange to move research into policy at a Canadian Foundation, Health Affairs 19(3) (2000), 263-240.

[16] A. Maynard, Economic evaluation techniques in healthcare: Reinventing the wheel?, PharmacoEconomics 11 (1997), 115-118.

[17] A. Maynard and T.A. Sheldon, Health economics: Has it fulfilled its potential?, in: Non-random Reflections on Health Services Research: On the 25th Anniversary of Archie Cochrane’s, A. Maynard and I. Chalmers, eds, The Nuffield Provincial Hospitals Trust, London, 1997, pp. 149-165.

[18] D. McDaid and R. Cookson, Evaluation activity in Europe, in: Analysis of the Scientific and Technical Evaluation of Health Care Interventions in the European Union, R. Cookson, A. Maynard, D. McDaid, F. Sassi and T. Sheldon, eds, Report to European Commission July 2000.

[19] D. McDaid and A. Maynard, Translating evidence into practice. The case of influenza vaccination, European Journal of Public Health 11(4) (2001), 453-455.

[20] National Institute for Clinical Excellence (NICE). Technical guidance for manufacturers and sponsors on making a submission to a technology appraisal, NICE, London, 2001.

[21] M. Rawlins, In pursuit of quality. The National Institute for Clinical Excellence, The Lancet 353(9158) (1999), 1079-1082.

[22] U.E. Reinhardt, Making economic evaluations respectable, Social Science and Medicine 45(4) (1997), 555-562.

[23] D. Rennie and H.S. Luft, Pharmacoeconomic analyses. Making them transparent, making them credible, JAMA 283 (2000), 2158-2160.

[24] F.A. Sloan, K. Whetten-Goldstein and A. Wilson, Hospital pharmacy decisions, cost containment and the use of cost-effectiveness analysis, Social Science and Medicine 45 (1997), 523-533.

[25] H.-M. Spath, M.-O. Carrere, B. Fervers and T. Philip, Analysis of the eligibility of published economic evaluations for transfer to a given health care system - Methodological approach and application to the French health care system, Health Policy 49(3) (1999), 161-177.

[26] Swedish Council on Technology Assessment in Health Care, http://www.sbu.se, accessed June 2001.

[27] G.W. Torrance, W. Furlong, D. Feeny and M. Boyle, Multi-attribute preference functions: Health utilities index, PharmacoEconomics 7(6) (1995), 503-520.

 
