EFFECTIVE AND COST-EFFECTIVE MEASURES TO REDUCE ALCOHOL MISUSE IN SCOTLAND: AN UPDATE TO THE LITERATURE REVIEW
SECTION TWO: METHODS
This section concerns the methods used in updating the literature review. It covers:
issues concerning the search strategy and quality assessment;
some of the relevant statistical methods; and
the interpretation of cost-effectiveness studies.
2.1 As in the original review, this update is based on reviews of effectiveness and individual economic evaluation studies. This reflects the relative size of the two types of literature.
2.2 The search strategies and databases used replicated the searches carried out for the original review (see Ludbrook et al 2002, pp11-12 and pp15-16). Databases were searched from 2000 onwards in order to overlap the period covered by the previous review; this would identify any references that might have been missed through late entry into the databases. Studies were included if they were effectiveness reviews of specific interventions or economic evaluations of interventions, and were published in English. In addition, some studies that did not meet the inclusion criteria were retained if they related to areas of interest where no reviews had been identified. The time-scale of the study did not permit further hand searching or comprehensive follow-up of references from the retrieved literature.
2.3 The quality of the effectiveness reviews and the economic evaluation studies was assessed using the same criteria as the original review (see Ludbrook et al 2002, p12 and p16).
Statistical methods used within reviews
2.4 The reviews in this report include qualitative summaries of the literature, descriptions of reported results and statistical summaries of the findings using meta-analysis. Some studies report results in terms of effect size (measured as the difference between the intervention and control group means, divided by the pooled standard deviation). This is a valid method for determining whether an intervention has had a statistically significant impact, but it is not always possible to provide a meaningful interpretation of the effect size without reference to the original study data. Where reviews have carried out a quantitative analysis of such studies, the pooled results are reported as the weighted mean effect size, in which each effect size is weighted by the inverse of its variance. This gives greater weight to larger samples with more precise results.
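The inverse-variance weighting described above can be illustrated with a short calculation. The effect sizes and variances below are purely illustrative and are not taken from any of the reviewed studies:

```python
# Illustrative pooling of standardised effect sizes from three hypothetical
# trials, weighting each by the inverse of its variance.

def weighted_mean_effect_size(effects, variances):
    """Pool effect sizes, giving each a weight of 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled

# Invented data: the trial with the smallest variance (0.02) receives
# the greatest weight and so pulls the pooled estimate towards 0.20.
effects = [0.30, 0.45, 0.20]
variances = [0.04, 0.10, 0.02]
print(round(weighted_mean_effect_size(effects, variances), 3))
```

The precise estimate (variance 0.02, weight 50) dominates the pooled result, which is the behaviour the text describes: larger, more precise samples count for more.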
2.5 Study results can be more easily understood when they are reported in terms of the change in the outcome variable of interest; for example, the reduction in units of alcohol consumed or the increase in abstinence rates. Another method of reporting results is the odds ratio, which compares the odds of observing an outcome in the intervention group with the odds in the comparison group. An odds ratio of 1 reflects no difference between the groups; an odds ratio of 2 indicates that the odds of the outcome are twice as high in the intervention group.
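The odds ratio can be illustrated with hypothetical counts of participants with and without the outcome in each group (all figures are invented for the example):

```python
# Illustrative odds ratio: the odds of the outcome (e.g. abstinence) in the
# intervention group divided by the odds in the comparison group.

def odds_ratio(a, b, c, d):
    """a/b: outcome yes/no in intervention; c/d: outcome yes/no in comparison."""
    return (a / b) / (c / d)

# Invented counts: 30 of 100 abstinent after the intervention,
# 15 of 100 in the comparison group.
print(odds_ratio(30, 70, 15, 85))
```

With identical counts in both groups the ratio is exactly 1, matching the "no difference" interpretation in the text.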
2.6 The statistical significance of the findings refers to the possibility that differences between the intervention and comparison groups have been observed by chance. A result is referred to as statistically significant when the probability of it occurring by chance falls below some threshold, usually 5%. Alternatively, this information can be presented as a confidence interval (CI), usually at the 95% level. This gives a range around the estimated value within which the true value is expected to lie; there is only a 5% chance that the true value lies outside a 95% CI.
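A 95% CI can be sketched using the usual normal approximation, in which the interval extends 1.96 standard errors either side of the estimate (the estimate and standard error below are illustrative):

```python
# Illustrative 95% confidence interval around an estimated mean difference,
# using the normal approximation (1.96 standard errors either side).

def confidence_interval_95(estimate, standard_error):
    half_width = 1.96 * standard_error
    return estimate - half_width, estimate + half_width

# Invented figures: an estimated reduction of 2.5 units/week, SE of 1.0.
lower, upper = confidence_interval_95(-2.5, 1.0)
print(lower, upper)  # the interval excludes 0, so the 5% threshold is met
```

Because the whole interval lies below zero, the corresponding result would be reported as statistically significant at the 5% level; an interval straddling zero would not.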
2.7 Economic evaluation builds upon effectiveness information to assess both the costs of delivering the different policies or interventions and a wide range of their consequences. Local conditions can influence the value of costs and consequences, especially between countries, and this should be taken into account when considering the relevance of findings to Scotland.
2.8 The application of economic evaluation techniques involves making a number of assumptions and, generally, individual studies undertake a range of sensitivity analyses to test the robustness of their findings to changes in these assumptions. Synthesising evidence on cost-effectiveness is not as straightforward as it is for effectiveness reviews, nor are the techniques for doing so well developed. A number of checklists do exist, however, for assessing the quality of individual studies.
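A one-way sensitivity analysis of the kind described can be sketched as follows: a simple cost-per-successful-outcome ratio is recomputed while a single uncertain assumption (here, the success rate) is varied. All costs and rates are hypothetical:

```python
# One-way sensitivity analysis sketch: vary one uncertain assumption
# (the success rate) and observe the effect on the cost-effectiveness ratio.
# All figures are hypothetical.

def cost_per_success(programme_cost, participants, success_rate):
    successes = participants * success_rate
    return programme_cost / successes

base = cost_per_success(50000, 200, 0.25)  # base-case assumption
low = cost_per_success(50000, 200, 0.15)   # pessimistic assumption
high = cost_per_success(50000, 200, 0.35)  # optimistic assumption
print(base, low, high)
```

If the ranking of interventions is unchanged across the plausible range of an assumption, the finding is described as robust to that assumption; if it changes, the assumption is a key source of uncertainty.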
2.9 As with the previous review, there are very few good quality economic evaluations. Many studies have omitted major costs or consequences. The evidence that can be drawn from such studies is, therefore, of a very different quality from that which can be taken from a well-conducted systematic review. In general, the lessons drawn illustrate some of the issues that will impact on cost-effectiveness rather than lead to any ranking between interventions.
2.10 The range of costs and consequences relevant to the assessment of interventions to reduce alcohol misuse and the different forms of economic evaluation (cost-offset studies or cost analysis; cost-effectiveness analysis; cost-utility analysis; cost-benefit analysis) were set out in Ludbrook et al 2002 pp18-19.
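Of the forms of evaluation listed, cost-effectiveness analysis typically reports an incremental ratio: the extra cost of one intervention over another, divided by the extra effect obtained. A minimal sketch, with invented figures:

```python
# Illustrative incremental cost-effectiveness ratio (ICER): the additional
# cost of a new intervention per additional unit of effect, relative to a
# comparator. All figures are hypothetical.

def icer(cost_new, cost_old, effect_new, effect_old):
    return (cost_new - cost_old) / (effect_new - effect_old)

# Invented figures: the new intervention costs 400 more per person and
# raises the success probability by 0.1.
print(icer(1200, 800, 0.6, 0.5))
```

Cost-utility analysis follows the same arithmetic with effects measured in quality-adjusted life years, while cost-benefit analysis values the consequences in monetary terms instead.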