Health Technology Assessment (HTA)
-
Health Technol Assess · Jan 2001
Review · Statistical assessment of the learning curves of health technologies.
(1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) To identify systematically 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques on data sets from studies of varying designs used to assess health technologies in which learning curve effects are known to exist.

METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): To be included, a study had to contain a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique.

METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): To be included, a study had to contain a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search.

⋯ There was a hierarchy of methods for both the identification and the measurement of learning, and the more sophisticated methods for each have had little, if any, use in health technology assessment. This demonstrated the value of considering fields outside clinical research when addressing methodological issues in health technology assessment.

CONCLUSIONS - TESTING OF STATISTICAL METHODS: The portfolio of techniques identified was shown to enhance investigations of learning curve effects. (ABSTRACT TRUNCATED)
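As a concrete illustration of what a basic learning-curve analysis involves, the following minimal sketch fits a power-law curve to invented per-case operative times for a single operator; the data, model and variable names are assumptions made for illustration and are not taken from the report.

import numpy as np

# Hypothetical data: operative time (minutes) for one surgeon's first 20 cases
# of a new procedure. Values are invented purely for illustration.
case_number = np.arange(1, 21)
operative_time = np.array([182, 175, 160, 158, 150, 149, 140, 138, 135, 133,
                           130, 128, 127, 125, 126, 124, 122, 123, 121, 120])

# Classic power-law learning curve: time = a * case_number**b, with b < 0.
# Fitted here by ordinary least squares on the log-log scale.
b, log_a = np.polyfit(np.log(case_number), np.log(operative_time), deg=1)
a = np.exp(log_a)

print(f"Fitted curve: time ~= {a:.1f} * case_number^{b:.3f}")
# A common summary is the proportional change in time for each doubling
# of cumulative experience, 2**b.
print(f"Each doubling of experience multiplies operative time by {2**b:.2f}")

More elaborate approaches described in the learning-curve literature, such as hierarchical models that allow each operator an individual curve or CUSUM-type monitoring charts, build on the same basic idea of relating an outcome to case order.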
-
Health Technol Assess · Jan 2001
Meta-Analysis · Eliciting public preferences for healthcare: a systematic review of techniques.
Limited resources coupled with unlimited demand for healthcare mean that decisions have to be made regarding the allocation of scarce resources across competing interventions. Policy documents have advocated the importance of public views as one criterion in such decisions. In principle, the elicitation of public values represents a considerable step forward. However, for the exercise to be worthwhile, the information obtained must be useful and scientifically defensible, and decision-makers must be able and willing to use it.

⋯ The methods identified were classified as quantitative or qualitative.

RESULTS - QUANTITATIVE TECHNIQUES: Quantitative techniques, classified as ranking, rating or choice-based approaches, were evaluated against eight criteria: validity; reproducibility; internal consistency; acceptability to respondents; cost (financial and administrative); theoretical basis; whether the technique offered a constrained choice; and whether it provided a strength-of-preference measure. Simple ranking exercises have proved popular, but their results are of limited use. The qualitative discriminant process has not been used to date in healthcare, but may be useful. Conjoint analysis ranking exercises did well against the above criteria. A number of rating scales were identified. The visual analogue scale has proved popular within the quality-adjusted life-year paradigm, but lacks constrained choice and may not measure strength of preference. However, conjoint analysis rating scales performed well. Methods identified for eliciting attitudes include Likert scales, the semantic differential technique and the Guttman scale. These methods provide useful information, but do not consider strength of preference or the importance of different components within a total score. Satisfaction surveys have been used frequently to elicit public opinion. Researchers using them should ensure that they construct sensitive instruments, or else use generic instruments whose validity has already been established. Service quality (SERVQUAL) appears to be a potentially useful technique and its application should be researched. Three choice-based techniques with limited application in healthcare are the measure of value, the analytical hierarchical process and the allocation-of-points technique; those more widely used, and which did well against the predefined criteria, include the standard gamble, time trade-off, discrete choice conjoint analysis and willingness to pay. Little methodological work is currently available on the person trade-off.

RESULTS - QUALITATIVE TECHNIQUES: Qualitative techniques were classified as either individual or group-based approaches. Individual approaches included one-to-one interviews, dyadic interviews, case study analyses, the Delphi technique and complaints procedures. Group-based methods included focus groups, concept mapping, citizens' juries, consensus panels, public meetings and nominal group techniques. Six assessment criteria were identified: validity; reliability; generalisability; objectivity; acceptability to respondents; and cost. Whilst all the methods have distinct strengths and weaknesses, there is considerable ambiguity in the literature. Whether to use individual or group methods depends on the specific topic being discussed and the people being asked, but in either case it is crucial that the interviewer/moderator remains as objective as possible. The most widely used of these methods were one-to-one interviews and focus groups. (ABSTRACT TRUNCATED)
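To illustrate how the choice-based techniques that scored well above yield preference information, the following sketch fits a simple binary logit to hypothetical discrete-choice (conjoint) data and converts the estimated attribute weights into a willingness-to-pay figure. The attributes (waiting time and cost), the simulated responses and all parameter values are assumptions made for illustration only; they are not taken from the review.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical discrete-choice experiment: each respondent chooses between
# two care options, A and B, which differ in waiting time (weeks) and cost (£).
n = 500
d_wait = rng.integers(-8, 9, size=n).astype(float)      # wait_A - wait_B
d_cost = rng.integers(-100, 101, size=n).astype(float)  # cost_A - cost_B
X = np.column_stack([d_wait, d_cost])

# 'True' preference weights, used only to generate the illustrative choices.
true_beta = np.array([-0.25, -0.02])
p_a = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.random(n) < p_a).astype(float)                  # 1 = chose option A

def neg_log_lik(beta):
    # Negative log-likelihood of a binary logit on attribute differences,
    # written with logaddexp for numerical stability.
    xb = X @ beta
    return np.sum((1 - y) * xb + np.logaddexp(0.0, -xb))

fit = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
beta_wait, beta_cost = fit.x
print("Estimated weights (wait, cost):", fit.x.round(3))

# Marginal willingness to pay to avoid one extra week of waiting: the rate
# at which respondents trade cost against waiting time.
print(f"Implied willingness to pay per week of waiting avoided: ~£{beta_wait / beta_cost:.1f}")

The ratio of two attribute weights is one way such methods provide the strength-of-preference measure referred to in the criteria above.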
-
Health Technol Assess · Jan 2001
Review · Subgroup analyses in randomised controlled trials: quantifying the risks of false-positives and false-negatives.
Subgroup analyses are common in randomised controlled trials (RCTs). There are many easily accessible guidelines on the selection and analysis of subgroups, but the key messages do not seem to be universally accepted, and inappropriate analyses continue to appear in the literature. This has potentially serious implications, because erroneous identification of differential subgroup effects may lead to inappropriate provision or withholding of treatment.

⋯ While it is generally recognised that subgroup analyses can produce spurious results, the extent of the problem is almost certainly under-estimated. This is particularly true when subgroup-specific analyses are used. In addition, the increase in sample size required to identify differential subgroup effects may be substantial, and the commonly used 'rule of four' may not always be sufficient, especially when interactions are relatively subtle, as is often the case.

CONCLUSIONS - RECOMMENDATIONS FOR SUBGROUP ANALYSES AND THEIR INTERPRETATION: (1) Subgroup analyses should, as far as possible, be restricted to those proposed before data collection. Any subgroups chosen after this time should be clearly identified. (2) Trials should ideally be powered with subgroup analyses in mind. However, for modest interactions, this may not be feasible. (3) Subgroup-specific analyses are particularly unreliable and are affected by many factors. Subgroup analyses should always be based on formal tests of interaction, although even these should be interpreted with caution. (4) The results from any subgroup analysis should not be over-interpreted. Unless there is strong supporting evidence, they are best viewed as a hypothesis-generating exercise. In particular, one should be wary of evidence suggesting that treatment is effective in one subgroup only. (5) Any apparent lack of differential effect should be regarded with caution unless the study was specifically powered with interactions in mind.

CONCLUSIONS - RECOMMENDATIONS FOR RESEARCH: (1) The implications of considering confidence intervals rather than p-values could be explored. (2) The same approach as in this study could be applied to contexts other than RCTs, such as observational studies and meta-analyses. (3) The scenarios used in this study could be examined more comprehensively using other statistical methods, incorporating clustering effects, considering other types of outcome variable and using other approaches such as bootstrapping or Bayesian methods.
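The scale of the multiplicity problem described above is straightforward to demonstrate by simulation. The sketch below is a minimal illustration with assumed parameters (a normally distributed outcome, no true treatment effect anywhere, and four pre-specified subgroups): it estimates how often at least one subgroup-specific test comes out 'significant' at the 5% level and contrasts that with a formal test of treatment-subgroup interaction (here Cochran's Q across the subgroup estimates, one of several possible interaction tests). It is not the simulation method used in the report.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_sims = 2000
n_per_arm = 50          # patients per arm within each subgroup
n_subgroups = 4
alpha = 0.05

any_subgroup_sig = 0
interaction_sig = 0

for _ in range(n_sims):
    # Null scenario: the treatment has no effect in any subgroup.
    treat = rng.normal(0.0, 1.0, size=(n_subgroups, n_per_arm))
    control = rng.normal(0.0, 1.0, size=(n_subgroups, n_per_arm))

    # Subgroup-specific effect estimates and standard errors.
    diff = treat.mean(axis=1) - control.mean(axis=1)
    se = np.sqrt(treat.var(axis=1, ddof=1) / n_per_arm
                 + control.var(axis=1, ddof=1) / n_per_arm)

    # (a) Separate test within each subgroup (the unreliable approach).
    p_subgroup = 2 * stats.norm.sf(np.abs(diff / se))
    if np.any(p_subgroup < alpha):
        any_subgroup_sig += 1

    # (b) Formal test of interaction: Cochran's Q for heterogeneity of the
    # subgroup effects, chi-squared with (n_subgroups - 1) df under the null.
    w = 1.0 / se**2
    pooled = np.sum(w * diff) / np.sum(w)
    q = np.sum(w * (diff - pooled) ** 2)
    if stats.chi2.sf(q, df=n_subgroups - 1) < alpha:
        interaction_sig += 1

print("P(any subgroup-specific test 'significant'):", any_subgroup_sig / n_sims)
print("P(interaction test 'significant'):          ", interaction_sig / n_sims)

Under these assumptions, roughly one trial in five (about 1 - 0.95^4) flags an apparently differential effect in at least one subgroup even though none exists, whereas the interaction test holds its nominal 5% error rate; this is the contrast on which the recommendations above turn.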
-
Surgical adverse events contribute significantly to postoperative morbidity, yet the measurement and monitoring of such events is often imprecise and of uncertain validity. Given the trend towards shorter hospital stays and the increasing use of innovative surgical techniques, particularly minimally invasive and endoscopic procedures, accurate measurement and monitoring of adverse events is crucial.

⋯ The use of standardised, valid and reliable definitions is fundamental to the accurate measurement and monitoring of surgical adverse events. This review found inconsistency in the quality of reporting of postoperative adverse events, limiting accurate comparison of rates over time and between institutions. The duration of follow-up for individual events will vary according to their natural history and epidemiology. Although risk-adjusted aggregated rates can act as screening or warning systems for adverse events, attribution of whether events are avoidable or preventable will invariably require further investigation at the level of the individual, unit or department.

CONCLUSIONS - RECOMMENDATIONS FOR RESEARCH: (1) A single, standard definition of surgical wound infection is needed so that comparisons over time and between departments and institutions are valid, accurate and useful. Surgeons and other healthcare professionals should consider adopting the 1992 Centers for Disease Control (CDC) definition for superficial incisional, deep incisional and organ/space surgical site infection for hospital monitoring programmes and surgical audits. Further methodological research is needed into the performance of the CDC definition in the UK setting. (2) The reliability of self-diagnosis of surgical wound infection by patients needs to be formally assessed. (3) The reliability of case ascertainment by infection control staff also needs to be formally assessed. (4) Work is needed to create and agree a standard, valid and reliable definition of anastomotic leak that is acceptable to surgeons. (5) A systematic review is needed of the different diagnostic tests for DVT. (6) The following variables should be considered in any future DVT review: anatomical region (lower limb, upper limb, pelvis); patient presentation (symptomatic, asymptomatic); outcome of the diagnostic test (successfully completed, inconclusive, technically inadequate, negative); length of follow-up; cost of the test; whether or not serial screening was conducted; and recording of laboratory cut-off values for fibrinogen equivalent units. (7) A critical review is needed of the surgical risk scoring used in monitoring systems. (8) In the absence of automated linkage, the benefits and costs of monitoring in primary care need to be explored. (9) The growing potential for automated linkage of data from different sources (including primary care, the private sector and death registers) needs to be explored as a means of improving the ascertainment of surgical complications, including death. This linkage needs to be within the terms of data protection, privacy and human rights legislation. (10) A review is needed of the extent of use and the efficiency of routine hospital data versus special collections or voluntary reporting.
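As a small illustration of the point above about risk-adjusted aggregated rates acting as screening or warning systems, the sketch below builds a cumulative expected-minus-observed (VLAD-style) chart from invented predicted risks and outcomes; the data, the change-point and the risk model are assumptions made for illustration and are not drawn from the review.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monitoring data for 200 consecutive operations: each patient
# has a pre-operative predicted risk of an adverse event (from some risk
# score), and we observe whether the event actually occurred.
n = 200
predicted_risk = rng.uniform(0.02, 0.20, size=n)

# Illustrative scenario: outcomes match prediction for the first 120 cases,
# after which the true event rate doubles (e.g. a process problem).
true_risk = predicted_risk.copy()
true_risk[120:] *= 2.0
event = (rng.random(n) < true_risk).astype(float)

# Risk-adjusted cumulative expected-minus-observed chart (VLAD-style):
# a sustained downward drift suggests more adverse events than the case
# mix predicts and would trigger further investigation.
vlad = np.cumsum(predicted_risk - event)

print("Net events 'in credit' after 120 cases:", round(vlad[119], 1))
print("Net position after all 200 cases:      ", round(vlad[-1], 1))

Such a chart only screens for a possible problem; as the abstract notes, deciding whether any excess events were avoidable requires investigation at the level of the individual, unit or department.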
-
Clinical guidelines, defined as 'systematically developed statements to assist both practitioner and patient decisions in specific circumstances', have become an increasingly familiar part of clinical care. Guidelines are viewed as useful tools for making care more consistent and efficient and for closing the gap between what clinicians do and what scientific evidence supports. Interest in clinical guidelines is international and has its origin in issues faced by most healthcare systems: rising healthcare costs; variations in service delivery, with the presumption that at least some of this variation stems from inappropriate care; and the intrinsic desire of healthcare professionals to offer, and patients to receive, the best care possible. Within the UK, there is ongoing interest in the development of guidelines and a fast-developing clinical-effectiveness agenda within which guidelines figure prominently. Over the last decade, the methods of developing guidelines have steadily improved, moving from solely consensus-based methods to methods that take explicit account of relevant evidence. However, UK guidelines have tended to focus on issues of effectiveness and have not explicitly considered broader issues, particularly cost. This report describes the methods developed to handle benefit, harm and cost concepts in clinical guidelines. It reports a series of case studies, each describing the development of a clinical guideline and each illustrating different issues in incorporating these types of evidence.

⋯ The focus of this project was to explore methods of incorporating cost issues within clinical guidelines. However, the process of reviewing evidence in guideline development groups is becoming increasingly sophisticated, not only in its consideration of cost but also in review techniques and group process. At the outset of the project it was unclear how narrowly or broadly the concept of 'cost' could be considered. (ABSTRACT TRUNCATED)