Health Technol Assess · Feb 2006
Comparison of conference abstracts and presentations with full-text articles in the health technology assessments of rapidly evolving technologies.
- Y Dundar, S Dodd, R Dickson, T Walley, A Haycox, and P R Williamson.
- Liverpool Reviews and Implementation Group, University of Liverpool, UK.
- Health Technol Assess. 2006 Feb 1;10(5):iii-iv, ix-145.
Objectives
To assess the extent to which data from conference abstracts and presentations are used in health technology assessments (HTAs) provided as part of the National Institute for Health and Clinical Excellence (NICE) appraisal process. Also to assess the methodological quality of trials reported in conference abstracts and presentations, the consistency of reporting of major outcomes between these sources and subsequent full-length publications, the effect of including or excluding data from these sources on the meta-analysis pooled effect estimates, and the timeliness of availability of data from these sources and from full articles in relation to the development of technology assessment reviews (TARs).

Data sources
A survey of seven TAR groups; an audit of all NICE TARs published between January 2000 and October 2004; and case studies of selected TARs.

Review methods
Results of the survey and audit were presented as a descriptive summary and in tabular format. Sensitivity analyses compared the effect of including abstract/presentation data on the meta-analysis pooled effect estimates, by pooling first the data from both abstracts/presentations and full papers, and then the data from full publications only, as included in the original TAR. These analyses were then compared with meta-analyses of data from trials that had subsequently been published in full.

Results
All seven TAR groups completed and returned the survey. Five of the seven groups reported a general policy of searching for and including studies available as conference abstracts/presentations. Five groups responded that, where they included data from these sources, they would assess the methodological quality of the studies using the same tools as for full publications, and would manage the data in the same way as fully published reports.
All groups reported that if relevant outcome data were reported in both an abstract/presentation and a full publication, they would consider only the data in the full publication. Conversely, if data were available only in a conference abstract/presentation, all but two groups reported that they would extract and use the data from the abstract/presentation. In total, 63 HTA reports for NICE were identified. Twenty of the 63 TARs (32%) made explicit statements about the inclusion and assessment of data from abstracts/presentations. Thirty-eight (60%) identified at least one randomised controlled trial (RCT) available as a conference abstract or presentation; of these, 26 (68%) included such trials. Twenty of these 26 TARs (77%) assessed the methodological quality of trials available in abstract/presentation form. In 16 TARs, full reports were used for quality assessment where both abstracts/presentations and subsequent full publications were available. Twenty-three of the 63 TARs (37%) carried out a quantitative analysis of results. Of these, ten (43%) included trials available as abstracts/presentations in the review; however, only six of the ten (60%) included data from abstracts/presentations in the analysis of results. Thirteen TARs evaluated rapidly evolving technologies, and only three of these identified and included trial data from conference abstracts/presentations and carried out a quantitative analysis using abstract/presentation data. These three TARs were used as case studies. In all three case studies the overall quality of reporting in abstracts/presentations was generally poor: none of the abstracts or presentations described the method of randomisation or allocation concealment.
Overall, blinding was not mentioned in 66% (25/38) of the abstracts and 26% (7/27) of the presentations included in the case studies, and only one presentation (4%) explicitly stated use of intention-to-treat analysis. Results from one case study demonstrated discrepancies in the data made available in abstracts and online conference presentations: discrepancies were evident not only between these sources, but also between conference abstracts/presentations and the subsequently published full-length articles. Sensitivity analyses based on one case study indicated a change in the significance of effect for two outcome measures when only the full papers published to date were included.

Conclusions
Policy and practice vary across TAR groups regarding searching for and including studies available as conference abstracts/presentations, as does the level of detail reported in TARs about how abstracts/presentations were used. TAR teams should therefore be encouraged to state explicitly their search strategies for identifying conference abstracts and presentations, their methods for assessing these for inclusion and, where appropriate, how the data were used and their effect on the results. Comprehensive searching for trials available as conference abstracts/presentations is time-consuming and may be of questionable value; however, there may be a case for searching for and including abstract/presentation data where, for example, other sources of data are limited. If conference abstracts/presentations are to be included, TAR teams need to allocate additional time for searching and for managing data from these sources. Incomplete reporting in conference abstracts and presentations limits reviewers' ability to assess confidently the methodological quality of trials.
Where conference abstracts and presentations are considered for inclusion in a review, TAR teams should increase their efforts to obtain further study details by contacting trialists. Where abstract/presentation data are included, reviewers should discuss the effect of including data from these sources; any data discrepancies identified across sources should be highlighted and their impact discussed in the review. There is also a need to carry out, for example, sensitivity analyses with and without abstract/presentation data in the analysis. Research is needed into the development of search strategies specific to the identification of studies available as conference abstracts and presentations in TARs, including guidance on identifying relevant electronic databases and appropriate conference sites for particular clinical areas. As only a limited number of case studies were included in this report, the analyses should be repeated as more TARs accrue, or extended to include the work of other international HTA groups.
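The sensitivity analysis recommended above (pooling trial results with and without abstract/presentation data and checking whether the significance of the pooled effect changes) can be sketched with a simple inverse-variance fixed-effect model. The trial data below are hypothetical, chosen purely to illustrate the mechanics; the report's actual analyses used the trial data from the case-study TARs.

```python
import math

def pool_fixed_effect(trials):
    """Inverse-variance fixed-effect pooling of (log effect estimate, SE) pairs.

    Returns the pooled log effect, its standard error and a 95% CI.
    """
    weights = [1.0 / se ** 2 for _, se in trials]
    pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical log odds ratios (negative favours treatment).
full_papers = [(-0.40, 0.18), (-0.25, 0.22)]   # trials published in full
abstract_only = [(0.10, 0.25)]                 # trial available only as an abstract

for label, data in [("full papers only", full_papers),
                    ("full papers + abstracts", full_papers + abstract_only)]:
    est, se, (lo, hi) = pool_fixed_effect(data)
    significant = lo > 0 or hi < 0  # 95% CI excludes the null (0 on the log scale)
    print(f"{label}: pooled log OR = {est:.3f} "
          f"(95% CI {lo:.3f} to {hi:.3f}), significant = {significant}")
```

With these made-up numbers the pooled effect is statistically significant when only full papers are included but loses significance once the abstract-only trial is added, which is the kind of change in significance the case study describes. A random-effects model or leave-one-out analysis would be natural extensions of the same idea.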