Hepatology: official journal of the American Association for the Study of Liver Diseases
-
Randomized Controlled Trial Comparative Study
Withholding parenteral nutrition during critical illness increases plasma bilirubin but lowers the incidence of biliary sludge.
Cholestatic liver dysfunction (CLD) and biliary sludge often occur during critical illness and are allegedly aggravated by parenteral nutrition (PN). Delaying initiation of PN beyond day 7 in the intensive care unit (ICU) (late PN) accelerated recovery as compared with early initiation of PN (early PN). However, the impact of nutritional strategy on biliary sludge and CLD has not been fully characterized. This was a preplanned subanalysis of a large randomized controlled trial of early PN versus late PN (n = 4,640). In all patients, plasma bilirubin (daily) and liver enzymes (alanine aminotransferase [ALT], aspartate aminotransferase [AST], gamma-glutamyl transpeptidase [GGT], alkaline phosphatase [ALP]; twice weekly; n = 3,216) were quantified. In a random, predefined subset of patients, plasma bile acids (BAs) were also quantified at baseline and on days 3, 5, and the last ICU day (n = 280). Biliary sludge was evaluated ultrasonographically on ICU day 5 (n = 776). From day 1 after randomization until the end of the 7-day intervention window, bilirubin was higher in the late PN than in the early PN group (P < 0.001). In the late PN group, bilirubin fell as soon as PN was started on day 8, and the two groups became comparable. Maximum levels of GGT, ALP, and ALT were lower in the late PN group (P < 0.01). Glycine/taurine-conjugated primary BAs increased over time in the ICU (P < 0.01), similarly in the two groups. Fewer patients in the late PN than in the early PN group developed biliary sludge by day 5 (37% versus 45%; P = 0.04). ⋯ Tolerating a substantial caloric deficit by withholding PN until day 8 of critical illness increased plasma bilirubin but reduced the occurrence of biliary sludge and lowered GGT, ALP, and ALT. These results suggest that hyperbilirubinemia during critical illness does not necessarily reflect cholestasis and instead may be an adaptive response that is suppressed by early PN.
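The day-5 sludge comparison above (37% versus 45% of scanned patients; P = 0.04) is a simple two-proportion contrast. The abstract reports only the total ultrasound cohort (n = 776), not the per-group counts, so the sketch below assumes an equal split between groups purely for illustration; it shows how such a comparison could be checked with a chi-square test, not how the trial's statistics were actually computed.

    # Hypothetical re-check of the day-5 biliary sludge comparison (37% vs. 45%).
    # Per-group counts are not given in the abstract; the equal split of the 776
    # scanned patients used here is an assumption for illustration only.
    from scipy.stats import chi2_contingency

    n_late, n_early = 388, 388              # assumed equal split of the 776 patients
    sludge_late = round(0.37 * n_late)      # ~144 patients with sludge, late PN
    sludge_early = round(0.45 * n_early)    # ~175 patients with sludge, early PN

    table = [[sludge_late, n_late - sludge_late],
             [sludge_early, n_early - sludge_early]]
    chi2, p, _, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, P = {p:.3f}")
    # With these assumed counts P comes out near 0.03, close to but not identical
    # to the published P = 0.04, which reflects the actual (unequal) group sizes.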
-
Comparative Study
Cost analysis of sofosbuvir/ribavirin versus sofosbuvir/simeprevir for genotype 1 hepatitis C virus in interferon-ineligible/intolerant individuals.
Treatment guidance for chronic hepatitis C (CHC) released by the American Association for the Study of Liver Diseases (AASLD) and the Infectious Diseases Society of America (IDSA) offers two options for interferon (IFN)-ineligible/intolerant individuals with genotype 1 infection: sofosbuvir/ribavirin (SOF/RBV) for 24 weeks or sofosbuvir/simeprevir (SOF/SMV) for 12 weeks. A 24-week course of SOF/RBV costs approximately US$169,000, with sustained virologic response (SVR) rates ranging from 52% to 84%; 12 weeks of SOF/SMV costs approximately $150,000, with SVR rates between 89% and 100%. Because SOF/SMV is currently used off-label, debate exists among physicians and payers about whether it should be prescribed and covered. This article presents a cost-effectiveness analysis of these two treatment regimens accounting for the costs of drugs, treatment-related medical care, retreatment for individuals who do not achieve SVR, and the natural history of continued hepatitis C virus (HCV) infection after failed retreatment. The analysis uses a Markov model with a lifetime horizon and a societal perspective. In the base-case scenario, SOF/SMV dominated SOF/RBV in a modeled 50-year-old cohort of treatment-naïve and -experienced subjects, excluding those who had failed earlier therapy with telaprevir or boceprevir. SOF/SMV yielded lower costs and more quality-adjusted life years (QALYs) for the average subject than SOF/RBV ($165,336 and 14.69 QALYs vs. $243,586 and 14.45 QALYs, respectively). In the base-case cost analysis, the SOF/SMV strategy saved $91,590 per SVR compared to SOF/RBV. Under all one-way sensitivity scenarios, SOF/SMV remained dominant and resulted in cost savings. ⋯ These results suggest that a 12-week course of SOF/SMV is a more cost-effective treatment for genotype 1 CHC than 24 weeks of SOF/RBV among IFN-ineligible/intolerant individuals, supporting the AASLD/IDSA guidance and offering implications for clinical and regulatory decision making as well as pharmaceutical pricing.
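The "dominance" reported above is the standard cost-effectiveness notion: one strategy costs less and yields more QALYs than the other, so no incremental cost-effectiveness ratio (ICER) needs to be traded off. The sketch below reworks only the base-case point estimates quoted in the abstract; it does not reproduce the underlying lifetime Markov model, and the helper function name is illustrative.

    # Minimal dominance check using the base-case averages reported in the abstract.
    def compare_strategies(cost_a, qaly_a, cost_b, qaly_b):
        """Incremental cost and QALYs of strategy A vs. B, plus dominance status."""
        d_cost = cost_a - cost_b
        d_qaly = qaly_a - qaly_b
        dominant = d_cost < 0 and d_qaly > 0          # cheaper AND more effective
        icer = None if dominant or d_qaly == 0 else d_cost / d_qaly
        return d_cost, d_qaly, dominant, icer

    d_cost, d_qaly, dominant, icer = compare_strategies(
        cost_a=165_336, qaly_a=14.69,   # SOF/SMV, 12 weeks (per average subject)
        cost_b=243_586, qaly_b=14.45,   # SOF/RBV, 24 weeks (per average subject)
    )
    print(f"Cost difference: {d_cost:+,.0f} USD; QALY difference: {d_qaly:+.2f}; dominant: {dominant}")
    # -> SOF/SMV costs about $78,250 less and gains about 0.24 QALYs, so it dominates
    #    SOF/RBV and no ICER is reported.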
-
Although a higher prevalence of raised liver enzymes and altered echotexture on ultrasound has been reported in patients with type 1 diabetes mellitus (T1DM), the histological spectrum and natural history of chronic liver disease (CLD) in T1DM are unknown. We investigated the prevalence and outcome of histologically proven CLD in a longitudinal cohort of patients with T1DM. We identified patients who had undergone liver biopsy from a computerized database (DIAMOND; Hicom Technology, Brookwood, UK) containing longitudinal data for over 95% of type 1 diabetes patients from an overall catchment population of 700,000 people. Gender-matched patients with oral hypoglycemic-treated (T2OH) or insulin-treated (T2IN) type 2 diabetes who had undergone liver biopsy formed two comparison cohorts. We collated clinical and histological data, as well as long-term outcomes, for all three groups and compared T1DM cirrhosis incidence with UK general population data. Of 4,644 patients with T1DM, 57 (1.2%) underwent liver biopsy. Of these, 53.1% had steatosis, 20.4% had nonalcoholic steatohepatitis, and 73.5% had fibrosis on index liver biopsy. Cirrhosis was diagnosed in 14 patients (24.6%) during follow-up. Patients with T1DM aged under 55 years had an odds ratio of 1.875 (95% confidence interval: 0.936-3.757) for incident cirrhosis compared with the general population. Longitudinal liver-related outcomes were similar between the T1DM cohort and the respective type 2 diabetes cohorts; when adjusted for important confounders, diabetic cohort type did not predict an altered risk of incident cirrhosis or portal hypertension. ⋯ Type 1 diabetes is associated with a previously unrecognized burden of CLD and its complications.
-
Infections worsen survival in cirrhosis; however, simple predictors of survival in infection-related acute-on-chronic liver failure (I-ACLF) derived from multicenter studies are needed to improve prognostication and resource allocation. Using the North American Consortium for the Study of End-Stage Liver Disease (NACSELD) database, data from 18 centers were collected for survival analysis of prospectively enrolled cirrhosis patients hospitalized with an infection. We defined organ failures as 1) shock, 2) grade III/IV hepatic encephalopathy (HE), 3) need for dialysis, and 4) need for mechanical ventilation. Determinants of survival with these organ failures were analyzed. In all, 507 patients were included (55 years, 52% hepatitis C virus [HCV], 15.8% nosocomial infection, 96% Child score ≥ 7), and 30-day evaluations were available in 453 patients. Urinary tract infection (UTI; 28.5%) and spontaneous bacterial peritonitis (SBP; 22.5%) were the most prevalent infections. During hospitalization, 55.7% developed HE, 17.6% developed shock, 15.1% required renal replacement therapy, and 15.8% needed ventilation; 23% died within 30 days and 21.6% developed second infections. Admitted patients developed none (38.4%), one (37.3%), two (10.4%), three (10%), or four (4%) organ failures. Thirty-day survival worsened with an increasing number of extrahepatic organ failures: none (92%), one (72.6%), two (51.3%), three (36%), and all four (23%). I-ACLF was defined as ≥ 2 organ failures, given the significant change in survival probability associated with this cutoff. Baseline independent predictors of the development of ACLF were nosocomial infections, Model for End-Stage Liver Disease (MELD) score, low mean arterial pressure (MAP), and non-SBP infections. Independent predictors of poor 30-day survival were I-ACLF, second infections, and admission values of high MELD, low MAP, high white blood cell count, and low albumin. ⋯ In multicenter data from hospitalized decompensated cirrhosis patients with infection, I-ACLF, defined by the presence of two or more organ failures using simple definitions, is predictive of poor survival.
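Because I-ACLF is defined here by a simple count of organ failures, the classification rule can be stated compactly. The sketch below is a minimal illustration of that rule using the four organ failures named in the abstract; the field names and patient record format are assumptions, not the NACSELD data structure.

    # Minimal sketch of the I-ACLF rule: count the four extrahepatic organ failures
    # recorded for a patient and label I-ACLF when two or more are present.
    ORGAN_FAILURES = ("shock", "grade_3_4_he", "need_for_dialysis", "mechanical_ventilation")

    def classify_iaclf(patient: dict) -> tuple[int, bool]:
        """Return (number of organ failures, I-ACLF present) for one hospitalized patient."""
        n_failures = sum(bool(patient.get(of, False)) for of in ORGAN_FAILURES)
        return n_failures, n_failures >= 2

    # Example: grade III/IV HE plus need for dialysis, without shock or ventilation
    example = {"shock": False, "grade_3_4_he": True,
               "need_for_dialysis": True, "mechanical_ventilation": False}
    print(classify_iaclf(example))  # -> (2, True): meets the I-ACLF cutoff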
-
Hepatitis B immunization has been documented to prevent fulminant hepatic failure (FHF) and hepatocellular carcinoma (HCC) in historical comparison studies in Taiwan. This study aimed to assess the long-term risks and predictors of various liver diseases associated with incomplete immunization in 3.8 million vaccinees. Profiles of the National Hepatitis B Immunization Registry, National Cancer Registry, and National Death Certification Registry were linked to ascertain newly diagnosed cases of HCC and deaths from FHF and chronic liver diseases (CLDs) from infancy to early adulthood among 3,836,988 newborn vaccinees. Cox proportional hazards models were used to estimate hazard ratios (HRs) for various risk predictors. There were 49 newly developed cases of HCC, 73 deaths from FHF, and 74 deaths from CLDs during 41,854,715 person-years of follow-up. After the launch of the national immunization program, there were striking differences between unvaccinated and vaccinated newborns in HCC incidence (0.293 vs. 0.117 per 100,000 person-years), FHF mortality (0.733 vs. 0.174 per 100,000 person-years), and CLD mortality (2.206 vs. 0.177 per 100,000 person-years). Among vaccinees, incomplete immunization was the most important risk predictor of HCC, FHF, and CLDs, with HRs (95% confidence intervals) of 2.52 (1.25-5.05; P = 0.0094), 4.97 (3.05-8.11; P < 0.0001), and 6.27 (3.62-10.84; P < 0.0001), respectively, after adjustment for maternal hepatitis B serostatus. ⋯ Hepatitis B immunization can significantly reduce the long-term risk of HCC, FHF, and CLDs from infancy to early adulthood. Incomplete immunization with hepatitis B immunoglobulin or vaccines was the most important risk predictor of these liver diseases among vaccinees.
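The incidence and mortality figures quoted for vaccinees are crude rates: event counts divided by accumulated person-years, scaled to 100,000 person-years. As a check, the sketch below reproduces the vaccinated-cohort rates from the event counts and total follow-up reported in the abstract; it does not cover the unvaccinated comparison group, whose person-years are not given here.

    # Crude rates per 100,000 person-years for the vaccinated cohort, from the
    # event counts and total follow-up reported in the abstract.
    PERSON_YEARS = 41_854_715  # follow-up reported for the 3,836,988 newborn vaccinees

    def rate_per_100k(events: int, person_years: float) -> float:
        """Crude rate per 100,000 person-years."""
        return events / person_years * 100_000

    for label, events in [("HCC", 49), ("FHF deaths", 73), ("CLD deaths", 74)]:
        print(f"{label}: {rate_per_100k(events, PERSON_YEARS):.3f} per 100,000 person-years")
    # -> HCC 0.117, FHF 0.174, CLD 0.177, matching the vaccinated-newborn figures above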