Bio

Honors & Awards


  • Saenger Distinguished Service Award, Society for Medical Decision Making (2007)
  • Teaching Award, Stanford Department of Medicine (2006)
  • Most Outstanding Abstracts Award, Academy Health Annual Research Meeting (2006)
  • Outstanding Student in Health Services Management Award, Northwestern University (1992)
  • Austin Scholar, Northwestern University (1990)
  • Arthur Anderson Award, Northwestern University (1990)

Professional Education


  • MM/MBA, Northwestern University (Kellogg), Management, Health Services (1992)
  • BS, Stanford University, Chemical Engineering (1984)

Publications

Journal Articles


  • The Association of Nurse-to-Patient Ratio with Mortality and Preventable Complications Following Aortic Valve Replacement JOURNAL OF CARDIAC SURGERY Arkin, N., Lee, P. H., McDonald, K., Hernandez-Boussard, T. 2014; 29 (2): 141-148

    Abstract

    To examine hospital resources associated with patient outcomes for aortic valve replacement (AVR), including inpatient adverse events and mortality. We used the Nationwide Inpatient Sample to identify AVR procedures from 1998 to 2010 and the American Hospital Association Annual Survey to augment hospital characteristics. Primary outcomes included mortality and the development of adverse events, identified using standardized patient safety indicators (PSI). Patient and hospital characteristics associated with PSI development were evaluated using univariate and multivariate analyses. An estimated 410,157 AVRs were performed at 5009 hospitals in the US between 1998 and 2010. The number of procedures grew annually by 4.72% (p = 0.0003) in high volume hospitals, 4.48% in medium volume hospitals (p < 0.0001), and 2.03% in low volume hospitals (p = 0.154). Mortality was highest in low volume hospitals (4.70%) and decreased to 4.14% and 3.73% in medium and high volume hospitals, respectively (p = 0.0002). Rates of PSIs did not vary significantly across volume terciles (p = 0.254). Multivariate logistic regression analysis showed low volume hospitals had increased risk of mortality as compared with high volume hospitals (odds ratio [OR]: 1.42; 95% confidence interval [CI]: 1.01 to 2.00), while hospital volume was not associated with adverse events. PSI development was associated with small hospitals as compared with large (OR: 1.63, 95% CI: 1.16 to 2.28) and inversely associated with higher nurse-to-patient ratio (OR: 0.94, 95% CI: 0.90 to 0.99). The volume-outcomes relationship was associated with mortality but not with postoperative complications. We identified structural differences in hospital size, nurse-to-patient ratio, and nursing skill level indicative of high-quality outcomes.

    View details for DOI 10.1111/jocs.12284

    View details for PubMedID 24417274
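
    The risk-adjusted odds ratios reported in the abstract above come from multivariate logistic regression. The sketch below shows, under stated assumptions, how such a model can be fit; the column names (died, volume_tercile, age, female) and the synthetic data are illustrative inventions, not the study's dataset or exact specification.

```python
# Hedged sketch: adjusted odds of in-hospital mortality by hospital volume tercile.
# Column names and the synthetic DataFrame are assumptions made for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "died": rng.binomial(1, 0.04, n),                       # in-hospital death (0/1)
    "volume_tercile": rng.choice(["low", "medium", "high"], n),
    "age": rng.normal(72, 10, n),
    "female": rng.binomial(1, 0.4, n),
})

# Logistic regression with high-volume hospitals as the reference category.
model = smf.logit(
    "died ~ C(volume_tercile, Treatment(reference='high')) + age + female",
    data=df,
).fit(disp=0)

# Exponentiated coefficients are adjusted odds ratios (e.g., low vs. high volume).
print(np.exp(model.params))
print(np.exp(model.conf_int()))  # 95% confidence intervals for the ORs
```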

  • Limitations of using same-hospital readmission metrics INTERNATIONAL JOURNAL FOR QUALITY IN HEALTH CARE Davies, S. M., Saynina, O., McDonald, K. M., Baker, L. C. 2013; 25 (6): 633-639

    Abstract

    To quantify the limitations associated with restricting readmission metrics to same-hospital only readmission. Using the 2000-2009 California Office of Statewide Health Planning and Development Patient Discharge Data Nonpublic file, we identified the proportion of 7-, 15- and 30-day readmissions occurring to the same hospital as the initial admission using the All-cause Readmission (ACR) and 3M Corporation Potentially Preventable Readmissions (PPR) metrics. We examined the correlation between performance using same- and different-hospital readmission, the percent of hospitals remaining in the extreme deciles when utilizing different metrics, agreement in identifying outliers, and differences in longitudinal performance. Using logistic regression, we examined the factors associated with admission to the same hospital. 68% of 30-day ACR and 70% of 30-day PPR occurred at the same hospital. Abdominopelvic procedures had higher proportions of same-hospital readmissions (87.4-88.9%), cardiac surgery had lower proportions (72.5-74.9%), and medical DRGs were lower than surgical DRGs (67.1 vs. 71.1%). Correlation and agreement in identifying high- and low-performing hospitals were weak to moderate, except for 7-day metrics where agreement was stronger (r = 0.23-0.80, Kappa = 0.38-0.76). Agreement for within-hospital significant (P < 0.05) longitudinal change was weak (Kappa = 0.05-0.11). Beyond All Patient Refined Diagnosis Related Groups, payer was the most predictive factor, with Medicare and MediCal patients having a higher likelihood of same-hospital readmission (OR 1.62, 1.73). Same-hospital readmission metrics are limited for all tested applications. Caution should be used when conducting research, quality improvement, or comparative applications that do not account for readmissions to other hospitals.

    View details for DOI 10.1093/intqhc/mzt068

    View details for Web of Science ID 000327791600003

    View details for PubMedID 24167061
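
    The core calculation behind a same-hospital readmission metric is simple: for each discharge, find the patient's next admission, check whether it falls within the window (7, 15, or 30 days), and whether it occurred at the same hospital. Below is a minimal sketch of that logic; the column names and toy records are assumptions for illustration, not the OSHPD file layout.

```python
# Hedged sketch: what share of 30-day readmissions occur at the same hospital?
# Column names and toy records are assumptions, not the actual discharge-data schema.
import pandas as pd

records = pd.DataFrame({
    "patient_id":     [1, 1, 2, 2, 3],
    "hospital_id":    ["A", "A", "B", "C", "A"],
    "admit_date":     pd.to_datetime(["2009-01-01", "2009-01-20",
                                      "2009-02-01", "2009-02-10", "2009-03-01"]),
    "discharge_date": pd.to_datetime(["2009-01-05", "2009-01-25",
                                      "2009-02-04", "2009-02-15", "2009-03-03"]),
})

records = records.sort_values(["patient_id", "admit_date"])
grp = records.groupby("patient_id")
records["next_admit"] = grp["admit_date"].shift(-1)        # next stay for the same patient
records["next_hospital"] = grp["hospital_id"].shift(-1)

days_to_readmit = (records["next_admit"] - records["discharge_date"]).dt.days
readmit_30 = days_to_readmit.between(0, 30)                # any-hospital 30-day readmission
same_hosp = readmit_30 & (records["next_hospital"] == records["hospital_id"])

print(f"30-day readmissions: {readmit_30.sum()}, "
      f"same-hospital share: {same_hosp.sum() / readmit_30.sum():.0%}")
```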

  • Implications of Metric Choice for Common Applications of Readmission Metrics HEALTH SERVICES RESEARCH Davies, S., Saynina, O., Schultz, E., McDonald, K. M., Baker, L. C. 2013; 48 (6): 1978-1995

    Abstract

    OBJECTIVE: To quantify the differential impact on hospital performance of three readmission metrics: all-cause readmission (ACR), 3M Potential Preventable Readmission (PPR), and Centers for Medicare and Medicaid 30-day readmission (CMS). DATA SOURCES: 2000-2009 California Office of Statewide Health Planning and Development Patient Discharge Data Nonpublic file. STUDY DESIGN: We calculated 30-day readmission rates using three metrics, for three disease groups: heart failure (HF), acute myocardial infarction (AMI), and pneumonia. Using each metric, we calculated the absolute change and correlation between performance; the percent of hospitals remaining in extreme deciles and level of agreement; and differences in longitudinal performance. PRINCIPAL FINDINGS: Average hospital rates for HF patients and the CMS metric were generally higher than for other conditions and metrics. Correlations between the ACR and CMS metrics were highest (r = 0.67-0.84). Rates calculated using the PPR and either ACR or CMS metrics were moderately correlated (r = 0.50-0.67). Between 47 and 75 percent of hospitals in an extreme decile according to one metric remained when using a different metric. Correlations among metrics were modest when measuring hospital longitudinal change. CONCLUSIONS: Different approaches to computing readmissions can produce different hospital rankings and impact pay-for-performance. Careful consideration should be placed on readmission metric choice for these applications.

    View details for DOI 10.1111/1475-6773.12075

    View details for Web of Science ID 000327392300011

    View details for PubMedID 23742056

  • Determinants of Adverse Events in Vascular Surgery JOURNAL OF THE AMERICAN COLLEGE OF SURGEONS Hernandez-Boussard, T., McDonald, K. M., Morton, J. M., Dalman, R. L., Bech, F. R. 2012; 214 (5): 788-797

    Abstract

    Patient safety is a national priority. Patient Safety Indicators (PSIs) monitor potential adverse events during hospital stays. Surgical specialty PSI benchmarks do not exist and are needed to account for differences in the range of procedures performed, reasons for the procedure, and differences in patient characteristics. A comprehensive profile of adverse events in vascular surgery was created. The Nationwide Inpatient Sample was queried for 8 vascular procedures using ICD-9-CM codes from 2005 to 2009. Factors associated with PSI development were evaluated in univariate and multivariate analyses. A total of 1,412,703 patients underwent a vascular procedure, and a PSI developed in 5.2%. PSIs were more frequent in female, nonwhite patients with public payers (p < 0.01). Patients at mid- and low-volume hospitals had greater odds of developing a PSI (odds ratio [OR] = 1.17; 95% CI, 1.10-1.23 and OR = 1.69; 95% CI, 1.53-1.87). Amputations had the highest PSI risk-adjusted rate, and carotid endarterectomy and endovascular abdominal aortic aneurysm repair had lower risk-adjusted rates (p < 0.0001). PSI risk-adjusted rates increased linearly by severity of patient indication: claudicants (OR = 0.40; 95% CI, 0.35-0.46), rest pain patients (OR = 0.78; 95% CI, 0.69-0.90), ulcer (OR = 1.20; 95% CI, 1.07-1.34), and gangrene patients (OR = 1.85; 95% CI, 1.66-2.06). Rates of patient safety events in vascular surgery were high and varied by procedure, with amputations and open abdominal aortic aneurysm repair having considerably more potential adverse events. PSIs were associated with black race, public payer, and procedure indication. It is important to note the overall higher rates of PSIs occurring in vascular patients and to adjust benchmarks for this surgical specialty appropriately.

    View details for DOI 10.1016/j.jamcollsurg.2012.01.045

    View details for Web of Science ID 000303724200009

    View details for PubMedID 22425449

  • Relationship between Patient Safety and Hospital Surgical Volume HEALTH SERVICES RESEARCH Hernandez-Boussard, T., Downey, J. R., McDonald, K., Morton, J. M. 2012; 47 (2): 756-769

    Abstract

    To examine the relationship between hospital volume and in-hospital adverse events. Patient safety indicators (PSIs) were used to identify hospital-acquired adverse events in the Nationwide Inpatient Sample database for abdominal aortic aneurysm, coronary artery bypass graft, and Roux-en-Y gastric bypass procedures from 2005 to 2008. In this observational study, volume thresholds were defined by mean year-specific terciles. PSI risk-adjusted rates were analyzed by volume tercile for each procedure. Overall, hospital volume was inversely related to preventable adverse events. High-volume hospitals had significantly lower risk-adjusted PSI rates compared to lower volume hospitals (p < .05). These data support the relationship between hospital volume and quality health care delivery in select surgical cases. This study highlights differences between hospital volume and risk-adjusted PSI rates for three common surgical procedures and highlights areas of focus for future studies to identify pathways to reduce hospital-acquired events.

    View details for DOI 10.1111/j.1475-6773.2011.01310.x

    View details for Web of Science ID 000301229300012

    View details for PubMedID 22091561

  • Assessment of a Novel Hybrid Delphi and Nominal Groups Technique to Evaluate Quality Indicators HEALTH SERVICES RESEARCH Davies, S., Romano, P. S., Schmidt, E. M., Schultz, E., Geppert, J. J., McDonald, K. M. 2011; 46 (6): 2005-2018

    Abstract

    To test the implementation of a novel structured panel process in the evaluation of quality indicators. A national panel of 64 clinicians rated the usefulness of indicator applications in 2008-2009. The hybrid panel combined Delphi Group and Nominal Group (NG) techniques to evaluate 81 indicator applications. The Delphi Group and NG rated 56 percent of indicator applications similarly. Group assignment (Delphi versus Nominal) was not significantly associated with mean ratings, but specialty and research interests of panelists, and indicator factors such as denominator level and proposed use, were. Rating distributions narrowed significantly in 20.8 percent of applications between review rounds. The hybrid panel process facilitated information exchange and tightened rating distributions. Future assessments of this method might include a control panel.

    View details for DOI 10.1111/j.1475-6773.2011.01297.x

    View details for Web of Science ID 000297244000017

    View details for PubMedID 21790589

  • Expanding the Uses of AHRQ's Prevention Quality Indicators: Validity From the Clinician Perspective MEDICAL CARE Davies, S., McDonald, K. M., Schmidt, E., Schultz, E., Geppert, J., Romano, P. S. 2011; 49 (8): 679-685

    Abstract

    The Agency for Healthcare Research and Quality's prevention quality indicators (PQIs) are used as a metric of area-level access to quality care. Recently, interest has expanded to using the measures at the level of payer or large physician groups, including public reporting or pay-for-performance programs. However, the validity of these expanded applications is unknown. We conducted a novel panel process to establish face validity of the 12 PQIs at 3 denominator levels: geographic area, payer, and large physician groups; and 3 uses: quality improvement, comparative reporting, and pay for performance. Sixty-four clinician panelists were split into Delphi and Nominal Groups. We aimed to capitalize on the reliability of the Delphi method and information sharing in the Nominal Group method by applying these techniques simultaneously. We examined panelists' perceived usefulness of the indicators for specific uses using median scores and agreement within and between groups. Panelists showed stronger support of the usefulness of chronic disease indicators at the payer and large physician group levels than for acute disease indicators. Panelists fully supported the usefulness of 2 indicators for comparative reporting (asthma, congestive heart failure) and no indicators for pay-for-performance applications. Panelists expressed serious concerns about the usefulness of all new applications of 3 indicators (angina, perforated appendix, dehydration). Panelists rated age, current comorbidities, earlier hospitalization, and socioeconomic status as the most important risk-adjustment factors. Clinicians supported some expanded uses of the PQIs, but generally expressed reservations. Attention to denominator definitions and risk adjustment is essential for expanded use.

    View details for DOI 10.1097/MLR.0b013e3182159e65

    View details for Web of Science ID 000292758500001

    View details for PubMedID 21478780

  • Systematic Review: Benefits and Harms of In-Hospital Use of Recombinant Factor VIIa for Off-Label Indications ANNALS OF INTERNAL MEDICINE Yank, V., Tuohy, C. V., Logan, A. C., Bravata, D. M., Staudenmayer, K., Eisenhut, R., Sundaram, V., McMahon, D., Olkin, I., McDonald, K. M., Owens, D. K., Stafford, R. S. 2011; 154 (8): 529-W190

    Abstract

    Recombinant factor VIIa (rFVIIa), a hemostatic agent approved for hemophilia, is increasingly used for off-label indications. To evaluate the benefits and harms of rFVIIa use for 5 off-label, in-hospital indications: intracranial hemorrhage, cardiac surgery, trauma, liver transplantation, and prostatectomy. Ten databases (including PubMed, EMBASE, and the Cochrane Library) queried from inception through December 2010. Articles published in English were analyzed. Two reviewers independently screened titles and abstracts to identify clinical use of rFVIIa for the selected indications and identified all randomized, controlled trials (RCTs) and observational studies for full-text review. Two reviewers independently assessed study characteristics and rated study quality and indication-wide strength of evidence. 16 RCTs, 26 comparative observational studies, and 22 noncomparative observational studies met inclusion criteria. Identified comparators were limited to placebo (RCTs) or usual care (observational studies). For intracranial hemorrhage, mortality was not improved with rFVIIa use across a range of doses. Arterial thromboembolism was increased with medium-dose rFVIIa use (risk difference [RD], 0.03 [95% CI, 0.01 to 0.06]) and high-dose rFVIIa use (RD, 0.06 [CI, 0.01 to 0.11]). For adult cardiac surgery, there was no mortality difference, but there was an increased risk for thromboembolism (RD, 0.05 [CI, 0.01 to 0.10]) with rFVIIa. For body trauma, there were no differences in mortality or thromboembolism, but there was a reduced risk for the acute respiratory distress syndrome (RD, -0.05 [CI, -0.02 to -0.08]). Mortality was higher in observational studies than in RCTs. The amount and strength of evidence were low for most outcomes and indications. Publication bias could not be excluded. Limited available evidence for 5 off-label indications suggests no mortality reduction with rFVIIa use. For some indications, it increases thromboembolism.

    View details for Web of Science ID 000289622000016

    View details for PubMedID 21502651

  • The Influence of Economic Incentives and Regulatory Factors on the Adoption of Treatment Technologies: A Case Study of Technologies Used to Treat Heart Attacks HEALTH ECONOMICS Bech, M., Christiansen, T., Dunham, K., Lauridsen, J., Lyttkens, C. H., McDonald, K., McGuire, A. 2009; 18 (10): 1114-1132

    Abstract

    The Technological Change in Health Care Research Network collected unique patient-level data on three procedures for treatment of heart attack patients (catheterization, coronary artery bypass grafts and percutaneous transluminal coronary angioplasty) for 17 countries over a 15-year period to examine the impact of economic and institutional factors on technology adoption. Specific institutional factors are shown to be important to the uptake of these technologies. Health-care systems characterized as public contract systems and reimbursement systems have higher adoption rates than public-integrated health-care systems. Central control of funding of investments is negatively associated with adoption rates and the impact is of the same magnitude as the overall health-care system classification. GDP per capita also has a strong role in initial adoption. The impact of income and institutional characteristics on the utilization rates of the three procedures diminishes over time.

    View details for DOI 10.1002/hec.1417

    View details for Web of Science ID 000269942100002

    View details for PubMedID 18972326

  • Systematic Review: Elective Induction of Labor Versus Expectant Management of Pregnancy ANNALS OF INTERNAL MEDICINE Caughey, A. B., Sundaram, V., Kaimal, A. J., Gienger, A., Cheng, Y. W., McDonald, K. M., Shaffer, B. L., Owens, D. K., Bravata, D. M. 2009; 151 (4): 252-W63

    Abstract

    The rates of induction of labor and elective induction of labor are increasing. Whether elective induction of labor improves outcomes or simply leads to greater complications and health care costs is commonly debated in the literature. To compare the benefits and harms of elective induction of labor and expectant management of pregnancy. MEDLINE (through February 2009), Web of Science, CINAHL, Cochrane Central Register of Controlled Trials (through March 2009), bibliographies of included studies, and previous systematic reviews. Experimental and observational studies of elective induction of labor reported in English. Two authors abstracted study design; patient characteristics; quality criteria; and outcomes, including cesarean delivery and maternal and neonatal morbidity. Of 6117 potentially relevant articles, 36 met inclusion criteria: 11 randomized, controlled trials (RCTs) and 25 observational studies. Overall, expectant management of pregnancy was associated with a higher odds ratio (OR) of cesarean delivery than was elective induction of labor (OR, 1.22 [95% CI, 1.07 to 1.39]; absolute risk difference, 1.9 percentage points [CI, 0.2 to 3.7 percentage points]) in 9 RCTs. Women at or beyond 41 completed weeks of gestation who were managed expectantly had a higher risk for cesarean delivery (OR, 1.21 [CI, 1.01 to 1.46]), but this difference was not statistically significant in women at less than 41 completed weeks of gestation (OR, 1.73 [CI, 0.67 to 4.5]). Women who were expectantly managed were more likely to have meconium-stained amniotic fluid than those who were electively induced (OR, 2.04 [CI, 1.34 to 3.09]). Limitations: There were no recent RCTs of elective induction of labor at less than 41 weeks of gestation. The 2 studies conducted at less than 41 weeks of gestation were of poor quality and were not generalizable to current practice. RCTs suggest that elective induction of labor at 41 weeks of gestation and beyond is associated with a decreased risk for cesarean delivery and meconium-stained amniotic fluid. There are concerns about the translation of these findings into actual practice; thus, future studies should examine elective induction of labor in settings where most obstetric care is provided.

    View details for Web of Science ID 000269038900005

    View details for PubMedID 19687492

  • Approach to Improving Quality: the Role of Quality Measurement and a Case Study of the Agency for Healthcare Research and Quality Pediatric Quality Indicators PEDIATRIC CLINICS OF NORTH AMERICA McDonald, K. M. 2009; 56 (4): 815-?

    Abstract

    Data and well-constructed measures quantify suboptimal quality in health care and play a crucial role in improving quality. Measures are useful for three major purposes: (1) driving improvements in outcomes of care by prioritizing and selecting appropriate interventions, (2) developing comparative quality reports for consumer and payer decision making and health system accountability, and (3) creating incentives that pay for performance. This article describes the current landscape for measurement in pediatrics compared to adult care, provides a case study of the development and application of a publicly available and federally funded pediatric indicator set using routinely collected hospital discharge data, and addresses challenges and opportunities in selecting and using measures as a function of intended purpose.

    View details for DOI 10.1016/j.pcl.2009.05.009

    View details for Web of Science ID 000269933000009

    View details for PubMedID 19660629

  • Inequality in treatment use among elderly patients with acute myocardial infarction: USA, Belgium and Quebec BMC HEALTH SERVICES RESEARCH Perelman, J., Shmueli, A., McDonald, K. M., Pilote, L., Saynina, O., Closon, M. 2009; 9

    Abstract

    Previous research has provided evidence that socioeconomic status has an impact on invasive treatment use after acute myocardial infarction. In this paper, we compare the socioeconomic inequality in the use of high-technology diagnosis and treatment after acute myocardial infarction between the US, Quebec and Belgium, paying special attention to financial incentives and regulations as explanatory factors. We examined hospital-discharge abstracts for all patients older than 65 who were admitted to hospitals during the 1993-1998 period in the US, Quebec and Belgium with a primary diagnosis of acute myocardial infarction. Patients' income data were imputed from the median incomes of their residential area. For each country, we compared the risk-adjusted probability of undergoing each procedure between socioeconomic categories measured by the patient's area median income. Our findings indicate that income-related inequality exists in the use of high-technology treatment and diagnosis techniques that is not justified by differences in patients' health characteristics. Those inequalities are largely explained, in the US and Quebec, by inequalities in distances to hospitals with on-site cardiac facilities. However, in both Belgium and the US, inequalities persist among patients admitted to hospitals with on-site cardiac facilities, rejecting the hospital location effect as the single explanation for inequalities. Meanwhile, inequality levels diverge across countries (higher in the US and in Belgium, extremely low in Quebec). The findings support the hypothesis that income-related inequality in treatment for AMI exists and is likely to be affected by a country's system of health care.

    View details for DOI 10.1186/1472-6963-9-130

    View details for Web of Science ID 000269526100002

    View details for PubMedID 19643011

  • Quality Improvement Strategies for Children With Asthma: A Systematic Review ARCHIVES OF PEDIATRICS & ADOLESCENT MEDICINE Bravata, D. M., Gienger, A. L., Holty, J. C., Sundaram, V., Khazeni, N., Wise, P. H., McDonald, K. M., Owens, D. K. 2009; 163 (6): E1-E5
  • Quality Improvement Strategies for Children With Asthma: A Systematic Review ARCHIVES OF PEDIATRICS & ADOLESCENT MEDICINE Bravata, D. M., Gienger, A. L., Holty, J. C., Sundaram, V., Khazeni, N., Wise, P. H., McDonald, K. M., Owens, D. K. 2009; 163 (6): 572-581

    Abstract

    To evaluate the evidence that quality improvement (QI) strategies can improve the processes and outcomes of outpatient pediatric asthma care. Cochrane Effective Practice and Organisation of Care Group database (January 1966 to April 2006), MEDLINE (January 1966 to April 2006), Cochrane Consumers and Communication Group database (January 1966 to May 2006), and bibliographies of retrieved articles. Randomized controlled trials, controlled before-after trials, or interrupted time series trials of English-language QI evaluations. Studies must have included 1 or more QI strategies for the outpatient management of children with asthma. Clinical status (eg, spirometric measures); functional status (eg, days lost from school); and health services use (eg, hospital admissions). Seventy-nine studies met inclusion criteria: 69 included at least some component of patient education, self-monitoring, or self-management; 13 included some component of organizational change; and 7 included provider education. Self-management interventions increased symptom-free days by approximately 10 days/y (P = .02) and reduced school absenteeism by about 0.1 day/mo (P = .03). Interventions of provider education and those that incorporated organizational changes were likely to report improvements in medication use. Quality improvement interventions that provided multiple educational sessions, had longer durations, and used combinations of instructional modalities were more likely to result in improvements for patients than interventions lacking these characteristics. A variety of QI interventions improve the outcomes and processes of care for children with asthma. Use of similar outcome measures and thorough descriptions of interventions would advance the study of QI for pediatric asthma care.

    View details for Web of Science ID 000266566700011

    View details for PubMedID 19487615

  • CABG versus PCI for multivessel coronary artery disease Reply LANCET Hlatky, M. A., Bravata, D. M., McDonald, K. M., Owens, D. K. 2009; 373 (9682): 2200-2200
  • Coronary artery bypass surgery compared with percutaneous coronary interventions for multivessel disease: a collaborative analysis of individual patient data from ten randomised trials LANCET Hlatky, M. A., Boothroyd, D. B., Bravata, D. M., Boersma, E., Booth, J., Brooks, M. M., Carrie, D., Clayton, T. C., Danchin, N., Flather, M., Hamm, C. W., Hueb, W. A., Kaehler, J., Kelsey, S. F., King, S. B., Kosinski, A. S., Lopes, N., McDonald, K. M., Rodriguez, A., Serruys, P., Sigwart, U., Stables, R. H., Owens, D. K., Pocock, S. J. 2009; 373 (9670): 1190-1197

    Abstract

    Coronary artery bypass graft (CABG) and percutaneous coronary intervention (PCI) are alternative treatments for multivessel coronary disease. Although the procedures have been compared in several randomised trials, their long-term effects on mortality in key clinical subgroups are uncertain. We undertook a collaborative analysis of data from randomised trials to assess whether the effects of the procedures on mortality are modified by patient characteristics. We pooled individual patient data from ten randomised trials to compare the effectiveness of CABG with PCI according to patients' baseline clinical characteristics. We used stratified, random effects Cox proportional hazards models to test the effect on all-cause mortality of randomised treatment assignment and its interaction with clinical characteristics. All analyses were by intention to treat. Ten participating trials provided data on 7812 patients. PCI was done with balloon angioplasty in six trials and with bare-metal stents in four trials. Over a median follow-up of 5.9 years (IQR 5.0-10.0), 575 (15%) of 3889 patients assigned to CABG died compared with 628 (16%) of 3923 patients assigned to PCI (hazard ratio [HR] 0.91, 95% CI 0.82-1.02; p=0.12). In patients with diabetes (CABG, n=615; PCI, n=618), mortality was substantially lower in the CABG group than in the PCI group (HR 0.70, 0.56-0.87); however, mortality was similar between groups in patients without diabetes (HR 0.98, 0.86-1.12; p=0.014 for interaction). Patient age modified the effect of treatment on mortality, with hazard ratios of 1.25 (0.94-1.66) in patients younger than 55 years, 0.90 (0.75-1.09) in patients aged 55-64 years, and 0.82 (0.70-0.97) in patients 65 years and older (p=0.002 for interaction). Treatment effect was not modified by the number of diseased vessels or other baseline characteristics. Long-term mortality is similar after CABG and PCI in most patient subgroups with multivessel coronary artery disease, so choice of treatment should depend on patient preferences for other outcomes. CABG might be a better option for patients with diabetes and patients aged 65 years or older because we found mortality to be lower in these subgroups.

    View details for DOI 10.1016/S0140-6736(09)60552-3

    View details for Web of Science ID 000264940600033

    View details for PubMedID 19303634

  • Maternal and neonatal outcomes of elective induction of labor. Evidence report/technology assessment Caughey, A. B., Sundaram, V., Kaimal, A. J., Cheng, Y. W., Gienger, A., Little, S. E., Lee, J. F., Wong, L., Shaffer, B. L., Tran, S. H., Padula, A., McDonald, K. M., Long, E. F., Owens, D. K., Bravata, D. M. 2009: 1-257

    Abstract

    Induction of labor is on the rise in the U.S., increasing from 9.5 percent in 1990 to 22.1 percent in 2004. Although it is not entirely clear what proportion of these inductions are elective (i.e., without a medical indication), the overall rate of induction of labor is rising faster than the rate of pregnancy complications that would lead to a medically indicated induction. However, the maternal and neonatal effects of induction of labor are unclear. Many studies compare women with induction of labor to those in spontaneous labor. This is problematic, because at any point in the management of the woman with a term gestation, the clinician has the choice between induction of labor and expectant management, not spontaneous labor. Expectant management of the pregnancy involves nonintervention at any particular point in time and allowing the pregnancy to progress to a future gestational age. Thus, women undergoing expectant management may go into spontaneous labor or may require indicated induction of labor at a future gestational age. The Stanford-UCSF Evidence-Based Practice Center examined the evidence regarding four Key Questions: What evidence describes the maternal risks of elective induction versus expectant management? What evidence describes the fetal/neonatal risks of elective induction versus expectant management? What is the evidence that certain physical conditions/patient characteristics are predictive of a successful induction of labor? How is a failed induction defined? We performed a systematic review to answer the Key Questions. We searched MEDLINE (1966-2007) and bibliographies of prior systematic reviews and the included studies for English-language studies of maternal and fetal outcomes after elective induction of labor. We evaluated the quality of included studies. When possible, we synthesized study data using random effects models. We also evaluated the potential clinical outcomes and cost-effectiveness of elective induction of labor versus expectant management of pregnancy at 41, 40, and 39 weeks' gestation using decision-analytic models. Our searches identified 3,722 potentially relevant articles, of which 76 articles met inclusion criteria. Nine RCTs compared expectant management with elective induction of labor. We found that overall, expectant management of pregnancy was associated with an approximately 22 percent higher odds of cesarean delivery than elective induction of labor (OR 1.22, 95 percent CI 1.07-1.39; absolute risk difference 1.9, 95 percent CI: 0.2-3.7 percent). The majority of these studies were in women at or beyond 41 weeks of gestation (OR 1.21, 95 percent CI 1.01-1.46). In studies of women at or beyond 41 weeks of gestation, the evidence was rated as moderate because of the size and number of studies and consistency of the findings. Among women less than 41 weeks of gestation, there were three trials which reported no difference in risk of cesarean delivery among women who were induced as compared to expectant management (OR 1.73; 95 percent CI: 0.67-4.5, P=0.26), but all of these trials were small, non-U.S., older, and of poor quality. When we stratified the analysis by country, we found that the odds of cesarean delivery were higher in women who were expectantly managed compared to elective induction of labor in studies conducted outside the U.S. (OR 1.22; 95 percent CI 1.05-1.40) but were not statistically different in studies conducted in the U.S. (OR 1.28; 95 percent CI 0.65-2.49). 
Women who were expectantly managed were also more likely to have meconium-stained amniotic fluid than those who were electively induced (OR 2.04; 95 percent CI: 1.34-3.09). Observational studies reported a consistently lower risk of cesarean delivery among women who underwent spontaneous labor (6 percent) compared with women who had an elective induction of labor (8 percent) with a statistically significant decrease when combined (OR 0.63; 95 percent CI: 0.49-0.79), but again utilized the wrong control group and did not appropriately adjust for gestational age. We found moderate to high quality evidence that increased parity, a more favorable cervical status as assessed by a higher Bishop score, and decreased gestational age were associated with successful labor induction (58 percent of the included studies defined success as achieving a vaginal delivery anytime after the onset of the induction of labor; in these instances, induction was considered a failure when it led to a cesarean delivery). In the decision analytic model, we utilized a baseline assumption of no difference in cesarean delivery between the two arms as there was no statistically significant difference in the U.S. studies or in women prior to 41 0/7 weeks of gestation. In each of the models, women who were electively induced had better overall outcomes among both mothers and neonates as estimated by total quality-adjusted life years (QALYs) as well as by reduction in specific perinatal outcomes such as shoulder dystocia, meconium aspiration syndrome, and preeclampsia. Additionally, induction of labor was cost-effective at $10,789 per QALY with elective induction of labor at 41 weeks of gestation, $9,932 per QALY at 40 weeks of gestation, and $20,222 per QALY at 39 weeks of gestation utilizing a cost-effectiveness threshold of $50,000 per QALY. At 41 weeks of gestation, these results were generally robust to variations in the assumed ranges in univariate and multi-way sensitivity analyses. However, the findings of cost-effectiveness at 40 and 39 weeks of gestation were not robust to the ranges of the assumptions. In addition, the strength of evidence for some model inputs was low, therefore our analyses are exploratory rather than definitive.Randomized controlled trials suggest that elective induction of labor at 41 weeks of gestation and beyond may be associated with a decrease in both the risk of cesarean delivery and of meconium-stained amniotic fluid. The evidence regarding elective induction of labor prior to 41 weeks of gestation is insufficient to draw any conclusion. There is a paucity of information from prospective RCTs examining other maternal or neonatal outcomes in the setting of elective induction of labor. Observational studies found higher rates of cesarean delivery with elective induction of labor, but compared women undergoing induction of labor to women in spontaneous labor and were subject to potential confounding bias, particularly from gestational age. Such studies do not inform the question of how elective induction of labor affects maternal or neonatal outcomes. Elective induction of labor at 41 weeks of gestation and potentially earlier also appears to be a cost-effective intervention, but because of the need for further data to populate these models our analyses are not definitive. 
    Despite the evidence from the prospective RCTs reported above, there are concerns about the translation of such findings into actual practice; thus, there is a great need for studying the translation of such research into settings where the majority of obstetric care is provided.

    View details for PubMedID 19408970
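
    The "$ per QALY" figures quoted above reduce to an incremental cost-effectiveness ratio (ICER): the extra cost of elective induction divided by the extra quality-adjusted life years it yields relative to expectant management, compared against a willingness-to-pay threshold. The numbers below are made-up placeholders chosen only to show the arithmetic; they are not the evidence report's model inputs.

```python
# Hedged sketch of the ICER arithmetic behind a "$ per QALY" result.
# All inputs are illustrative placeholders, not values from the evidence report.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

cost_induction, qaly_induction = 9_200.0, 56.420   # elective induction arm (assumed)
cost_expectant, qaly_expectant = 9_000.0, 56.401   # expectant management arm (assumed)

ratio = icer(cost_induction, qaly_induction, cost_expectant, qaly_expectant)
threshold = 50_000  # willingness-to-pay per QALY used in the report
print(f"ICER = ${ratio:,.0f} per QALY; cost-effective at threshold: {ratio < threshold}")
```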

  • Isolated Disease of the Proximal Left Anterior Descending Artery: Comparing the Effectiveness of Percutaneous Coronary Interventions and Coronary Artery Bypass Surgery JACC-CARDIOVASCULAR INTERVENTIONS Kapoor, J. R., Gienger, A. L., Ardehali, R., Varghese, R., Perez, M. V., Sundaram, V., McDonald, K. M., Owens, D. K., Hlatky, M. A., Bravata, D. M. 2008; 1 (5): 483-491

    Abstract

    This study sought to systematically compare the effectiveness of percutaneous coronary intervention and coronary artery bypass surgery in patients with single-vessel disease of the proximal left anterior descending (LAD) coronary artery. It is uncertain whether percutaneous coronary interventions (PCI) or coronary artery bypass grafting (CABG) surgery provides better clinical outcomes among patients with single-vessel disease of the proximal LAD. We searched relevant databases (MEDLINE, EMBASE, and Cochrane from 1966 to 2006) to identify randomized controlled trials that compared outcomes for patients with single-vessel proximal LAD assigned to either PCI or CABG. We identified 9 randomized controlled trials that enrolled a total of 1,210 patients (633 received PCI and 577 received CABG). There were no differences in survival at 30 days, 1 year, or 5 years, nor were there differences in the rates of procedural strokes or myocardial infarctions, whereas the rate of repeat revascularization was significantly less after CABG than after PCI (at 1 year: 7.3% vs. 19.5%; at 5 years: 7.3% vs. 33.5%). Angina relief was significantly greater after CABG than after PCI (at 1 year: 95.5% vs. 84.6%; at 5 years: 84.2% vs. 75.6%). Patients undergoing CABG spent 3.2 more days in the hospital than those receiving PCI (95% confidence interval: 2.3 to 4.1 days, p < 0.0001), required more transfusions, and were more likely to have arrhythmias immediately post-procedure. In patients with single-vessel, proximal LAD disease, survival was similar in CABG-assigned and PCI-assigned patients; CABG was significantly more effective in relieving angina and led to fewer repeat revascularizations.

    View details for DOI 10.1016/j.jcin.2008.07.001

    View details for Web of Science ID 000207586300004

    View details for PubMedID 19463349

  • Preliminary assessment of pediatric health care quality and patient safety in the United States using readily available administrative data PEDIATRICS McDonald, K. M., Davies, S. M., Haberland, C. A., Geppert, J. J., Ku, A., Romano, P. S. 2008; 122 (2): E416-E425

    Abstract

    With >6 million hospital stays, costing almost $50 billion annually, hospitalized children represent an important population for which most inpatient quality indicators are not applicable. Our aim was to develop indicators using inpatient administrative data to assess aspects of the quality of inpatient pediatric care and access to quality outpatient care. We adapted the Agency for Healthcare Research and Quality quality indicators, a publicly available set of measurement tools refined previously by our team, for a pediatric population. We systematically reviewed the literature for evidence regarding coding and construct validity specific to children. We then convened 4 expert panels to review and discuss the evidence and asked them to rate each indicator through a 2-stage modified Delphi process. From the 2000 and 2003 Agency for Healthcare Research and Quality Healthcare Cost and Utilization Project Kids' Inpatient Database, we generated national estimates for provider-level indicators and for area-level indicators. Panelists recommended 18 indicators for inclusion in the pediatric quality indicator set based on overall usefulness for quality improvement efforts. The indicators included 13 hospital-level indicators, including 11 based on complications, 1 based on mortality, and 1 based on volume, as well as 5 area-level potentially preventable hospitalization indicators. National rates for all 18 of the indicators varied minimally between years. Rates in high-risk strata are notably higher than in the overall groups: in 2003 the decubitus ulcer pediatric quality indicator rate was 3.12 per 1000, whereas patients with limited mobility experienced a rate of 22.83. Trends in rates by age varied across pediatric quality indicators: short-term complications of diabetes increased with age, whereas admissions for gastroenteritis decreased with age. Tracking potentially preventable complications and hospitalizations has the potential to help prioritize quality improvement efforts at both local and national levels, although additional validation research is needed to confirm the accuracy of coding.

    View details for DOI 10.1542/peds.2007-2477

    View details for Web of Science ID 000258142500062

    View details for PubMedID 18676529

  • Modeling the logistics of response to anthrax bioterrorism MEDICAL DECISION MAKING Zaric, G. S., Bravata, D. M., Holty, J. C., McDonald, K. M., Owens, D. K., Brandeau, M. L. 2008; 28 (3): 332-350

    Abstract

    A bioterrorism attack with an agent such as anthrax will require rapid deployment of medical and pharmaceutical supplies to exposed individuals. How should such a logistical system be organized? How much capacity should be built into each element of the bioterrorism response supply chain? The authors developed a compartmental model to evaluate the costs and benefits of various strategies for preattack stockpiling and postattack distribution and dispensing of medical and pharmaceutical supplies, as well as the benefits of rapid attack detection. The authors show how the model can be used to address a broad range of logistical questions as well as related, nonlogistical questions (e.g., the cost-effectiveness of strategies to improve patient adherence to antibiotic regimens). They generate several key insights about appropriate strategies for local communities. First, stockpiling large local inventories of medical and pharmaceutical supplies is unlikely to be the most effective means of reducing mortality from an attack, given the availability of national and regional supplies. Instead, communities should create sufficient capacity for dispensing prophylactic antibiotics in the event of a large-scale bioterror attack. Second, improved surveillance systems can significantly reduce deaths from such an attack but only if the local community has sufficient antibiotic-dispensing capacity. Third, mortality from such an attack is significantly affected by the number of unexposed individuals seeking prophylaxis and treatment. Fourth, full adherence to treatment regimens is critical for reducing expected mortality. Effective preparation for response to potential bioterror attacks can avert deaths in the event of an attack. Models such as this one can help communities more effectively prepare for response to potential bioterror attacks.

    View details for DOI 10.1177/0272989X07312721

    View details for Web of Science ID 000256264500006

    View details for PubMedID 18349432
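
    The abstract above turns on a compartmental logistics idea: exposed people queue for prophylaxis, daily dispensing capacity moves them out of the at-risk pool, and anyone still waiting can progress to symptomatic disease. The toy discrete-time simulation below illustrates that trade-off only; every rate, capacity value, and population size is invented for the example, and the published model is far richer.

```python
# Hedged sketch of a dispensing-capacity compartment model (not the published model).
# All parameter values below are invented for illustration only.
def simulate(exposed=50_000, capacity_per_day=5_000, p_symptomatic=0.05, days=14):
    """Track how many exposed people fall ill while waiting for prophylaxis."""
    waiting, protected, symptomatic = exposed, 0.0, 0.0
    for _ in range(days):
        progressed = waiting * p_symptomatic          # fall ill before being reached
        waiting -= progressed
        symptomatic += progressed
        dispensed = min(capacity_per_day, waiting)    # limited by daily dispensing capacity
        waiting -= dispensed
        protected += dispensed
    return protected, symptomatic, waiting

for cap in (2_000, 5_000, 10_000):
    protected, sick, still_waiting = simulate(capacity_per_day=cap)
    print(f"capacity {cap:>6}/day -> protected {protected:,.0f}, symptomatic {sick:,.0f}")
```

    Even in this crude form, raising dispensing capacity shifts people from the symptomatic to the protected compartment, which mirrors the paper's point that local dispensing capacity matters more than local stockpiles.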

  • Implementing Effective Hypertension Quality Improvement Strategies: Barriers and Potential Solutions JOURNAL OF CLINICAL HYPERTENSION Walsh, J. M., Sundaram, V., McDonald, K., Owens, D. K., Goldstein, M. K. 2008; 10 (4): 311-316

    Abstract

    Many quality improvement strategies have focused on improving blood pressure control, and these strategies can target the patient, the provider, and/or the system. Strategies that seem to have the biggest effect on blood pressure outcomes are team change, patient education, facilitated relay of clinical information, and promotion of self-management. Barriers to effective blood pressure control can affect the patient, the physician, the system, and/or "cues to action." We review the barriers to achieving blood pressure control and describe current and potential creative strategies for optimizing blood pressure control. These include home-based disease management, combined patient and provider education, and automatic decision support systems. Future research must address which components of quality improvement interventions are most successful in achieving blood pressure control.

    View details for Web of Science ID 000261099600008

    View details for PubMedID 18401229

  • Systematic review: The comparative effectiveness of percutaneous coronary interventions and coronary artery bypass graft surgery ANNALS OF INTERNAL MEDICINE Bravata, D. M., Gienger, A. L., McDonald, K. M., Sundaram, V., Perez, M. V., Varghese, R., Kapoor, J. R., Ardehali, R., Owens, D. K., Hlatky, M. A. 2007; 147 (10): 703-U139

    Abstract

    The comparative effectiveness of coronary artery bypass graft (CABG) surgery and percutaneous coronary intervention (PCI) for patients in whom both procedures are feasible remains poorly understood. To compare the effectiveness of PCI and CABG in patients for whom coronary revascularization is clinically indicated. MEDLINE, EMBASE, and Cochrane databases (1966-2006); conference proceedings; and bibliographies of retrieved articles. Randomized, controlled trials (RCTs) reported in any language that compared clinical outcomes of PCI with those of CABG, and selected observational studies. Information was extracted on study design, sample characteristics, interventions, and clinical outcomes. The authors identified 23 RCTs in which 5019 patients were randomly assigned to PCI and 4944 patients were randomly assigned to CABG. The difference in survival after PCI or CABG was less than 1% over 10 years of follow-up. Survival did not differ between PCI and CABG for patients with diabetes in the 6 trials that reported on this subgroup. Procedure-related strokes were more common after CABG than after PCI (1.2% vs. 0.6%; risk difference, 0.6%; P = 0.002). Angina relief was greater after CABG than after PCI, with risk differences ranging from 5% to 8% at 1 to 5 years (P < 0.001). The absolute rates of angina relief at 5 years were 79% after PCI and 84% after CABG. Repeated revascularization was more common after PCI than after CABG (risk difference, 24% at 1 year and 33% at 5 years; P < 0.001); the absolute rates at 5 years were 46.1% after balloon angioplasty, 40.1% after PCI with stents, and 9.8% after CABG. In the observational studies, the CABG-PCI hazard ratio for death favored PCI among patients with the least severe disease and CABG among those with the most severe disease. The RCTs were conducted in leading centers in selected patients. The authors could not assess whether comparative outcomes vary according to clinical factors, such as extent of coronary disease, ejection fraction, or previous procedures. Only 1 small trial used drug-eluting stents. Compared with PCI, CABG was more effective in relieving angina and led to fewer repeated revascularizations but had a higher risk for procedural stroke. Survival to 10 years was similar for both procedures.

    View details for Web of Science ID 000251259500005

    View details for PubMedID 17938385

  • Inhalational, gastrointestinal, and cutaneous anthrax in children ARCHIVES OF PEDIATRICS & ADOLESCENT MEDICINE Bravata, D. M., Holty, J. C., Wang, E., Lewis, R., Wise, P. H., McDonald, K. M., Owens, D. K. 2007; 161 (9): 896-905

    Abstract

    To systematically review all published case reports of children with anthrax to evaluate the predictors of disease progression and mortality. Fourteen selected journal indexes (1900-1966), MEDLINE (1966-2005), and the bibliographies of all retrieved articles. Case reports (any language) of anthrax in persons younger than 18 years published between January 1, 1900, and December 31, 2005. Main exposures: cases with symptoms and culture or Gram stain or autopsy evidence of anthrax infection. Disease progression, treatment responses, and mortality. Of 2499 potentially relevant articles, 73 case reports of pediatric anthrax (5 inhalational cases, 22 gastrointestinal cases, 37 cutaneous cases, 6 cases of primary meningoencephalitis, and 3 atypical cases) met the inclusion criteria. Only 10% of the patients were younger than 2 years, and 24% were girls. Of the few children with inhalational anthrax, none had nonheadache neurologic symptoms, a key finding that distinguishes adult inhalational anthrax from more common illnesses, such as influenza. Overall, observed mortality was 60% (3 of 5) for inhalational anthrax, 65% (13 of 20) for gastrointestinal anthrax, 14% (5 of 37) for cutaneous anthrax, and 100% (6 of 6) for primary meningoencephalitis. Nineteen of the 30 children (63%) who received penicillin-based antibiotics survived, and 9 of the 11 children (82%) who received anthrax antiserum survived. The clinical presentation of children with anthrax is varied. The mortality rate is high in children with inhalational anthrax, gastrointestinal anthrax, and anthrax meningoencephalitis. Rapid diagnosis and effective treatment of anthrax in children requires recognition of the broad spectrum of clinical presentations of pediatric anthrax.

    View details for Web of Science ID 000249156800013

    View details for PubMedID 17768291

  • Why rescue the administrative data version of the "failure to rescue" quality indicator MEDICAL CARE McDonald, K. M., Davies, S. M., Geppert, J., Romano, P. S. 2007; 45 (4): 277-279

    View details for Web of Science ID 000245701600001

    View details for PubMedID 17496708

  • Quality improvement strategies for type 2 diabetes - Reply JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION Shojania, K. G., Ranji, S. R., McDonald, K. M., Grimshaw, J. M., Rushakoff, R. J., Owens, D. K. 2006; 296 (22): 2681-2681
  • Pediatric anthrax: implications for bioterrorism preparedness. Evidence report/technology assessment Bravata, D. M., Wang, E., Holty, J., Lewis, R., Wise, P. H., Nayak, S., Liu, H., McDonald, K. M., Owens, D. K. 2006: 1-48

    Abstract

    To systematically review the literature about children with anthrax to describe their clinical course, treatment responses, and the predictors of disease progression and mortality. MEDLINE (1966-2005), 14 selected journal indexes (1900-1966), and bibliographies of all retrieved articles. We sought case reports of pediatric anthrax published between 1900 and 2005 meeting predefined criteria. We abstracted three types of data from the English-language reports: (1) patient information (e.g., age, gender, nationality); (2) symptom and disease progression information (e.g., whether the patient developed meningitis); and (3) treatment information (e.g., treatments received, year of treatment). We compared the clinical symptoms and disease progression variables for the pediatric cases with data on adult anthrax cases reviewed previously. We identified 246 titles of potentially relevant articles from our MEDLINE search and 2253 additional references from our manual search of the bibliographies of retrieved articles and the indexes of the 14 selected journals. We included 62 case reports of pediatric anthrax, including two inhalational cases, 20 gastrointestinal cases, 37 cutaneous cases, and three atypical cases. Anthrax is a relatively common and historically well-recognized disease and yet rarely reported among children, suggesting the possibility of significant under-diagnosis, underreporting, and/or publication bias. Children with anthrax present with a wide range of clinical signs and symptoms, which differ somewhat from the presenting features of adults with anthrax. Like adults, children with gastrointestinal anthrax have two distinct clinical presentations: upper tract disease characterized by dysphagia and oropharyngeal findings, and lower tract disease characterized by fever, abdominal pain, and nausea and vomiting. Additionally, children with inhalational disease may have "atypical" presentations, including primary meningoencephalitis. Children with inhalational anthrax have abnormal chest roentgenograms; however, children with other forms of anthrax usually have normal roentgenograms. Nineteen of the 30 children (63%) who received penicillin-based antibiotics survived, whereas nine of 11 children (82%) who received anthrax antiserum survived. There is a broad spectrum of clinical signs and symptoms associated with pediatric anthrax. The limited data available regarding disease progression and treatment responses for children infected with anthrax suggest some differences from adult populations. Preparedness planning efforts should specifically address the needs of pediatric victims.

    View details for PubMedID 17764208

  • Effects of quality improvement strategies for type 2 diabetes on glycemic control - A meta-regression analysis JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION Shojania, K. G., Ranji, S. R., McDonald, K. M., Grimshaw, J. M., Sundaram, V., Rushakoff, R. J., Owens, D. K. 2006; 296 (4): 427-440

    Abstract

    There have been numerous reports of interventions designed to improve the care of patients with diabetes, but the effectiveness of such interventions is unclear. To assess the impact on glycemic control of 11 distinct strategies for quality improvement (QI) in adults with type 2 diabetes. MEDLINE (1966-April 2006) and the Cochrane Collaboration's Effective Practice and Organisation of Care Group database, which covers multiple bibliographic databases. Eligible studies included randomized or quasi-randomized controlled trials and controlled before-after studies that evaluated a QI intervention targeting some aspect of clinician behavior or organizational change and reported changes in glycosylated hemoglobin (HbA1c) values. Postintervention differences in HbA1c values were estimated using a meta-regression model that included baseline glycemic control and other key intervention and study features as predictors. Fifty randomized controlled trials, 3 quasi-randomized trials, and 13 controlled before-after trials met all inclusion criteria. Across these 66 trials, interventions reduced HbA1c values by a mean of 0.42% (95% confidence interval [CI], 0.29%-0.54%) over a median of 13 months of follow-up. Trials with fewer patients than the median for all included trials reported significantly greater effects than did larger trials (0.61% vs 0.27%, P = .004), strongly suggesting publication bias. Trials with mean baseline HbA1c values of 8.0% or greater also reported significantly larger effects (0.54% vs 0.20%, P = .005). Adjusting for these effects, 2 of the 11 categories of QI strategies were associated with reductions in HbA1c values of at least 0.50%: team changes (0.67%; 95% CI, 0.43%-0.91%; n = 26 trials) and case management (0.52%; 95% CI, 0.31%-0.73%; n = 26 trials); these also represented the only 2 strategies conferring significant incremental reductions in HbA1c values. Interventions involving team changes reduced values by 0.33% more (95% CI, 0.12%-0.54%; P = .004) than those without this strategy, and those involving case management reduced values by 0.22% more (95% CI, 0.00%-0.44%; P = .04) than those without case management. Interventions in which nurse or pharmacist case managers could make medication adjustments without awaiting physician authorization reduced values by 0.80% (95% CI, 0.51%-1.10%), vs only 0.32% (95% CI, 0.14%-0.49%) for all other interventions (P = .002). Most QI strategies produced small to modest improvements in glycemic control. Team changes and case management showed more robust improvements, especially for interventions in which case managers could adjust medications without awaiting physician approval. Estimates of the effectiveness of other specific QI strategies may have been limited by difficulty in classifying complex interventions, insufficient numbers of studies, and publication bias.

    View details for Web of Science ID 000239242500029

    View details for PubMedID 16868301
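
    The meta-regression described above can be thought of as a weighted least-squares fit of each trial's HbA1c reduction on trial-level covariates (baseline HbA1c, whether the intervention included team changes or case management), with weights inversely proportional to each trial's variance. The sketch below shows that general approach only; the synthetic trial table, variable names, and coefficients are assumptions for the example, not the study's dataset or its exact model.

```python
# Hedged sketch: meta-regression of trial-level HbA1c reductions on study features.
# Trial data and covariate names are synthetic assumptions, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_trials = 66
trials = pd.DataFrame({
    "baseline_hba1c": rng.normal(8.3, 0.8, n_trials),
    "team_change":    rng.binomial(1, 0.4, n_trials),
    "case_mgmt":      rng.binomial(1, 0.4, n_trials),
    "se":             rng.uniform(0.05, 0.3, n_trials),   # standard error of each trial's effect
})
# Synthetic observed effect: reduction in HbA1c (percentage points) per trial.
trials["effect"] = (0.2 + 0.3 * (trials["baseline_hba1c"] - 8.0)
                    + 0.33 * trials["team_change"] + 0.22 * trials["case_mgmt"]
                    + rng.normal(0, trials["se"]))

# Inverse-variance weighted least squares: a simple fixed-effect meta-regression.
fit = smf.wls("effect ~ baseline_hba1c + team_change + case_mgmt",
              data=trials, weights=1.0 / trials["se"] ** 2).fit()
print(fit.summary().tables[1])
```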

  • Quality improvement strategies for hypertension management - A systematic review MEDICAL CARE Walsh, J. M., McDonald, K. M., Shojania, K. G., Sundaram, V., Nayak, S., Lewis, R., Owens, D. K., Goldstein, M. K. 2006; 44 (7): 646-657

    Abstract

    Care remains suboptimal for many patients with hypertension. The purpose of this study was to assess the effectiveness of quality improvement (QI) strategies in lowering blood pressure. MEDLINE, Cochrane databases, and article bibliographies were searched for this study. Trials, controlled before-after studies, and interrupted time series evaluating QI interventions targeting hypertension control and reporting blood pressure outcomes were studied. Two reviewers abstracted data and classified QI strategies into categories: provider education, provider reminders, facilitated relay of clinical information, patient education, self-management, patient reminders, audit and feedback, team change, or financial incentives. Forty-four articles reporting 57 comparisons underwent quantitative analysis. Patients in the intervention groups experienced median reductions in systolic blood pressure (SBP) and diastolic blood pressure (DBP) that were 4.5 mm Hg (interquartile range [IQR]: 1.5 to 11.0) and 2.1 mm Hg (IQR: -0.2 to 5.0) greater than observed for control patients. Median increases in the percentage of individuals achieving target goals for SBP and DBP were 16.2% (IQR: 10.3 to 32.2) and 6.0% (IQR: 1.5 to 17.5). Interventions that included team change as a QI strategy were associated with the largest reductions in blood pressure outcomes. All team change studies included assignment of some responsibilities to a health professional other than the patient's physician. Not all QI strategies have been assessed equally, which limits the power to compare differences in effects between strategies. QI strategies are associated with improved hypertension control. A focus on hypertension by someone in addition to the patient's physician was associated with substantial improvement. Future research should examine the contributions of individual QI strategies and their relative costs.

    View details for Web of Science ID 000238806300006

    View details for PubMedID 16799359

  • Systematic review: A century of inhalational anthrax cases from 1900 to 2005 ANNALS OF INTERNAL MEDICINE Holty, J. E., Bravata, D. M., Liu, H., Olshen, R. A., McDonald, K. M., Owens, D. K. 2006; 144 (4): 270-280

    Abstract

    Mortality from inhalational anthrax during the 2001 U.S. attack was substantially lower than that reported historically. To systematically review all published inhalational anthrax case reports to evaluate the predictors of disease progression and mortality. MEDLINE (1966-2005), 14 selected journal indexes (1900-1966), and bibliographies of all retrieved articles. Case reports (in any language) between 1900 and 2005 that met predefined criteria. Two authors (1 author for non-English-language reports) independently abstracted patient data. The authors found 106 reports of 82 cases of inhalational anthrax. Mortality was statistically significantly lower for patients receiving antibiotics or anthrax antiserum during the prodromal phase of disease, multidrug antibiotic regimens, or pleural fluid drainage. Patients in the 2001 U.S. attack were less likely to die than historical anthrax case-patients (45% vs. 92%; P < 0.001) and were more likely to receive antibiotics during the prodromal phase (64% vs. 13%; P < 0.001), multidrug regimens (91% vs. 50%; P = 0.027), or pleural fluid drainage (73% vs. 11%; P < 0.001). Patients who progressed to the fulminant phase had a mortality rate of 97% (regardless of the treatment they received), and all patients with anthrax meningoencephalitis died. This was a retrospective case review of previously published heterogeneous reports. Despite advances in supportive care, fulminant-phase inhalational anthrax is usually fatal. Initiation of antibiotic or anthrax antiserum therapy during the prodromal phase is associated with markedly improved survival, although other aspects of care, differences in clinical circumstances, or unreported factors may contribute to this observed reduction in mortality. Efforts to improve early diagnosis and timely initiation of appropriate antibiotics are critical to reducing mortality.

    View details for Web of Science ID 000235543100006

    View details for PubMedID 16490913

  • Reducing mortality from anthrax bioterrorism: Strategies for stockpiling and dispensing medical and pharmaceutical supplies BIOSECURITY AND BIOTERRORISM-BIODEFENSE STRATEGY PRACTICE AND SCIENCE Bravata, D. M., Zaric, G. S., Holty, J. C., Brandeau, M. L., Wilhelm, E. R., McDonald, K. M., Owens, D. K. 2006; 4 (3): 244-262

    Abstract

    A critical question in planning a response to bioterrorism is how antibiotics and medical supplies should be stockpiled and dispensed. The objective of this work was to evaluate the costs and benefits of alternative strategies for maintaining and dispensing local and regional inventories of antibiotics and medical supplies for responses to anthrax bioterrorism. We modeled the regional and local supply chain for antibiotics and medical supplies as well as local dispensing capacity. We found that mortality was highly dependent on the local dispensing capacity, the number of individuals requiring prophylaxis, adherence to prophylactic antibiotics, and delays in attack detection. For an attack exposing 250,000 people and requiring the prophylaxis of 5 million people, expected mortality fell from 243,000 to 145,000 as the dispensing capacity increased from 14,000 to 420,000 individuals per day. At low dispensing capacities (<14,000 individuals per day), nearly all exposed individuals died, regardless of the rate of adherence to prophylaxis, delays in attack detection, or availability of local inventories. No benefit was achieved by doubling local inventories at low dispensing capacities; however, at higher dispensing capacities, the cost-effectiveness of doubling local inventories fell from 100,000 US dollars to 20,000 US dollars/life year gained as the annual probability of an attack increased from 0.0002 to 0.001. We conclude that because of the reportedly rapid availability of regional inventories, the critical determinant of mortality following anthrax bioterrorism is local dispensing capacity. Bioterrorism preparedness efforts directed at improving local dispensing capacity are required before benefits can be reaped from enhancing local inventories.

    View details for Web of Science ID 000240714200010

    View details for PubMedID 16999586

  • Overestimation of clinical diagnostic performance caused by low necropsy rates QUALITY & SAFETY IN HEALTH CARE Shojania, K. G., Burton, E. C., McDonald, K. M., Goldman, L. 2005; 14 (6): 408-413

    Abstract

    Diagnostic sensitivity is calculated as the number of correct diagnoses divided by the sum of correct diagnoses plus the number of missed or false negative diagnoses. Because missed diagnoses are generally detected during clinical follow up or at necropsy, the low necropsy rates seen in current practice may result in overestimates of diagnostic performance. Using three target conditions (aortic dissection, pulmonary embolism, and active tuberculosis), the prevalence of clinically missed cases among necropsied and non-necropsied deaths was estimated and the impact of low necropsy rates on the apparent sensitivity of antemortem diagnosis determined. After reviewing case series for each target condition, the most recent study that included cases first detected at necropsy was selected and the reported sensitivity of clinical diagnosis adjusted by estimating the total number of cases that would have been detected had all decedents undergone necropsy. These estimates were based on available data for necropsy rates, time period, country (US v non-US), and case mix. For all three target diagnoses, adjusting for the estimated prevalence of clinically missed cases among non-necropsied deaths produced sensitivity values outside the 95% confidence interval for the originally reported values, and well below sensitivities reported for the diagnostic tests that are usually used to detect these conditions. For active tuberculosis the sensitivity of antemortem diagnosis decreased from an apparent value of 96% to a corrected value of 83%, with a plausible range of 42-91%; for aortic dissection the sensitivity decreased from 86% to 74%; and for pulmonary embolism the sensitivity fell only modestly from 97% to 91% but was still lower than generally reported values of 98% or more. Failure to adjust for the prevalence of missed cases among non-necropsied deaths may substantially overstate the performance of diagnostic tests and antemortem diagnosis in general, especially for conditions with high early case fatality. (A worked illustration of this sensitivity correction follows this entry.)

    View details for DOI 10.1136/qshc.2004.011973

    View details for Web of Science ID 000233686400005

    View details for PubMedID 16326784
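
    The correction described above is, at its core, simple arithmetic: apparent sensitivity counts only the missed cases surfaced at necropsy, so additional missed cases must be imputed for decedents who were never autopsied. A minimal sketch with illustrative counts (not the paper's data) is below.

    ```python
    # A minimal sketch of the correction described above, with made-up counts. Apparent
    # sensitivity counts only missed cases surfaced at necropsy; the corrected value also
    # imputes missed cases among decedents who were never autopsied.

    def apparent_sensitivity(detected: int, missed_at_necropsy: int) -> float:
        return detected / (detected + missed_at_necropsy)

    def corrected_sensitivity(detected: int, missed_at_necropsy: int,
                              non_necropsied_deaths: int,
                              est_missed_prevalence: float) -> float:
        # Estimated number of clinically missed cases among non-necropsied deaths.
        imputed_missed = non_necropsied_deaths * est_missed_prevalence
        return detected / (detected + missed_at_necropsy + imputed_missed)

    print(apparent_sensitivity(96, 4))               # 0.96 with necropsy-detected misses only
    print(corrected_sensitivity(96, 4, 300, 0.05))   # ~0.83 once imputed misses are added
    ```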

  • Challenges in systematic reviews: Synthesis of topics related to the delivery, organization, and financing of health care ANNALS OF INTERNAL MEDICINE Bravata, D. M., McDonald, K. M., Shojania, K. G., Sundaram, V., Owens, D. K. 2005; 142 (12): 1056-1065

    Abstract

    Some important health policy topics, such as those related to the delivery, organization, and financing of health care, present substantial challenges to established methods for evidence synthesis. For example, such reviews may ask: What is the effect of for-profit versus not-for-profit delivery of care on patient outcomes? Or, which strategies are the most effective for promoting preventive care? This paper describes innovative methods for synthesizing evidence related to the delivery, organization, and financing of health care. We found 13 systematic reviews on these topics that described novel methodologic approaches. Several of these syntheses used 3 approaches: conceptual frameworks to inform problem formulation, systematic searches that included nontraditional literature sources, and hybrid synthesis methods that included simulations to address key gaps in the literature. As the primary literature on these topics expands, so will opportunities to develop additional novel methods for performing high-quality comprehensive syntheses.

    View details for Web of Science ID 000229977600005

    View details for PubMedID 15968030

  • Impacts of informal caregiver availability on long-term care expenditures in OECD countries HEALTH SERVICES RESEARCH Yoo, B. K., Bhattacharya, J., McDonald, K. M., Garber, A. M. 2004; 39 (6): 1971-1995

    Abstract

    To quantify the effects of informal caregiver availability and public funding on formal long-term care (LTC) expenditures in developed countries. Secondary data were acquired for 15 Organization for Economic Cooperation and Development (OECD) countries from 1970 to 2000. Secondary data analysis, applying fixed- and random-effects models to time-series cross-sectional data. Outcome variables are inpatient or home health LTC expenditures. Key explanatory variables are measures of the availability of informal caregivers, generosity in public funding for formal LTC, and the proportion of the elderly population in the total population. Aggregated macro data were obtained from OECD Health Data, United Nations Demographic Yearbooks, and U.S. Census Bureau International Data Base. Most of the 15 OECD countries experienced growth in LTC expenditures over the study period. The availability of a spouse caregiver, measured by male-to-female ratio among the elderly, is associated with a $28,840 (1995 U.S. dollars) annual reduction in formal LTC expenditure per additional elderly male. Availability of an adult child caregiver, measured by female labor force participation and full-time/part-time status shift, is associated with a reduction of $310 to $3,830 in LTC expenditures. These impacts on LTC expenditure vary across countries and across time within a country. The availability of an informal caregiver, particularly a spouse caregiver, is among the most important factors explaining variation in LTC expenditure growth. Long-term care policies should take into account behavioral responses: decreased public funding in LTC may lead working women to leave the labor force to provide more informal care.

    View details for Web of Science ID 000226743500004

    View details for PubMedID 15544640

  • Systematic review: Surveillance systems for early detection of bioterrorism-related diseases ANNALS OF INTERNAL MEDICINE Bravata, D. M., McDonald, K. M., Smith, W. M., Rydzak, C., Szeto, H., Buckeridge, D. L., Haberland, C., Owens, D. K. 2004; 140 (11): 910-922

    Abstract

    Given the threat of bioterrorism and the increasing availability of electronic data for surveillance, surveillance systems for the early detection of illnesses and syndromes potentially related to bioterrorism have proliferated.To critically evaluate the potential utility of existing surveillance systems for illnesses and syndromes related to bioterrorism.Databases of peer-reviewed articles (for example, MEDLINE for articles published from January 1985 to April 2002) and Web sites of relevant government and nongovernment agencies.Reports that described or evaluated systems for collecting, analyzing, or presenting surveillance data for bioterrorism-related illnesses or syndromes.From each included article, the authors abstracted information about the type of surveillance data collected; method of collection, analysis, and presentation of surveillance data; and outcomes of evaluations of the system.17,510 article citations and 8088 government and nongovernmental Web sites were reviewed. From these, the authors included 115 systems that collect various surveillance reports, including 9 syndromic surveillance systems, 20 systems collecting bioterrorism detector data, 13 systems collecting influenza-related data, and 23 systems collecting laboratory and antimicrobial resistance data. Only the systems collecting syndromic surveillance data and detection system data were designed, at least in part, for bioterrorism preparedness applications. Syndromic surveillance systems have been deployed for both event-based and continuous bioterrorism surveillance. Few surveillance systems have been comprehensively evaluated. Only 3 systems have had both sensitivity and specificity evaluated.Data from some existing surveillance systems (particularly those developed by the military) may not be publicly available.Few surveillance systems have been specifically designed for collecting and analyzing data for the early detection of a bioterrorist event. Because current evaluations of surveillance systems for detecting bioterrorism and emerging infections are insufficient to characterize the timeliness or sensitivity and specificity, clinical and public health decision making based on these systems may be compromised.

    View details for Web of Science ID 000221680600008

    View details for PubMedID 15172906

  • Regionalization of bioterrorism preparedness and response. Evidence report/technology assessment (Summary) Bravata, D. M., McDonald, K. M., Owens, D. K., Wilhelm, E. R., Brandeau, M. L., Zaric, G. S., Holty, J. E., Liu, H., Sundaram, V. 2004: 1-7

    View details for PubMedID 15133889

  • A conceptual framework for evaluating information technologies and decision support systems for bioterrorism preparedness and response MEDICAL DECISION MAKING Bravata, D. M., McDonald, K. M., Szeto, H., Smith, W. M., Rydzak, C., Owens, D. K. 2004; 24 (2): 192-206

    Abstract

    The authors sought to develop a conceptual framework for evaluating whether existing information technologies and decision support systems (IT/DSSs) would assist the key decisions faced by clinicians and public health officials preparing for and responding to bioterrorism.They reviewed reports of natural and bioterrorism related infectious outbreaks, bioterrorism preparedness exercises, and advice from experts to identify the key decisions, tasks, and information needs of clinicians and public health officials during a bioterrorism response. The authors used task decomposition to identify the subtasks and data requirements of IT/DSSs designed to facilitate a bioterrorism response. They used the results of the task decomposition to develop evaluation criteria for IT/DSSs for bioterrorism preparedness. They then applied these evaluation criteria to 341 reports of 217 existing IT/DSSs that could be used to support a bioterrorism response. Main Results: In response to bioterrorism, clinicians must make decisions in 4 critical domains (diagnosis, management, prevention, and reporting to public health), and public health officials must make decisions in 4 other domains (interpretation of bioterrorism surveillance data, outbreak investigation, outbreak control, and communication). The time horizons and utility functions for these decisions differ. From the task decomposition, the authors identified critical subtasks for each of the 8 decisions. For example, interpretation of diagnostic tests is an important subtask of diagnostic decision making that requires an understanding of the tests' sensitivity and specificity. Therefore, an evaluation criterion applied to reports of diagnostic IT/DSSs for bioterrorism asked whether the reports described the systems' sensitivity and specificity. Of the 217 existing IT/DSSs that could be used to respond to bioterrorism, 79 studies evaluated 58 systems for at least 1 performance metric.The authors identified 8 key decisions that clinicians and public health officials must make in response to bioterrorism. When applying the evaluation system to 217 currently available IT/DSSs that could potentially support the decisions of clinicians and public health officials, the authors found that the literature provides little information about the accuracy of these systems.

    View details for DOI 10.1177/0272989X04263254

    View details for Web of Science ID 000220392600008

    View details for PubMedID 15090105

  • Evaluating detection and diagnostic decision support systems for bioterrorism response EMERGING INFECTIOUS DISEASES Bravata, D. M., Sundaram, V., McDonald, K. M., Smith, W. M., Szeto, H., Schleinitz, M. D., Owens, D. K. 2004; 10 (1): 100-108

    Abstract

    We evaluated the usefulness of detection systems and diagnostic decision support systems for bioterrorism response. We performed a systematic review by searching relevant databases (e.g., MEDLINE) and Web sites for reports of detection systems and diagnostic decision support systems that could be used during bioterrorism responses. We reviewed over 24,000 citations and identified 55 detection systems and 23 diagnostic decision support systems. Only 35 systems have been evaluated: 4 reported both sensitivity and specificity, 13 were compared to a reference standard, and 31 were evaluated for their timeliness. Most evaluations of detection systems and some evaluations of diagnostic systems for bioterrorism responses are critically deficient. Because false-positive and false-negative rates are unknown for most systems, decision making on the basis of these systems is seriously compromised. We describe a framework for the design of future evaluations of such systems.

    View details for Web of Science ID 000187962800016

    View details for PubMedID 15078604

  • Changes in rates of autopsy-detected diagnostic errors over time - A systematic review JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION Shojania, K. G., Burton, E. G., McDonald, K. M., Goldman, L. 2003; 289 (21): 2849-2856

    Abstract

    Substantial discrepancies exist between clinical diagnoses and findings at autopsy. Autopsy may be used as a tool for quality management to analyze diagnostic discrepancies. To determine the rate at which autopsies detect important, clinically missed diagnoses, and the extent to which this rate has changed over time. A systematic literature search for English-language articles available on MEDLINE from 1966 to April 2002, using the search terms autopsy, postmortem changes, post-mortem, postmortem, necropsy, and posthumous, identified 45 studies reporting 53 distinct autopsy series meeting prospectively defined criteria. Reference lists were reviewed to identify additional studies, and the final bibliography was distributed to experts in the field to identify missing or unpublished studies. Included studies reported clinically missed diagnoses involving a primary cause of death (major errors), with the most serious being those likely to have affected patient outcome (class I errors). Logistic regression was performed using data from 53 distinct autopsy series over a 40-year period and adjusting for the effects of changes in autopsy rates, country, case mix (general autopsies; adult medical; adult intensive care; adult or pediatric surgery; general pediatrics or pediatric inpatients; neonatal or pediatric intensive care; and other autopsy), and important methodological features of the primary studies. Of 53 autopsy series identified, 42 reported major errors and 37 reported class I errors. Twenty-six autopsy series reported both major and class I error rates. The median error rate was 23.5% (range, 4.1%-49.8%) for major errors and 9.0% (range, 0%-20.7%) for class I errors. Analyses of diagnostic error rates adjusting for the effects of case mix, country, and autopsy rate yielded relative decreases per decade of 19.4% (95% confidence interval [CI], 1.8%-33.8%) for major errors and 33.4% (95% CI, 8.4%-51.6%) for class I errors. Despite these decreases, we estimated that a contemporary US institution (based on autopsy rates ranging from 100% [the extrapolated extreme at which clinical selection is eliminated] to 5% [roughly the national average]) could observe a major error rate from 8.4% to 24.4% and a class I error rate from 4.1% to 6.7%. The possibility that a given autopsy will reveal important unsuspected diagnoses has decreased over time, but remains sufficiently high that encouraging ongoing use of the autopsy appears warranted.

    View details for Web of Science ID 000183205500034

    View details for PubMedID 12783916

  • A national profile of patient safety in US hospitals HEALTH AFFAIRS Romano, P. S., Geppert, J. J., Davies, S., Miller, M. R., Elixhauser, A., McDonald, K. M. 2003; 22 (2): 154-166

    Abstract

    Measures based on routinely collected data would be useful to examine the epidemiology of patient safety. Extending previous work, we established the face and consensual validity of twenty Patient Safety Indicators (PSIs). We generated a national profile of patient safety by applying these PSIs to the HCUP Nationwide Inpatient Sample. The incidence of most nonobstetric PSIs increased with age and was higher among African Americans than among whites. The adjusted incidence of most PSIs was highest at urban teaching hospitals. The PSIs may be used in AHRQ's National Quality Report, while providers may use them to screen for preventable complications, target opportunities for improvement, and benchmark performance.

    View details for Web of Science ID 000181450400025

    View details for PubMedID 12674418

  • The autopsy as an outcome and performance measure. Evidence report/technology assessment (Summary) Shojania, K. G., Burton, E. C., McDonald, K. M., Goldman, L. 2002: 1-5

    View details for PubMedID 12467146

  • Utilization and outcomes of the implantable cardioverter defibrillator, 1987 to 1995 AMERICAN HEART JOURNAL Hlatky, M. A., Saynina, O., McDonald, K. M., Garber, A. M., McClellan, M. B. 2002; 144 (3): 397-403

    Abstract

    The patterns of adoption of the implantable cardioverter defibrillator (ICD) and the outcomes of its use have not been well documented in general, unselected populations. The purpose of this study was to document the impact of the ICD in widespread clinical practice. We identified ICD recipients by use of the hospital discharge databases of Medicare beneficiaries for 1987 through 1995 and of California residents for 1991 through 1995. The index admission for each patient was linked to previous and subsequent admissions and to mortality files to create a longitudinal patient profile. The rate of ICD implantations increased >10-fold between 1987 and 1995, as both the number of hospitals performing the procedure and the volume of ICD implantations per hospital rose. Mortality rates within 30 days of ICD implantation decreased from 6.0% to 1.9%, and mortality rates within 1 year fell from 19.3% to 11.4%. Surgical interventions to revise or replace the ICD within the first year remained about 5%, however, and cumulative expenditures at 1 year ($46,000-$51,000) changed very little. ICD implantation rates varied >3-fold among different regions of the United States. ICD use has expanded markedly during the study period, with improved mortality rates, but medical expenditures and rates of surgical revision remain high for ICD recipients.

    View details for DOI 10.1067/mhj.2002.125496

    View details for Web of Science ID 000178086800006

    View details for PubMedID 12228775

  • Effect of risk stratification on cost-effectiveness of the implantable cardioverter defibrillator AMERICAN HEART JOURNAL Owens, D. K., Sanders, G. D., Heidenreich, P. A., McDonald, K. M., Hlatky, M. A. 2002; 144 (3): 440-448

    Abstract

    Implantable cardioverter defibrillators (ICDs) effectively prevent sudden cardiac death, but selection of appropriate patients for implantation is complex. We evaluated whether risk stratification based on risk of sudden cardiac death alone was sufficient to predict the effectiveness and cost-effectiveness of the ICD. We developed a Markov model to evaluate the cost-effectiveness of ICD implantation compared with empiric amiodarone treatment. The model incorporated mortality rates from sudden and nonsudden cardiac death, noncardiac death, and costs for each treatment strategy. We based our model inputs on data from randomized clinical trials, registries, and meta-analyses. We assumed that the ICD reduced total mortality rates by 25%, relative to use of amiodarone. The relationship between cost-effectiveness of the ICD and the total annual cardiac mortality rate is U-shaped; cost-effectiveness becomes unfavorable at both low and high total cardiac mortality rates. If the annual total cardiac mortality rate is 12%, the cost-effectiveness of the ICD varies from $36,000 per quality-adjusted life-year (QALY) gained when the ratio of sudden cardiac death to nonsudden cardiac death is 4 to $116,000 per QALY gained when the ratio is 0.25. The cost-effectiveness of ICD use relative to amiodarone depends on total cardiac mortality rates as well as the ratio of sudden to nonsudden cardiac death. Studies of candidate diagnostic tests for risk stratification should distinguish patients who die suddenly from those who die nonsuddenly, not just patients who die suddenly from those who live. (A toy Markov cohort sketch in this spirit follows this entry.)

    View details for DOI 10.1067/mhj.2002.125501

    View details for Web of Science ID 000178086800011

    View details for PubMedID 12228780
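
    A toy Markov cohort model in the spirit of the analysis above is sketched below: annual cycles, a single alive state, separate sudden and nonsudden cardiac mortality, and a therapy assumed to act only on the sudden component. All rates, costs, and utilities are invented; this is a structural sketch, not the published model.

    ```python
    # A toy Markov cohort model: annual cycles, discounted costs and QALYs, and a therapy
    # (here a stand-in for the ICD) that reduces only the sudden-death hazard. All inputs
    # below are made up for illustration.
    def markov_ce(p_sudden, p_nonsudden, rr_sudden, annual_cost, utility,
                  years=20, disc=0.03):
        alive, qalys, cost = 1.0, 0.0, 0.0
        for t in range(years):
            df = 1.0 / (1.0 + disc) ** t          # discount factor for cycle t
            qalys += alive * utility * df
            cost += alive * annual_cost * df
            alive *= 1.0 - (p_sudden * rr_sudden + p_nonsudden)
        return qalys, cost

    q_icd, c_icd = markov_ce(0.08, 0.04, rr_sudden=0.5, annual_cost=12000, utility=0.8)
    q_ami, c_ami = markov_ce(0.08, 0.04, rr_sudden=1.0, annual_cost=6000, utility=0.8)
    icer = (c_icd - c_ami) / (q_icd - q_ami)
    print(f"ICER ~ ${icer:,.0f} per QALY gained")
    ```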

  • Risk of sudden versus nonsudden cardiac death in patients with coronary artery disease AMERICAN HEART JOURNAL Every, N., Hallstrom, A., McDonald, K. M., Parsons, L., Thom, D., Weaver, D., Hlatky, M. A. 2002; 144 (3): 390-396

    Abstract

    Patients at high risk of sudden cardiac death, yet at low risk of nonsudden death, might be ideal candidates for antiarrhythmic drugs or devices. Most previous studies of prognostic markers for sudden cardiac death have ignored the competitive risk of nonsudden cardiac death. The goal of the present study was to evaluate the ability of clinical factors to distinguish the risks of sudden and nonsudden cardiac death.We identified all deaths during a 3.3-year follow-up of 30,680 patients discharged alive after admission to the cardiac care unit of a Seattle hospital. Detailed chart reviews were conducted on 1093 subsequent out-of-hospital sudden deaths, 973 nonsudden cardiac deaths, and 442 randomly selected control patients.Patients who died in follow-up (suddenly or nonsuddenly) were significantly different for many clinical factors from control patients. In contrast, patients with sudden cardiac death were insignificantly different for most clinical characteristics from patients with nonsudden cardiac death. The mode of death was 20% to 30% less likely to be sudden in women, patients who had angioplasty or bypass surgery, and patients prescribed beta-blockers. The mode of death was 20% to 30% more likely to be sudden in patients with heart failure, frequent ventricular ectopy, or a discharge diagnosis of acute myocardial infarction. A multivariable model had only modest predictive capacity for mode of death (c-index of 0.62).Standard clinical evaluation is much better at predicting overall risk of death than at predicting the mode of death as sudden or nonsudden.

    View details for DOI 10.1067/mhj.2002.125495

    View details for Web of Science ID 000178086800005

    View details for PubMedID 12228774

  • Management of ventricular arrhythmias in diverse populations in California AMERICAN HEART JOURNAL Alexander, M., Baker, L., Clark, C., McDonald, K. M., Rowell, R., Saynina, O., Hlatky, M. A. 2002; 144 (3): 431-439

    Abstract

    The use of coronary angiography and revascularization is lower than expected among black patients. It is uncertain whether use of other cardiac procedures also varies according to race and ethnicity and whether outcomes are affected.We analyzed discharge abstracts from all nonfederal hospitals in California of patients hospitalized for a primary diagnosis of ventricular tachycardia or ventricular fibrillation between 1992 and 1994. We compared mortality rates and use of electrophysiologic study (EPS) and implantable cardioverter-defibrillator (ICD) procedures according to the race and ethnicity of the patient.Among 8713 patients admitted with ventricular tachycardia or ventricular fibrillation, 29% (n = 2508) had a subsequent EPS procedure, and 9% (n = 818) had an ICD implanted. After controlling for potential confounding factors, we found that black patients were significantly less likely than white patients to undergo EPS (odds ratio 0.72, CI 0.56-0.92) or ICD implantation (odds ratio 0.39, CI 0.25-0.60). Blacks discharged alive from the initial hospital admission had higher mortality rates over the next year than white patients, even after controlling for multiple confounding risk factors (risk ratio 1.18, CI 1.03-1.36). The use of EPS and ICD procedures was also significantly affected by several other factors, most notably by on-site procedure availability but also by age, sex, and insurance status.In a large population of patients hospitalized for ventricular arrhythmia, blacks had significantly lower rates of utilization for EPS and ICD procedures and higher subsequent mortality rates.

    View details for DOI 10.1067/mhj.2002.125500

    View details for Web of Science ID 000178086800010

    View details for PubMedID 12228779

  • Overview of randomized trials of antiarrhythmic drugs and devices for the prevention of sudden cardiac death AMERICAN HEART JOURNAL Heidenreich, P. A., Keeffe, B., McDonald, K. M., Hlatky, M. A. 2002; 144 (3): 422-430

    Abstract

    Sudden cardiac death is a prominent feature of the natural history of heart disease. The efficacy of antiarrhythmic drugs and devices in preventing sudden death and reducing total mortality is uncertain. We reviewed randomized trials and quantitative overviews of type I and type III antiarrhythmic drugs. We also reviewed the randomized trials of implantable cardioverter defibrillators and combined these outcomes in a quantitative overview. Randomized trials of type I antiarrhythmic agents used as secondary prevention after myocardial infarction show an overall 21% increase in mortality rate. Randomized trials of amiodarone suggest a 13% to 19% decrease in mortality rate, and sotalol has been effective in several small trials. Trials of pure type III agents, however, have shown no mortality benefit. An overview of implantable defibrillator trials shows a 24% reduction in mortality rate (CI 15%-33%) compared with alternative therapy, most often amiodarone. Amiodarone is effective in reducing the total mortality rate by 13% to 19%, and the implantable defibrillator reduces the mortality rate by a further 24%.

    View details for DOI 10.1067/mhj.2002.125499

    View details for Web of Science ID 000178086800009

    View details for PubMedID 12228778

  • Trends in hospital treatment of ventricular arrhythmias among Medicare beneficiaries, 1985 to 1995 AMERICAN HEART JOURNAL McDonald, K. M., Hlatky, M. A., Saynina, O., Geppert, J., Garber, A. M., McClellan, M. B. 2002; 144 (3): 413-421

    Abstract

    Treatment options for patients with ventricular arrhythmias have undergone major changes in the last 2 decades. Trends in use of invasive procedures, clinical outcomes, and expenditures have not been well documented. We used administrative databases of Medicare beneficiaries from 1985 to 1995 to identify patients hospitalized with ventricular arrhythmias. We created a longitudinal patient profile by linking the index admission with all earlier and subsequent admissions and with death records. Approximately 85,000 patients aged ≥65 years went to hospitals in the United States with ventricular arrhythmias each year, and about 20,000 lived to admission. From 1987 to 1995, the use of electrophysiology studies and implantable cardioverter defibrillators in patients who were hospitalized grew substantially, from 3% to 22% and from 1% to 13%, respectively. Hospital expenditures rose 8% per year, primarily because of the increased use of invasive procedures. Survival improved, particularly in the medium term, with 1-year survival rates increasing between 1987 and 1994 from 52.9% to 58.3%, or half a percentage point each year. Survival of patients who sustain a ventricular arrhythmia is poor, but improving. For patients who are admitted, more intensive treatment has been accompanied by increased hospital expenditures.

    View details for DOI 10.1067/mhj.2002.125498

    View details for Web of Science ID 000178086800008

    View details for PubMedID 12228777

  • Life after a ventricular arrhythmia AMERICAN HEART JOURNAL Hsu, J., Uratsu, C., Truman, A., Quesenberry, C., McDonald, K. M., Hlatky, M. A., Selby, J. 2002; 144 (3): 404-412

    Abstract

    There are few data from community-based evaluations of outcomes after a life-threatening ventricular arrhythmia (LTVA). We evaluated patients' quality of life (QOL) and medical costs after hospitalization and treatment for their first episode of an LTVA.We prospectively evaluated QOL by use of the Duke Activity Status Index (DASI), Medical Outcomes Study SF-36 mental health and vitality scales, the Cardiac Arrhythmia Suppression Trial (CAST) symptom scale, and resource use in patients discharged after a first episode of an LTVA in a managed care population of 2.4 million members.We enrolled 264 subjects with new cases of LTVA. Although functional status initially decreased compared with self-reports of pre-event functional status, both functional status and symptom levels improved significantly during the study period. These improvements were greater in patients receiving an implantable cardioverter defibrillator (ICD) than in patients receiving amiodarone. Ratings of mental health and vitality were not significantly different between the treatment groups and did not change significantly during follow-up. The total 2-year medical costs were higher for patients receiving an ICD than for patients receiving amiodarone, despite lower costs during the follow-up period for the patients receiving an ICD.New onset of an LTVA has a substantial negative initial impact on QOL. With therapy, most patients have improvements in their QOL and symptom level, possibly more so after treatment with an ICD. The costs of treating these patients are very high.

    View details for DOI 10.1067/mhj.2002.125497

    View details for Web of Science ID 000178086800007

    View details for PubMedID 12228776

  • Safe but sound - Patient safety meets evidence-based medicine JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION Shojania, K. G., Duncan, B. W., McDonald, K. M., Wachter, R. M. 2002; 288 (4): 508-513

    View details for Web of Science ID 000176955400037

    View details for PubMedID 12132985

  • Bioterrorism preparedness and response: use of information technologies and decision support systems. Evidence report/technology assessment (Summary) Bravata, D. M., McDonald, K., Owens, D. K., Buckeridge, D., Haberland, C., Rydzak, C., Schleinitz, M., Smith, W. M., Szeto, H., Wilkening, D., Musen, M., Duncan, B. W., Nouri, B., Dangiolo, M. B., Liu, H., Shofer, S., Graham, J., Davies, S. 2002: 1-8

    View details for PubMedID 12154489

  • Effectiveness and cost-effectiveness of implantable cardioverter defibrillators in the treatment of ventricular arrhythmias among Medicare beneficiaries AMERICAN JOURNAL OF MEDICINE Weiss, J. P., Saynina, O., McDonald, K. M., McClellan, M. B., Hlatky, M. A. 2002; 112 (7): 519-527

    Abstract

    The implantable cardioverter defibrillator has been assessed in randomized trials, but the generalizability of trial results to broader clinical settings is unclear. Our purpose was to evaluate the outcomes and costs of defibrillator use in an unselected population. We identified 125,892 Medicare patients who were discharged between 1987 and 1995 after hospitalization with a primary diagnosis of ventricular tachycardia or ventricular fibrillation, 7789 of whom (6.2%) received a defibrillator. We used a multivariable propensity score that included patient and hospital characteristics to match pairs of patients, in which one patient received a defibrillator and the other did not. We compared mortality and costs in these 7612 matched pairs during 8 years of follow-up. Patients who received a defibrillator were more likely to be younger, white, male, and urban dwelling, and to have ischemic heart disease, heart failure, or a history of ventricular fibrillation. In the matched-pairs analysis, those who received a defibrillator had significantly lower mortality: 11% versus 19% at 1 year (odds ratio [OR] = 0.57; 95% confidence interval [CI]: 0.51 to 0.63), 20% versus 30% at 2 years (OR = 0.66; 95% CI: 0.60 to 0.72), and 28% versus 39% at 3 years (OR = 0.70; 95% CI: 0.63 to 0.77). These patients also had lower mortality at 8 years (P = 0.0001), although this advantage over patients who received medical treatment only decreased over time. Expenditures among defibrillator recipients were consistently higher, with a cost-effectiveness ratio of $78,400 per life-year gained. The use of implantable defibrillators was associated with significantly lower mortality and higher costs, whereas the cost-effectiveness ratio was higher than that of many, but not all, generally accepted therapies. (A schematic propensity-score matching sketch follows this entry.)

    View details for Web of Science ID 000175594300001

    View details for PubMedID 12015242
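
    The matched-pairs design described above can be sketched as: model each patient's probability of receiving a defibrillator from observed characteristics, then greedily pair each treated patient with the untreated patient whose score is closest. Everything below (covariates, coefficients, sample size) is simulated for illustration and is not the study's data or code.

    ```python
    # A schematic of propensity-score matching: estimate treatment propensity with logistic
    # regression, then form greedy 1:1 nearest-neighbor pairs on the score.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "age": rng.normal(75, 6, n),
        "male": rng.integers(0, 2, n),
        "heart_failure": rng.integers(0, 2, n),
    })
    # Treatment assignment depends on covariates (confounding by indication).
    logit = -0.02 * (df["age"] - 75) + 0.8 * df["male"] + 0.5 * df["heart_failure"] - 1.0
    df["icd"] = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    covars = ["age", "male", "heart_failure"]
    df["ps"] = LogisticRegression().fit(df[covars], df["icd"]).predict_proba(df[covars])[:, 1]

    # Greedy 1:1 nearest-neighbor matching on the propensity score.
    controls = df[~df["icd"]].copy()
    pairs = []
    for i, t in df[df["icd"]].iterrows():
        j = (controls["ps"] - t["ps"]).abs().idxmin()
        pairs.append((i, j))
        controls = controls.drop(j)
    print(f"{len(pairs)} matched pairs formed")
    ```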

  • HIM's role in monitoring patient safety. Journal of AHIMA / American Health Information Management Association Romano, P. S., Elixhauser, A., McDonald, K. M., Miller, M. R. 2002; 73 (3): 72-74

    View details for PubMedID 11905078

  • Potential cost-effectiveness of prophylactic use of the implantable cardioverter defibrillator or amiodarone after myocardial infarction ANNALS OF INTERNAL MEDICINE Sanders, G. D., Hlatky, M. A., Every, N. R., McDonald, K. M., Heidenreich, P. A., Parsons, L. S., Owens, D. K. 2001; 135 (10): 870-883

    Abstract

    Clinical trials have shown that implantable cardioverter defibrillators (ICDs) improve survival in patients with sustained ventricular arrhythmias. To determine the efficacy necessary to make prophylactic ICD or amiodarone therapy cost-effective in patients with myocardial infarction. Markov model-based cost utility analysis. Survival, cardiac death, and inpatient costs were estimated on the basis of the Myocardial Infarction Triage and Intervention registry. Other data were derived from the literature. Patients with past myocardial infarction who did not have sustained ventricular arrhythmia. Lifetime. Societal. ICD or amiodarone compared with no treatment. Life-years, quality-adjusted life-years (QALYs), costs, number needed to treat, and incremental cost-effectiveness. Compared with no treatment, ICD use led to the greatest QALYs and the highest expenditures. Amiodarone use resulted in intermediate QALYs and costs. To obtain acceptable cost-effectiveness thresholds (

    View details for Web of Science ID 000172267500003

    View details for PubMedID 11712877

  • The prognostic value of troponin in patients with non-ST elevation acute coronary syndromes: A meta-analysis JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY Heidenreich, P. A., Alloggiamento, T., Melsop, K., McDonald, K. M., Go, A. S., Hlatky, M. A. 2001; 38 (2): 478-485

    Abstract

    This study was designed to compare the prognostic value of an abnormal troponin level derived from studies of patients with non-ST elevation acute coronary syndromes (ACS). Risk stratification for patients with suspected ACS is important for determining need for hospitalization and intensity of treatment. We identified clinical trials and cohort studies of consecutive patients with suspected ACS without ST-elevation from 1966 through 1999. We excluded studies limited to patients with acute myocardial infarction and studies not reporting mortality or troponin results. Seven clinical trials and 19 cohort studies reported data for 5,360 patients with a troponin T test and 6,603 with a troponin I test. Patients with positive troponin (I or T) had significantly higher mortality than those with a negative test (5.2% vs. 1.6%, odds ratio [OR] 3.1). Cohort studies demonstrated a greater difference in mortality between patients with a positive versus negative troponin I (8.4% vs. 0.7%, OR 8.5) than clinical trials (4.8% if positive, 2.1% if negative, OR 2.6, p = 0.01). Prognostic value of a positive troponin T was also slightly greater for cohort studies (11.6% mortality if positive, 1.7% if negative, OR 5.1) than for clinical trials (3.8% if positive, 1.3% if negative, OR 3.0, p = 0.2). In patients with non-ST elevation ACS, the short-term odds of death are increased three- to eightfold for patients with an abnormal troponin test. Data from clinical trials suggest a lower prognostic value for troponin than do data from cohort studies.

    View details for Web of Science ID 000170205800026

    View details for PubMedID 11499741

  • Technological change around the world: Evidence from heart attack care HEALTH AFFAIRS Anonymous 2001; 20 (3): 25-42

    Abstract

    Although technological change is a hallmark of health care worldwide, relatively little evidence exists on whether changes in health care differ across the very different health care systems of developed countries. We present new comparative evidence on heart attack care in seventeen countries showing that technological change--changes in medical treatments that affect the quality and cost of care--is universal but has differed greatly around the world. Differences in treatment rates are greatest for costly medical technologies, where strict financing limits and other policies to restrict adoption of intensive technologies have been associated with divergences in medical practices over time. Countries appear to differ systematically in the time at which intensive cardiac procedures began to be widely used and in the rate of growth of the procedures. The differences appear to be related to economic and regulatory incentives of the health care systems and may have important economic and health consequences.

    View details for Web of Science ID 000168576800005

    View details for PubMedID 11585174

  • Development and validation of the Ontario acute myocardial infarction mortality prediction rules JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY Tu, J. V., Austin, P. C., Walld, R., Roos, L., Agras, J., McDonald, K. M. 2001; 37 (4): 992-997

    Abstract

    To develop and validate simple statistical models that can be used with hospital discharge administrative databases to predict 30-day and one-year mortality after an acute myocardial infarction (AMI). There is increasing interest in developing AMI "report cards" using population-based hospital discharge databases. However, there is a lack of simple statistical models that can be used to adjust for regional and interinstitutional differences in patient case-mix. We used linked administrative databases on 52,616 patients having an AMI in Ontario, Canada, between 1994 and 1997 to develop logistic regression statistical models to predict 30-day and one-year mortality after an AMI. These models were subsequently validated in two external cohorts of AMI patients derived from administrative datasets from Manitoba, Canada, and California, U.S. The 11-variable Ontario AMI mortality prediction rules accurately predicted mortality with an area under the receiver operating characteristic (ROC) curve of 0.78 for 30-day mortality and 0.79 for one-year mortality in the Ontario dataset from which they were derived. In an independent validation dataset of 4,836 AMI patients from Manitoba, the ROC areas were 0.77 and 0.78, respectively. In a second validation dataset of 112,234 AMI patients from California, the ROC areas were 0.77 and 0.78, respectively. The Ontario AMI mortality prediction rules accurately predict 30-day and one-year mortality after an AMI in linked hospital discharge databases of AMI patients from Ontario, Manitoba, and California. These models may also be useful to outcomes and quality measurement researchers in other jurisdictions. (A derive-and-validate sketch with an ROC check follows this entry.)

    View details for Web of Science ID 000167515700003

    View details for PubMedID 11263626
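
    The derive-then-validate pattern described above, reduced to a skeleton: fit a logistic mortality model on one cohort and measure discrimination (area under the ROC curve) on another. The simulated data and the three predictors below are purely illustrative stand-ins for the 11 administrative variables.

    ```python
    # A compact sketch of deriving a logistic mortality model on one cohort and checking
    # its ROC area on a separate validation cohort; all data are simulated.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    def simulate(n):
        X = pd.DataFrame({
            "age": rng.normal(70, 10, n),
            "shock": rng.integers(0, 2, n),
            "heart_failure": rng.integers(0, 2, n),
        })
        logit = 0.06 * (X["age"] - 70) + 1.5 * X["shock"] + 0.9 * X["heart_failure"] - 2.2
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
        return X, y

    X_derive, y_derive = simulate(5000)   # stand-in for the derivation cohort
    X_valid, y_valid = simulate(2000)     # stand-in for an external validation cohort

    model = LogisticRegression(max_iter=1000).fit(X_derive, y_derive)
    auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
    print(f"validation ROC area = {auc:.2f}")
    ```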

  • Cost reduction and implantable cardioverter defibrillator implantation JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY McDonald, K. M., Hlatky, M. A. 2001; 12 (2): 167-168

    View details for Web of Science ID 000167027200009

    View details for PubMedID 11232614

  • Cost-effectiveness of radiofrequency ablation for supraventricular tachycardia ANNALS OF INTERNAL MEDICINE Cheng, C. H., Sanders, G. D., Hlatky, M. A., Heidenreich, P., McDonald, K. M., Lee, B. K., Larson, M. S., Owens, D. K. 2000; 133 (11): 864-876

    Abstract

    Radiofrequency ablation is an established but expensive treatment option for many forms of supraventricular tachycardia. Most cases of supraventricular tachycardia are not life-threatening; the goal of therapy is therefore to improve the patient's quality of life. To compare the cost-effectiveness of radiofrequency ablation with that of medical management of supraventricular tachycardia. Markov model. Costs were estimated from a major academic hospital and the literature, and treatment efficacy was estimated from reports from clinical studies at major medical centers. Probabilities of clinical outcomes were estimated from the literature. To account for the effect of radiofrequency ablation on quality of life, assessments by patients who had undergone the procedure were used. Cohort of symptomatic patients who experienced 4.6 unscheduled visits per year to an emergency department or a physician's office while receiving long-term drug therapy for supraventricular tachycardia. Patient lifetime. Societal. Initial radiofrequency ablation, long-term antiarrhythmic drug therapy, and treatment of acute episodes of arrhythmia with antiarrhythmic drugs. Costs, quality-adjusted life-years, life-years, and marginal cost-effectiveness ratios. Among patients who have monthly episodes of supraventricular tachycardia, radiofrequency ablation was the most effective and least expensive therapy and therefore dominated the drug therapy options. Radiofrequency ablation improved quality-adjusted life expectancy by 3.10 quality-adjusted life-years and reduced lifetime medical expenditures by $27,900 compared with long-term drug therapy. Long-term drug therapy was more effective and had lower costs than episodic drug therapy. The findings were highly robust over substantial variations in assumptions about the efficacy and complication rate of radiofrequency ablation, including analyses in which the complication rate was tripled and efficacy was decreased substantially. Radiofrequency ablation substantially improves quality of life and reduces costs when it is used to treat highly symptomatic patients. Although the benefit of radiofrequency ablation has not been studied in less symptomatic patients, a small improvement in quality of life is sufficient to give preference to radiofrequency ablation over drug therapy.

    View details for Web of Science ID 000165585800005

    View details for PubMedID 11103056

  • Prediction of risk for patients with unstable angina. Evidence report/technology assessment (Summary) Heidenreich, P. A., Go, A., Melsop, K. A., Alloggiamento, T., McDonald, K. M., Hagan, V., Hastie, T., Hlatky, M. A. 2000: 1-3

    View details for PubMedID 11013605

  • Clustering and the design of preference-assessment surveys in healthcare HEALTH SERVICES RESEARCH Lin, A., Lenert, L. A., Hlatky, M. A., McDonald, K. M., Olshen, R. A., Hornberger, J. 1999; 34 (5): 1033-1045

    Abstract

    To show cluster analysis as a potentially useful tool in defining common outcomes empirically and in facilitating the assessment of preferences for health states. A survey of 224 patients with ventricular arrhythmias treated at Kaiser Permanente of Northern California. Physical functioning was measured using the Duke Activity Status Index (DASI), and mental status and vitality using the Medical Outcomes Study Short Form-36 items (SF-36). A "k-means" clustering algorithm was used to identify prototypical health states, in which patients in the same cluster shared similar responses to items in the survey. The clustering algorithm yielded four prototypical health states. Cluster 1 (21 percent of patients) was characterized by high scores on physical functioning, vitality, and mental health. Cluster 2 (33 percent of patients) had low physical function but high scores on vitality and mental health. Cluster 3 (29 percent of patients) had low physical function and low vitality but preserved mental health. Cluster 4 (17 percent of patients) had low scores on all scales. These clusters served as the basis of written descriptions of the health states. Employing a clustering algorithm to analyze health status survey data enables researchers to gain a data-driven, concise summary of the experiences of patients. (A small k-means sketch in this spirit follows this entry.)

    View details for Web of Science ID 000084014800006

    View details for PubMedID 10591271
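
    A small sketch of the clustering step described above: standardize the functional-status and well-being scales, run k-means with four clusters (mirroring the number of prototypical states reported), and inspect the cluster means. The 224 simulated scores below are not the survey data.

    ```python
    # k-means applied to simulated DASI and SF-36 scores to extract prototypical health
    # states; purely illustrative, not the study's dataset.
    import numpy as np
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    scores = pd.DataFrame({
        "dasi": rng.uniform(0, 58, 224),            # physical functioning
        "sf36_vitality": rng.uniform(0, 100, 224),
        "sf36_mental": rng.uniform(0, 100, 224),
    })

    X = StandardScaler().fit_transform(scores)      # comparable contribution to distances
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

    scores["cluster"] = km.labels_
    print(scores.groupby("cluster").mean().round(1))  # cluster means ~ prototypical states
    ```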

  • An evaluation of beta-blockers, calcium antagonists, nitrates, and alternative therapies for stable angina. Evidence report/technology assessment (Summary) Heidenreich, P. A., McDonald, K. M., Hastie, T., Fadel, B., Hagan, V., Lee, B. K., Hlatky, M. A. 1999: 1-2

    View details for PubMedID 11925969

  • Quality of life before and after radiofrequency catheter ablation in patients with drug refractory atrioventricular nodal reentrant tachycardia AMERICAN JOURNAL OF CARDIOLOGY Larson, M. S., McDonald, K., Young, C., Sung, R., Hlatky, M. A. 1999; 84 (4): 471-?

    Abstract

    In a retrospective survey of 161 highly symptomatic patients, we found significant improvements in symptoms, patient utility, and use of medical care services after radiofrequency ablation for atrioventricular nodal reentrant tachycardia.

    View details for Web of Science ID 000081987400021

    View details for PubMedID 10468092

  • Meta-analysis of trials comparing beta-blockers, calcium antagonists, and nitrates for stable angina JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION Heidenreich, P. A., McDonald, K. M., Hastie, T., Fadel, B., Hagan, V., Lee, B. K., Hlatky, M. A. 1999; 281 (20): 1927-1936

    Abstract

    Which drug is most effective as a first-line treatment for stable angina is not known. To compare the relative efficacy and tolerability of treatment with beta-blockers, calcium antagonists, and long-acting nitrates for patients who have stable angina. We identified English-language studies published between 1966 and 1997 by searching the MEDLINE and EMBASE databases and reviewing the bibliographies of identified articles to locate additional relevant studies. Randomized or crossover studies comparing antianginal drugs from 2 or 3 different classes (beta-blockers, calcium antagonists, and long-acting nitrates) lasting at least 1 week were reviewed. Studies were selected if they reported at least 1 of the following outcomes: cardiac death, myocardial infarction, study withdrawal due to adverse events, angina frequency, nitroglycerin use, or exercise duration. Ninety (63%) of 143 identified studies met the inclusion criteria. Two independent reviewers extracted data from selected articles, settling any differences by consensus. Outcome data were extracted a third time by 1 of the investigators. We combined results using odds ratios (ORs) for discrete data and mean differences for continuous data. Studies of calcium antagonists were grouped by duration and type of drug (nifedipine vs nonnifedipine). Rates of cardiac death and myocardial infarction were not significantly different for treatment with beta-blockers vs calcium antagonists (OR, 0.97; 95% confidence interval [CI], 0.67-1.38; P = .79). There were 0.31 (95% CI, 0.00-0.62; P = .05) fewer episodes of angina per week with beta-blockers than with calcium antagonists. Beta-blockers were discontinued because of adverse events less often than were calcium antagonists (OR, 0.72; 95% CI, 0.60-0.86; P<.001). The differences between beta-blockers and calcium antagonists were most striking for nifedipine (OR for adverse events with beta-blockers vs nifedipine, 0.60; 95% CI, 0.47-0.77). Too few trials compared nitrates with calcium antagonists or beta-blockers to draw firm conclusions about relative efficacy. Beta-blockers provide similar clinical outcomes and are associated with fewer adverse events than calcium antagonists in randomized trials of patients who have stable angina.

    View details for Web of Science ID 000080427300033

    View details for PubMedID 10349897

  • A global analysis of technological change in health care: The case of heart attacks HEALTH AFFAIRS McClellan, M., Kessler, D. 1999; 18 (3): 250-255

    View details for Web of Science ID 000080078600028

    View details for PubMedID 10388222

  • Estimating the proportion of post-myocardial infarction patients who may benefit from prophylactic implantable defibrillator placement from analysis of the CAST Registry AMERICAN JOURNAL OF CARDIOLOGY Every, N. R., Hlatky, M. A., McDonald, K. M., Weaver, W. D., Hallstrom, A. P. 1998; 82 (5): 683-?

    Abstract

    We defined the proportion of post-myocardial infarction patients who would have been eligible for the Multicenter Automatic Defibrillator Implantation Trial (MADIT) from a population of 94,797 patients with myocardial infarction entered into the Cardiac Arrhythmia Suppression Trial Registry. From this large population, only between 0.3% to 1.7% would have met strict eligibility criteria for MADIT.

    View details for Web of Science ID 000075616100028

    View details for PubMedID 9732904

  • Design of a modular, extensible decision support system for arrhythmia therapy JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION Cheng, C. H., Sanders, G. D., McDonald, K. M., Heidenreich, P. A., Hlatky, M. A., Owens, D. K. 1998: 693-697

    Abstract

    We developed a decision-support system for evaluation of treatment alternatives for supraventricular and ventricular arrhythmias. The system uses independent decision models that evaluate the costs and benefits of treatment for recurrent atrioventricular-node reentrant tachycardia (AVNRT), and of therapies to prevent sudden cardiac death (SCD) in patients at risk for life-threatening ventricular arrhythmias. Each of the decision models is accessible through a web-based interface that enables remote users to browse the model's underlying evidence and to perform analyses of effectiveness, cost effectiveness, and sensitivity to input variables. Because the web-based interface is independent of the models, we can extend the functionality of the system by adding decision models. This system illustrates that the use of a library of web-accessible decision models provides decision support economically to widely dispersed users.

    View details for Web of Science ID 000171768600135

    View details for PubMedID 9929308

  • Use and accuracy of state death certificates for classification of sudden cardiac deaths in high-risk populations AMERICAN HEART JOURNAL Every, N. R., Parsons, L., Hlatky, M. A., McDonald, K. M., Thom, D., Hallstrom, A. P., Martin, J. S., Weaver, W. D. 1997; 134 (6): 1129-1132

    Abstract

    In a large cohort of patients with known or suspected coronary disease, we evaluated the characteristics of 407 patients who died after hospital discharge and tested whether the state death certificate can be used to classify deaths as sudden cardiac versus nonsudden. Compared with a paramedic classification system based on heart rhythm, the death certificate-based classification resulted in a sensitivity that ranged from 78% to 85% and a specificity that ranged from 25% to 58%. We conclude that the death certificate can be used to identify cases of sudden cardiac death in patients at high risk; however, there is a substantial rate of false-positive sudden death classification.

    View details for Web of Science ID 000071254500020

    View details for PubMedID 9424075
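
    The sensitivity and specificity reported above come from cross-tabulating the death-certificate classification against the paramedic reference standard. The short sketch below shows that arithmetic on an invented 2x2 table; the counts are hypothetical, chosen only to fall near the reported ranges.

```python
# Hypothetical counts comparing death-certificate classification (test)
# against the paramedic rhythm-based classification (reference standard).
true_pos = 82    # certificate sudden, reference sudden
false_neg = 18   # certificate non-sudden, reference sudden
true_neg = 120   # certificate non-sudden, reference non-sudden
false_pos = 187  # certificate sudden, reference non-sudden

sensitivity = true_pos / (true_pos + false_neg)   # ability to catch sudden deaths
specificity = true_neg / (true_neg + false_pos)   # ability to rule out non-sudden deaths
ppv = true_pos / (true_pos + false_pos)           # positive predictive value

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, PPV={ppv:.2f}")
```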

  • Quantitative overview of randomized trials of amiodarone to prevent sudden cardiac death CIRCULATION Sim, I., McDonald, K. M., Lavori, P. W., Norbutas, C. M., Hlatky, M. A. 1997; 96 (9): 2823-2829

    Abstract

    Some randomized clinical trials of amiodarone therapy to prevent sudden cardiac death have had positive results and others have had negative results, but all were relatively small. This meta-analysis aimed to pool all trials to assess the effect of amiodarone on mortality and the impact of differences in patient population and study design on trial outcomes. Fifteen randomized trials were identified, and outcome measures were combined by use of a random effects model. The effect of patient population and study design on total mortality was assessed by use of a hierarchical Bayes model. Amiodarone reduced total mortality by 19% (confidence limits, 6% to 31%; P<.01), with somewhat greater reductions in cardiac mortality (23%, P<.001) and sudden death (30%, P<.001). Mortality reductions were similar in trials enrolling patients after myocardial infarction (21%), with left ventricular dysfunction (22%), and after cardiac arrest (25%). There was a trend toward greater risk reduction in trials requiring evidence of ventricular ectopy (25%) than in the remaining trials (10%). The trials using placebo controls had considerably less risk reduction (10%) than trials with active controls (27%) or usual care controls (42%, posterior odds <0.02). Amiodarone reduced total mortality by 10% to 19% in patients at risk of sudden cardiac death. Amiodarone reduced risk similarly in patients after myocardial infarction, with heart failure, or with clinically evident arrhythmia. The apparent inconsistencies among results of randomized trials appear to be due to small sample sizes and the type of control group used, not the type of patient enrolled.

    View details for Web of Science ID A1997YF29500016

    View details for PubMedID 9386144
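
    This meta-analysis combines trials with a random-effects model. The sketch below illustrates the DerSimonian-Laird approach on hypothetical per-trial log relative risks and variances; none of the numbers come from the amiodarone trials themselves.

```python
import math

# Hypothetical per-trial (log relative risk, variance) pairs, illustrative only.
studies = [(-0.25, 0.04), (-0.10, 0.02), (-0.35, 0.06), (0.05, 0.03)]

y = [effect for effect, _ in studies]
v = [variance for _, variance in studies]
w = [1 / vi for vi in v]                      # fixed-effect (inverse-variance) weights

# Cochran's Q and the DerSimonian-Laird estimate of between-trial variance tau^2
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights incorporate the between-trial variance
w_re = [1 / (vi + tau2) for vi in v]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))

print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96*se):.2f}-{math.exp(pooled + 1.96*se):.2f}), "
      f"tau^2 = {tau2:.3f}")
```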

  • Cost-effectiveness of implantable cardioverter defibrillators relative to amiodarone for prevention of sudden cardiac death ANNALS OF INTERNAL MEDICINE Owens, D. K., Sanders, G. D., Harris, R. A., McDonald, K. M., Heidenreich, P. A., Dembitzer, A. D., Hlatky, M. A. 1997; 126 (1): 1-12

    Abstract

    Implantable cardioverter defibrillators (ICDs) are remarkably effective in terminating ventricular arrhythmias, but they are expensive and the extent to which they extend life is unknown. The marginal cost-effectiveness of ICDs relative to amiodarone has not been clearly established. To compare the cost-effectiveness of a third-generation implantable ICD with that of empirical amiodarone treatment for preventing sudden cardiac death in patients at high or intermediate risk. A Markov model was used to evaluate health and economic outcomes of patients who received an ICD, amiodarone, or a sequential regimen that reserved ICD for patients who had an arrhythmia during amiodarone treatment. Life-years gained, quality-adjusted life-years gained, costs, and marginal cost-effectiveness. For the base-case analysis, it was assumed that treatment with an ICD would reduce the total mortality rate by 20% to 40% at 1 year compared with amiodarone and that the ICD generator would be replaced every 4 years. In high-risk patients, if an ICD reduces total mortality by 20%, patients who receive an ICD live for 4.18 quality-adjusted life-years and have a lifetime expenditure of $88,400. Patients receiving amiodarone live for 3.68 quality-adjusted life-years and have a lifetime expenditure of $51,000. Marginal cost-effectiveness of an ICD relative to amiodarone is $74,400 per quality-adjusted life-year saved. If an ICD reduces mortality by 40%, the cost-effectiveness of ICD use is $37,300 per quality-adjusted life-year saved. Both choice of therapy (an ICD or amiodarone) and the cost-effectiveness ratio are sensitive to assumptions about quality of life. Use of an ICD will cost more than $50,000 per quality-adjusted life-year gained unless it reduces all-cause mortality by 30% or more relative to amiodarone. Current evidence does not definitively support or exclude a benefit of this magnitude, but ongoing randomized trials have sufficient statistical power to do so.

    View details for Web of Science ID A1997WA16500001

    View details for PubMedID 8992917
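
    The analysis above uses a Markov model to project lifetime costs and quality-adjusted life-years for each strategy and then forms the marginal (incremental) cost-effectiveness ratio. The sketch below is a minimal discounted two-state cohort model with invented transition probabilities, costs, and utilities; it is intended only to show the mechanics, not the published model's structure or inputs.

```python
# Minimal annual-cycle Markov cohort model with two states: alive -> dead.
# All probabilities, costs, and utilities below are invented for illustration;
# they are not the published model's inputs.

def run_strategy(p_death, annual_cost, upfront_cost, utility,
                 horizon=20, discount=0.03):
    alive = 1.0                                # fraction of the cohort still alive
    qalys, costs = 0.0, float(upfront_cost)
    for year in range(horizon):
        df = 1.0 / (1.0 + discount) ** year    # discount factor for this cycle
        qalys += alive * utility * df
        costs += alive * annual_cost * df
        alive *= (1.0 - p_death)               # cohort members who survive the year
    return qalys, costs

# Hypothetical inputs: the ICD lowers annual mortality but adds device costs.
q_icd, c_icd = run_strategy(p_death=0.08, annual_cost=5_000,
                            upfront_cost=30_000, utility=0.85)
q_amio, c_amio = run_strategy(p_death=0.10, annual_cost=3_000,
                              upfront_cost=1_000, utility=0.82)

icer = (c_icd - c_amio) / (q_icd - q_amio)     # incremental cost per QALY gained
print(f"ICD: {q_icd:.2f} QALYs, ${c_icd:,.0f}")
print(f"Amiodarone: {q_amio:.2f} QALYs, ${c_amio:,.0f}")
print(f"ICER = ${icer:,.0f} per QALY gained")
```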

  • Presentation and explanation of medical decision models using the World Wide Web. Proceedings : a conference of the American Medical Informatics Association / ... AMIA Annual Fall Symposium. AMIA Fall Symposium Sanders, G. D., Dembitzer, A. D., Heidenreich, P. A., McDonald, K. M., Owens, D. K. 1996: 60-64

    Abstract

    We demonstrated the use of the World Wide Web for the presentation and explanation of a medical decision model. We put on the web a treatment model developed as part of the Cardiac Arrhythmia and Risk of Death Patient Outcomes Research Team (CARD PORT). To demonstrate the advantages of our web-based presentation, we critiqued both the conventional paper-based and the web-based formats of this decision-model presentation with reference to an accepted published guide to understanding clinical decision models. A web-based presentation provides a useful supplement to paper-based publications by allowing authors to present their model in greater detail, to link model inputs to the primary evidence, and to disseminate the model to peer investigators for critique and collaborative modeling.

    View details for PubMedID 8947628

  • A METAANALYSIS OF RANDOMIZED TRIALS COMPARING CORONARY-ARTERY BYPASS-GRAFTING WITH PERCUTANEOUS TRANSLUMINAL CORONARY ANGIOPLASTY IN MULTIVESSEL CORONARY-ARTERY DISEASE AMERICAN JOURNAL OF CARDIOLOGY Sim, I., Gupta, M., McDonald, K., Bourassa, M. G., Hlatky, M. A. 1995; 76 (14): 1025-1029

    Abstract

    We performed a meta-analysis of randomized trials that compared percutaneous transluminal coronary angioplasty (PTCA) with coronary artery bypass graft (CABG) surgery in patients with multivessel coronary artery disease. The outcomes of death, combined death and nonfatal myocardial infarction (MI), repeat revascularization, and freedom from angina were analyzed. The overall risk of death and nonfatal MI was not different over a follow-up of 1 to 3 years (CABG:PTCA odds ratio [OR] 1.03, 95% confidence interval 0.81 to 1.32, p = 0.81). Patients randomized to CABG tended to have a higher risk of death or MI in the early, periprocedural period (OR 1.33, p = 0.091), but a lower risk in subsequent follow-up (OR 0.74, p = 0.093). CABG patients were much less likely to undergo another revascularization procedure (p < 0.00001), and were more likely to be angina free (OR 1.57, p < 0.00001). Thus, CABG and PTCA patients have similar overall risks of death and nonfatal MI at 1 to 3 years of follow-up, but relative risk differences in mortality of up to 25% cannot be excluded. CABG patients have significantly less angina and less repeat revascularization than PTCA patients.

    View details for Web of Science ID A1995TE77800008

    View details for PubMedID 7484855

Conference Proceedings


  • Comparison of Thromboembolic Event Rates in Randomized Controlled Trials and Observational Studies of Recombinant Factor VIIa for Off-Label Indications. Yank, V., Logan, A. C., Tuohy, C. V., Bravata, D. M., Staudenmayer, K., Eisenhut, R., Sundaram, V., McMahon, D., McDonald, K. M., Owens, D., Stafford, R. S. AMER SOC HEMATOLOGY. 2009: 571-572
  • Comparative effectiveness of percutaneous coronary interventions and coronary artery bypass grafting for coronary artery disease Bravata, D. M., McDonald, K., Gienger, A., Sundaram, V., Owens, D. K., Hlatky, M. A. SPRINGER. 2007: 47-47
  • Refinement and validation of the AHRQ patient safety indicators (PSI). Romano, P. S., Geppert, J., Davies, S., McDonald, K., Miller, M., Elixhauser, A. SPRINGER. 2003: 294-295
  • Evidence-based practice for mere mortals - The role of informatics and health services research Sim, I., Sanders, G. D., McDonald, K. M. SPRINGER. 2002: 302-308

    Abstract

    The poor translation of evidence into practice is a well-known problem. Hopes are high that information technology can help make evidence-based practice feasible for mere mortal physicians. In this paper, we draw upon the methods and perspectives of clinical practice, medical informatics, and health services research to analyze the gap between evidence and action, and to argue that computing systems for bridging this gap should incorporate both informatics and health services research expertise. We discuss 2 illustrative systems--trial banks and a web-based system to develop and disseminate evidence-based guidelines (alchemist)--and conclude with a research and training agenda.

    View details for Web of Science ID 000175116800008

    View details for PubMedID 11972727

  • Surveillance systems for bioterrorism: A systematic review. Bravata, D. M., McDonald, K., Smith, W. M., Rydzak, C., Szeto, H., Buckeridge, D., Haberland, C., Dangiolo, M. B., Graham, J., Owens, D. K. SPRINGER. 2002: 184-185
  • Should survivors of myocardial infarction be screened for risk of sudden death? A cost-effectiveness analysis Heidenreich, P. A., Sanders, G. D., Hlatky, M. A., McDonald, K. M., Owens, D. K. ELSEVIER SCIENCE INC. 2000: 550A-551A
  • Cost effectiveness of radiofrequency ablation for treatment of paroxysmal supraventricular tachycardias. Cheng, C. H., Sanders, G. D., Heidenreich, P. A., McDonald, K. M., Lee, B. K., Larson, M. S., Hlatky, M. A., Owens, D. K. SAGE PUBLICATIONS INC. 1998: 458-458
  • RELATIVE RISKS OF BYPASS-SURGERY AND CORONARY ANGIOPLASTY FOR MULTIVESSEL CORONARY-ARTERY DISEASE - A METAANALYSIS Sim, I., Gupta, M., McDonald, K. M., Bourassa, M. G., Hlatky, M. A. LIPPINCOTT WILLIAMS & WILKINS. 1995: 41-41
