Healthcare Quality Measures: How US Facilities Are Evaluated
A hospital can have gleaming equipment and a long waitlist of patients, and still deliver care that causes harm. Quality measures exist precisely because reputation and resource levels don't reliably predict outcomes. This page explains how US healthcare facilities are formally evaluated — what gets measured, who does the measuring, and why the line between a four-star and a one-star rating can translate into real differences in survival rates.
Definition and scope
Healthcare quality measures are standardized metrics used to assess how well a provider, facility, or health plan delivers care relative to defined clinical and operational benchmarks. The Centers for Medicare & Medicaid Services (CMS) administers the largest national measurement infrastructure, collecting data across facility types, from acute care hospitals to skilled nursing facilities to dialysis centers.
The term "quality" in this context spans six domains articulated by the National Academy of Medicine (formerly the Institute of Medicine) in its landmark Crossing the Quality Chasm report: safety, effectiveness, patient-centeredness, timeliness, efficiency, and equity. The Agency for Healthcare Research and Quality (AHRQ) organizes its national quality reporting around these same dimensions, and the framework still shapes most federal and accreditation-body programs.
Scope matters here. Quality measurement applies not just to hospitals but to primary care practices, ambulatory surgical centers, long-term care facilities, and community health centers. A metric designed to track sepsis bundle compliance in an ICU is structurally different from one tracking childhood immunization rates in a pediatric practice, yet both live under the same federal quality improvement mandate.
How it works
The mechanics of quality measurement run on three interconnected tracks: data collection, public reporting, and payment consequences.
Data collection draws from administrative claims, electronic health record (EHR) submissions, patient surveys, and direct chart abstraction. The Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys, developed by AHRQ, capture patient experience data — whether a nurse explained medications clearly, whether discharge instructions made sense — across more than 2,000 participating hospitals (AHRQ CAHPS program).
Public reporting channels this data into transparency tools. CMS's Care Compare platform consolidates ratings for hospitals, nursing homes, home health agencies, and clinicians in one searchable interface. Hospital star ratings compress dozens of underlying measures — mortality rates, safety metrics, readmission rates, patient experience scores, and timely care data — into a single 1-to-5 star display. As of the 2023 methodology update, CMS weights mortality measures at approximately 22% of a hospital's overall score (CMS Star Ratings methodology).
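The weighting logic behind the star display can be sketched as a weighted average of standardized group scores. The weights below are illustrative, anchored only to the ~22% mortality weight cited above; the actual CMS methodology uses latent variable modeling and peer grouping, not a simple weighted mean.

```python
# Illustrative composite score from weighted measure-group scores.
# Weights are hypothetical except the ~22% mortality weight noted in the text;
# real CMS star ratings use latent variable models, not this weighted average.

GROUP_WEIGHTS = {
    "mortality": 0.22,
    "safety": 0.22,
    "readmission": 0.22,
    "patient_experience": 0.22,
    "timely_effective_care": 0.12,
}

def composite_score(group_scores: dict[str, float]) -> float:
    """Weighted average of standardized group scores (each roughly a z-score).

    Renormalizes over the groups actually reported, since smaller hospitals
    may lack data for some measure groups.
    """
    total_weight = sum(GROUP_WEIGHTS[g] for g in group_scores)
    weighted = sum(GROUP_WEIGHTS[g] * s for g, s in group_scores.items())
    return weighted / total_weight

scores = {"mortality": 0.5, "safety": -0.2, "readmission": 0.1,
          "patient_experience": 0.3, "timely_effective_care": 0.0}
print(round(composite_score(scores), 3))  # 0.154
```

The renormalization step matters in practice: a hospital missing an entire measure group is scored on the groups it does report, rather than being penalized with an implicit zero.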
Payment consequences are where ratings stop being academic. The Hospital Value-Based Purchasing (VBP) program adjusts Medicare payments up or down based on quality performance — bonuses and penalties that can shift a hospital's reimbursement by as much as 2% of total Medicare base payments. The Hospital Readmissions Reduction Program (HRRP) penalizes facilities with excess readmissions for conditions like heart failure and pneumonia, with penalties capped at 3% of Medicare payments (42 U.S.C. § 1395ww).
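The arithmetic of the HRRP cap is straightforward and can be made concrete with a short sketch. The dollar figures and the raw penalty rate below are hypothetical; only the 3% cap comes from the text.

```python
# Back-of-envelope sketch of how an HRRP-style penalty scales, with a
# hypothetical base payment amount. The real program derives a penalty
# from condition-level excess readmission ratios, but the cap works as shown.

HRRP_PENALTY_CAP = 0.03  # penalties capped at 3% of base Medicare payments

def apply_readmission_penalty(base_payments: float, raw_penalty_rate: float) -> float:
    """Return adjusted payments after capping the computed penalty at 3%."""
    penalty_rate = min(raw_penalty_rate, HRRP_PENALTY_CAP)
    return base_payments * (1 - penalty_rate)

# A hospital with $50M in base payments and a computed 4% penalty
# still loses only the capped 3%, leaving roughly $48.5M:
print(apply_readmission_penalty(50_000_000, 0.04))
```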
Common scenarios
Three evaluation contexts illustrate how this system plays out in practice.
- Inpatient acute care hospitals are measured on 30-day mortality rates for conditions including acute myocardial infarction, heart failure, and COPD. They also report on healthcare-associated infection (HAI) rates — central line-associated bloodstream infections, for instance — tracked through the CDC's National Healthcare Safety Network (NHSN). A facility with a standardized infection ratio (SIR) above 1.0 is performing worse than the national baseline.
- Nursing homes receive a Five-Star Quality Rating from CMS that incorporates three sub-ratings: health inspections, staffing levels, and quality measures such as the percentage of residents experiencing pressure ulcers or receiving antipsychotic medications. Staffing data, now pulled from the Payroll-Based Journal (PBJ) system rather than self-reported figures, made ratings more accurate — and, in many cases, sharply lower — after mandatory PBJ reporting took effect in 2018.
- Health plans under the Medicare Advantage program are rated through the Star Ratings system on metrics spanning preventive care and screenings, chronic disease management, and member experience. Plans with 4 or more stars qualify for quality bonus payments, creating a direct financial incentive tied to population-level care metrics rather than individual encounter data.
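The SIR threshold from the first scenario is just a ratio of observed to predicted infections, where the predicted count comes from NHSN's national baseline risk models. A minimal sketch, with hypothetical counts:

```python
# The standardized infection ratio (SIR): observed infections divided by the
# number predicted from the NHSN national baseline. Values above 1.0 mean
# more infections than the baseline would predict for this patient mix.

def standardized_infection_ratio(observed: int, predicted: float) -> float:
    """SIR = observed infections / baseline-predicted infections."""
    if predicted <= 0:
        raise ValueError("predicted infections must be positive")
    return observed / predicted

# Hypothetical unit: 12 central line infections observed where the
# baseline model predicted 8 -> SIR of 1.5, worse than baseline.
print(standardized_infection_ratio(12, 8.0))  # 1.5
```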
Decision boundaries
Not every metric belongs on every report card. CMS and accrediting bodies like The Joint Commission use formal inclusion criteria: a measure must be evidence-based, meaningful to patients, actionable by providers, and feasible to collect without creating excessive administrative burden. The National Quality Forum (NQF) endorsement process is the primary vetting mechanism, requiring measures to demonstrate importance, scientific acceptability, feasibility, and usability before entering federal programs.
Process measures and outcome measures occupy different analytical territory — and this distinction matters when interpreting ratings. A process measure tracks whether a recommended action occurred (e.g., was aspirin administered within 24 hours of an AMI admission?). An outcome measure tracks what actually happened to the patient (did they survive 30 days?). Process measures are more controllable and easier to attribute to facility behavior; outcome measures are the ones patients ultimately care about but are harder to risk-adjust fairly.
Risk adjustment is the quiet technical battleground of quality measurement. Facilities serving higher proportions of low-income, elderly, or medically complex patients often show worse raw outcome numbers — not necessarily because care is worse, but because the starting population is sicker. CMS applies hierarchical regression models to adjust for patient demographics and comorbidities, though researchers and hospital associations continue to debate whether existing models fully account for social determinants of health. The intersection of healthcare access and equity with measurement methodology is an active policy question, and the stakes for safety-net hospitals are significant.
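The core idea behind risk adjustment, indirect standardization, can be shown in a few lines: compare a facility's observed deaths to the deaths "expected" given each patient's predicted risk, then scale by the national rate. This is a simplified sketch with hypothetical per-patient risks; CMS's actual models are hierarchical logistic regressions, not a simple observed-to-expected ratio.

```python
# Simplified indirect standardization: a facility treating sicker patients
# has a higher expected death count, so the same observed count yields a
# lower risk-standardized rate. Per-patient risks here are hypothetical.

def risk_standardized_rate(patient_risks: list[float], observed_deaths: int,
                           national_rate: float) -> float:
    """(observed / expected) * national rate, where expected deaths is the
    sum of each patient's model-predicted probability of death."""
    expected_deaths = sum(patient_risks)
    if expected_deaths <= 0:
        raise ValueError("expected deaths must be positive")
    return (observed_deaths / expected_deaths) * national_rate

# A hospital with a sicker-than-average case mix: 10 deaths observed among
# 100 patients, but 12.5 expected from case mix, against a 10% national rate.
# Its raw rate is 10%, yet its risk-standardized rate is roughly 8%.
rate = risk_standardized_rate([0.125] * 100, 10, 0.10)
```

This is why safety-net hospitals argue over model inputs: any patient risk factor the model omits (housing instability, for example) deflates the expected count and inflates the facility's apparent rate.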