Rush University Medical Center in Chicago strives to be a national leader in healthcare quality. We also believe in the vision of the CMS' hospital star-rating program: a transparent, consumer-focused measure of quality and safety. We feel, however, that the current construction of the star ratings contains unintended flaws that undermine that vision.
In collaboration with other academic centers, we have identified several issues with the program that collectively lead to bias against large hospitals, urban hospitals and hospitals that provide complex care.
First, large academic healthcare centers that specialize in providing lifesaving care to the sickest patients may be penalized due to statistical issues with the star-rating program. Because these hospitals may see a higher number of unplanned, yet medically necessary, 30-day readmissions among high-acuity patients, they can falsely appear to have a readmission problem when in fact they are providing lifesaving care for the critically ill.
A second issue is that the star-rating measures adjust rates based on hospital size: smaller hospitals are given a statistical "handicap" to prevent random variability from adversely affecting their scores. The flip side, unfortunately, is that large hospitals are pushed to the ranking extremes purely as a result of this statistical adjustment. This creates an unequal distribution of star ratings based on size, not quality.
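The mechanism behind this size effect can be illustrated with a toy shrinkage calculation (a hypothetical sketch, not CMS's actual methodology; the `prior_weight` value and rates are invented for illustration). A low-volume hospital's observed rate is pulled heavily toward the national mean, while a high-volume hospital with the exact same observed rate is left near the extreme:

```python
# Hypothetical sketch of volume-based shrinkage (not CMS's actual model):
# observed rates are blended with the overall mean, weighted by volume.

def shrunken_rate(events, volume, overall_rate, prior_weight=100):
    """Weighted average of a hospital's observed rate and the overall
    mean. prior_weight plays the role of the statistical "handicap"
    given to low-volume hospitals (the value here is made up)."""
    observed = events / volume
    weight = volume / (volume + prior_weight)  # large volume -> weight near 1
    return weight * observed + (1 - weight) * overall_rate

overall = 0.15  # hypothetical national readmission rate

# Small hospital, 30% observed rate: shrunk most of the way to the mean.
small = shrunken_rate(events=6, volume=20, overall_rate=overall)
print(round(small, 3))  # 0.175

# Large hospital, same 30% observed rate: barely shrunk at all,
# so it lands at a ranking extreme.
large = shrunken_rate(events=600, volume=2000, overall_rate=overall)
print(round(large, 3))  # 0.293
```

Two hospitals with identical observed performance thus receive very different adjusted rates, driven only by volume.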
Third, inner-city hospitals caring for patients facing socio-economic challenges are ranked lower because they must overcome barriers not experienced by hospitals in affluent neighborhoods. Despite national consensus that social determinants of health should be accounted for when rating quality, the star-rating program does not include them in its adjustment.
Finally, a hospital's star rating is calculated using opaque, or "black box," statistical methods, which create inconsistencies between the star ratings and other CMS quality programs and make the results difficult for consumers and providers to interpret.
These issues could be mitigated with four changes to the current star-rating program:
- Aligning adjustment for socio-economic status in the stars program to that of the Hospital Readmissions Reduction Program would be a logical and consistent method for measuring quality.
- Capping the impact of volume adjustment and incorporating confidence intervals would address issues with volume affecting rates.
- Removing the impact of outlier readmissions on the readmission measure would eliminate the undue influence of individual patients on rates and, we speculate, reduce the risk of adverse outcomes due to unintended consequences of policy.
- Abandoning the latent variable model in the composite rating for the star rating would address its lack of consistency.
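The third proposal, removing the impact of outlier readmissions, can be sketched as a simple cap on per-patient counts (a hypothetical illustration of the idea, not a published CMS specification; the cap value is invented). Without a cap, one frequently readmitted patient can dominate a hospital's rate:

```python
# Hypothetical sketch of capping outlier readmissions: truncate each
# patient's 30-day readmission count so a single patient cannot
# dominate the hospital-level rate.

def readmission_rate(readmissions_per_patient, cap=None):
    """Mean readmissions per patient, optionally capping each
    patient's count at `cap` (cap value here is illustrative)."""
    counts = readmissions_per_patient
    if cap is not None:
        counts = [min(c, cap) for c in counts]
    return sum(counts) / len(readmissions_per_patient)

# Ten patients; one outlier patient was readmitted eight times.
patients = [8, 0, 0, 0, 0, 0, 0, 0, 0, 0]

print(readmission_rate(patients))         # 0.8 -- outlier dominates
print(readmission_rate(patients, cap=1))  # 0.1 -- one patient, one event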
With the release of the latest overall hospital star ratings, CMS has issued a request for public comment on potential changes to the measures. This gives stakeholders, both providers and patients, an opportunity to offer feedback and recommendations on how quality healthcare should be measured. CMS should be commended for its openness to discussing changes to the current version of the star ratings.
We may even have an opportunity for more personalized measures of quality. Tremendous progress in the use of electronic data has enabled high-quality information to be captured by our electronic health record systems. Patient access to data has similarly been transformed through the use of standards, like FHIR, and inclusion of such data in mobile devices like the smartphone. CMS Administrator Seema Verma has laudably made interoperability and prevention of information blocking key priorities for our healthcare system. These foundational elements lead us to think that the time has arrived for 21st century methods to measure quality care.
Patients deserve high-quality measures that are not one-size-fits-all. The next evolution of measurement should be precise and personalized, guiding patients to the best care possible. The science behind ranking one hospital or provider against another is complicated. We are hopeful that those doing these rankings will listen when the medical community provides information and identifies misleading findings.
To learn more about this research, visit
Dr. Bala Hota is chief analytics officer at Rush University Medical Center in Chicago. Dr. Omar Lateef is chief medical officer at Rush. Thomas Webb is manager of quality improvement at Rush.