I’m hopeful that these changes will reward the provider groups focused on the right things. Risk scores are still projected to increase around 3%, but the winners and losers will be split by chart review dependency, not quality.
#MedicareAdvantage #HealthcareFinance #ValueBasedCare #RiskAdjustment
Posts by Mason Roberts
These are major shifts, but on the whole there’s only one of these that I don’t like. I’ll let you guess which one that is.
Star Ratings
1) Removing 12 (mostly administrative) measures from Star Ratings.
2) Eliminating the Health Equity Index reward before it ever launched.
3) Reinstating the historical reward factor, which will likely favor incumbents, not improvers.
Risk adjustment
1) Updating the training data from 2018–19 to 2023–24, which will shift the coefficients.
2) Removing “unlinked” chart reviews from the calculation.
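A toy sketch of what item 2 means for a risk score. Everything here is hypothetical — the HCC codes, coefficients, and data shape are invented for illustration; the real CMS-HCC model has far more coefficients plus interaction and hierarchy logic:

```python
# Hypothetical HCC coefficients (NOT real CMS-HCC values).
HCC_COEFFICIENTS = {
    "HCC_diabetes_w_complications": 0.302,
    "HCC_chf": 0.331,
    "HCC_copd": 0.335,
}

def raf_score(diagnoses, count_unlinked=True):
    """Sum coefficients for a member's HCCs.

    diagnoses: list of (hcc, source) pairs. Under the proposal, HCCs
    supported only by unlinked chart reviews would no longer count.
    """
    kept = {hcc for hcc, source in diagnoses
            if count_unlinked or source != "unlinked_chart_review"}
    return sum(HCC_COEFFICIENTS[h] for h in kept)

member = [
    ("HCC_diabetes_w_complications", "encounter"),
    ("HCC_chf", "unlinked_chart_review"),  # no linked encounter behind it
    ("HCC_copd", "encounter"),
]

before = raf_score(member)                       # ≈ 0.968
after = raf_score(member, count_unlinked=False)  # ≈ 0.637
```

The gap between `before` and `after` is exactly the "chart review dependency" that will separate winners from losers: groups whose coded burden is backed by real encounters lose little; groups leaning on unlinked retrospective reviews lose a lot.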
Dual-arrow diagram titled "Navigating MA Changes for Contract Success." Left arrow (blue, risk adjustment): Training Data Update — shift in coefficients; Unlinked Chart Reviews — removed from calculation. Right arrow (gray, Star Ratings): Measure Removal — administrative measures dropped; Health Equity Index — reward eliminated; Historical Reward Factor — favors incumbents.
Your payer partners' financial pressure is your contract pressure.
CMS is proposing some major shake-ups in MA for risk adjustment and Stars. Here’s a quick summary for both. For providers in VBC deals, this is going to directly impact the terms of your contracts.
The rate book transition (timing depends on your region) is designed to lock in that wedge as a permanent feature of regional benchmarks rather than letting it collapse as the ACPT catches up to realized savings.
#ValueBasedCare #ACO #CMMI #HealthPolicy #REACH #LEAD
RFA: buff.ly/BwXrHfw
CMS is explicitly targeting 3% savings between average benchmarks and average actual claims by year 5, mainly engineered through two levers:
1) The ACPT (trending benchmarks above realized spending when ACOs slow growth)
2) The benchmark add-ons (1.5% admin add-on for higher-spenders; regional efficiency adjustment for lower-spenders)
Higher-spending ACOs get room to improve, lower-spending ACOs get a stable floor, and both have an incentive to stay.
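The wedge arithmetic is simple enough to sketch. All figures below are hypothetical — the actual ACPT trend and add-on formulas are more involved — but the shape of the incentive is this:

```python
# Hypothetical PMPM figures to illustrate the "wedge" (not real rates).
observed_claims = 1000.0                  # ACO's actual spending
ffs_projection = observed_claims * 1.05   # what an unrestrained FFS trend projects
benchmark = observed_claims * 1.03        # wedge target: ~3% above actual claims

# The benchmark sits above what the ACO actually spends but below the
# FFS projection, so savings persist instead of being rebased away.
assert observed_claims < benchmark < ffs_projection
shared_savings = benchmark - observed_claims  # ~30 PMPM in this toy example
```

Under REACH-style rebasing, next year's benchmark would be pulled down toward `observed_claims`, shrinking `shared_savings` toward zero; the wedge design holds the corridor open.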
How do they accomplish this?
Well, it’s complicated, but here’s the TL;DR:
LEAD takes a “wedge” approach instead of REACH’s rebasing, which punished those who did exceptionally well (do better, get a tougher target). LEAD benchmarks are set above observed expenditures but below projected FFS growth, creating a durable shared-savings corridor.
Let’s get a little deeper if you’ve got time.
Benchmarking.
Duals - your highest-cost patients. LEAD includes a planning phase to develop Medicare-Medicaid partnership arrangements with select states.
Those were my top three. There are other improvements that show CMMI is actively learning and listening, finding ways to improve its programs.
Podium-style infographic titled "LEAD Program Innovations" showing three medal placements. First place: Benchmarking — durable shared savings corridor with stable floor and incentive to stay. Second place: Specialists — built-in infrastructure for downstream episode-based risk between ACOs and specialists. Third place: Duals — planning phase for Medicare-Medicaid partnership arrangements with select states.
LEAD vs. REACH: What's actually new. No pussyfooting around - let’s get right to it.
Benchmarking - This one is pretty huge. No more rebasing.
Specialists - LEAD has built-in infrastructure for downstream episode-based risk between ACOs and Preferred Provider specialists (CARA).
#ValueBasedCare #QualityMeasurement #PatientExperience #AccountableCare #HealthPolicy
buff.ly/jnn7T6b
Survey-based trajectory measures could capture something the data misses: a fragmented diagnostic journey. How else do you measure what didn’t happen?
I’m interested in hearing your perspective. Are you using quality measures in your VBC deals? Which ones do you rely on?
4) Adverse selection incentives - if groups are judged on outcomes, the rational response is to avoid attributing high-risk patients, not to serve them better
I like process measures. They’re easy to understand and create from data. But...
3) Risk adjustment inadequacy - outcomes measures without near-perfect risk adjustment systematically punish providers serving sicker, poorer, more complex populations; this is the same equity critique leveled at Stars
2) Attribution ambiguity - for complex chronic conditions, which provider or group gets credit or blame for a 5-year outcome? MS progression is influenced by genetics, socioeconomics, prior treatment, and care quality simultaneously
I’m tempted to be a purist - use data, use outcomes - but there are benefits to some surveys that can’t be ignored:
1) Attribution lag - many meaningful outcomes (disease progression, mortality, quality of life over a decade) take years to manifest; payment cycles don't wait
Well, first, we can move toward quality metrics based on outcomes measures. But, as Renu Xu argues, we can also move toward trajectory measures. For example, diagnostic velocity would assess the efficiency of a patient’s path from the first evaluation of symptoms to a correct diagnosis.
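If you wanted to operationalize diagnostic velocity, one minimal version — dates and the scoring choice are my own illustration, not a defined CMS measure — is just elapsed time from first symptomatic evaluation to correct diagnosis:

```python
from datetime import date

def diagnostic_velocity_days(first_eval: date, correct_dx: date) -> int:
    """Days from first evaluation of symptoms to the correct diagnosis."""
    return (correct_dx - first_eval).days

# A fragmented diagnostic journey vs. a direct one (dates are made up).
fragmented = diagnostic_velocity_days(date(2023, 1, 10), date(2024, 7, 2))  # 539 days
direct = diagnostic_velocity_days(date(2023, 1, 10), date(2023, 2, 1))      # 22 days
```

Lower is better; aggregated across an attributed panel (say, as a median), it scores the efficiency of the path to diagnosis without waiting a decade for hard outcomes to accrue.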
A balanced scale diagram titled "Balancing Outcomes and Surveys in Value-Based Care," showing Outcomes Measures on the left pan weighted down by Attribution Lag and Risk Adjustment Inadequacy, and Survey-Based Measures on the right pan weighted down by Attribution Ambiguity and Adverse Selection Incentives.
Patient-reported surveys just don’t cut it.
-- Response rates are low
-- They’re biased towards more privileged patients
-- And they’re often gamed (see my last post)
So what to do about it?
-- Eliminating the EHO4All reward leaves a real policy gap. Plans serving vulnerable populations consistently score lower — bring it back.
So next time comments become available, let’s push for this. Maybe we can better align these incentives towards the outcomes we all want.
#MedicareAdvantage
-- Move prior auth practices, provider payment patterns, marketing, and encounter data completeness off Stars and into a standalone MA Transparency Scorecard
-- Congress needs to make the QBP budget-neutral. CMS can't do it alone through rulemaking
-- Narrow Star Ratings to outcomes, population health, and patient-reported experience measures
-- Shift to plan-level scoring so beneficiaries can compare options in their actual county
I was reading a Health Affairs article on just this and I liked their recommendations. I’d summarize it as “Better Stars + an MA Transparency Scorecard”.
buff.ly/XT7h1cw
Here’s what they proposed:
$87B in bonus payments since 2015, with no budget-neutrality requirement, unlike other Medicare quality programs (e.g., risk adjustment).
So what do we do about it?
Currently, the Star Rating program rewards the wrong things. There are 40+ measures across 9 domains, and they are overweighted toward process metrics and administrative indicators. So, what do you get?
Better documentation, not care delivery. And it’s expensive.
One of the major challenges I see repeatedly in my world is that it’s extremely difficult to align incentives with outcomes. Here’s an example from the Medicare Advantage space: the Star Ratings.
If you’re super good at planning out a project and writing prompts, you can create personalized software.
So, if you’re worried about your job prospects, it’s time to get AI literate. You don’t need to be a coding wiz or the person volunteering for the pilot, but you do need to be ready to adapt and learn to shift along with the changing systems.
buff.ly/tHrAyzZ
#HealthcareAI #HealthcareWorkforce #ValueBasedCare #HealthcareInnovation #HealthcareLeadership