RAI for the RAI

Marc Zimmet

Note to reader: This document expresses Zimmet Healthcare’s professional opinion on Artificial Intelligence in the resident assessment process; it is not intended to promote any product or service. Per firm policy, we offer no commentary on specific companies or their respective offerings.

Responsible Artificial Intelligence for the Resident Assessment Instrument

Medicare shifted reimbursement’s center of gravity from cost reports to the Minimum Data Set (“MDS”) in 1999. The MDS drives rate setting for most Medicaid systems as well, superimposed on OBRA ’87’s mandate to improve quality and care planning in Skilled Nursing Facilities (“SNFs”). The MDS has been updated through the years; while no major structural changes are on the horizon, it feels as if a major transition is upon us.

Is this the end of the “Manual Data Set”?

Technology is disrupting MDS-based reimbursement management more than any other area of SNF operations, and no company is disrupting Zimmet Healthcare’s legacy consulting services more than Zimmet Healthcare. Over 2,500 providers use our PDPM-Connect software or one of CMI-Connect’s 32 iterations tailored to state-specific Medicaid CMI mechanics. We’re not alone; the market for “MDS efficiency” software is growing rapidly.

These applications continuously scan each resident’s medical record, relieving the tedious burden of manually searching fragmented documentation. There are two approaches to facilitating the assessment process. Our Connect tools generate Alerts as reimbursement-sensitive services and conditions are entered into the EHR, with findings reported outside the MDS environment.
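
To make the distinction concrete, below is a minimal sketch of the Alert pattern (hypothetical names in Python; this is not Connect’s actual implementation): new EHR entries are matched against reimbursement-sensitive triggers, and findings are routed to a standalone report for the interdisciplinary team, never pre-filled into an MDS item.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical trigger list; real reimbursement-sensitive logic is far richer.
REIMBURSEMENT_SENSITIVE = {
    "ICD10:J96.01",            # acute respiratory failure with hypoxia
    "SERVICE:IV_MEDICATION",
    "SERVICE:ISOLATION",
}

@dataclass
class EhrEntry:
    resident_id: str
    code: str                  # e.g., "ICD10:J96.01"
    recorded_at: datetime

@dataclass
class Alert:
    resident_id: str
    message: str

def scan_entry(entry: EhrEntry) -> Alert | None:
    """Flag a reimbursement-sensitive entry for human review.

    The finding is reported outside the MDS environment; it never
    writes to, pre-fills, or suggests a value for an MDS item.
    """
    if entry.code in REIMBURSEMENT_SENSITIVE:
        return Alert(
            resident_id=entry.resident_id,
            message=(f"Review documentation supporting {entry.code}, "
                     f"recorded {entry.recorded_at:%Y-%m-%d}."),
        )
    return None

def report(alerts: list[Alert]) -> None:
    # Findings go to a separate report for the team, not into the assessment.
    for alert in alerts:
        print(f"[ALERT] resident={alert.resident_id}: {alert.message}")
```

The operative design choice sits in that last function: the software surfaces evidence for a human to weigh, and every MDS answer is left blank.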

Two years ago, we were pursuing a different approach. We introduced a browser extension that replicated the interface of a facility’s MDS software, eliminating the perception that a third-party application sat between the user and the assessment. At that point, we believed autonomous MDS completion was not only possible, but inevitable.

Then a nurse, working in a hospital 1,000 miles away, was found guilty in a case involving EHR process automation. The defense skillfully, but unsuccessfully, argued that inconsistencies between two electronic health record systems were to blame. That case led us to halt development of tools that operated within the MDS environment and instead pursue the differentiated Alert reporting that defines our Connect suite of applications today.

Minimum Data Suggestions

Clinical judgment is the foundation of compliant resident assessment. Automating MDS completion crosses a bright line that cannot be ignored. At issue are AI-generated “prompts” that function as digital proxies for clinical observation, offering what the algorithm has determined to be the most likely answer from the medical record. When embedded within the MDS environment, these “suggestions” more often act as directives than as neutral support tools. That is not opinion; it’s well-documented behavioral science.

The Psychology Section

En route to an Accounting degree I barely earned, the two most impactful college courses I took were in Psychology and Astrophysics. That hardly qualifies me as an expert in either field, but those studies taught me two things: objective observation is the bedrock of truth, and the power of suggestion is undeniable, even in the face of clear empirical evidence to the contrary. Applied here, algorithmic “suggestions” heavily influence decision-making. Interdisciplinary input and time constraints make the MDS process particularly susceptible to anchoring, confirmation bias, and cognitive offloading. In other words, we subconsciously process prompts as authoritative, even when we know the source lacks the nuanced clinical judgment and insight CMS requires for resident assessment.

The Machine Did-it Set

CMS mandates that each MDS reflect the resident’s clinical condition as directly observed. AI degrades that standard by shifting the assessor’s role from judgment to agreement. It’s not intentional; it’s human nature. But CMS does not consider intent when assessing accountability; CMS does not empathize. Intent is irrelevant when the effect is systemic, because even low-probability risk compounds at scale. In behavioral terms, this is a priming mechanism. In regulatory terms, until CMS explicitly permits it, prompting is a liability.

The Miscellaneous Dilemma Set

Most software agreements disclaim vendor liability: the provider acknowledges that AI findings are not infallible, yet assumes full responsibility for the consequences of non-compliant outputs. When validation processes are not maintained, the provider is exposed to claims of deliberate intent: knowingly billing for services that may not have been delivered.

There were other issues specific to the browser overlay that we had yet to reconcile, including HIPAA vulnerabilities and violations of EHR user agreements, and that was before the OIG issued its annual work plan targeting EHR manipulation for reimbursement purposes. After 30 years focused on compliance, we weren’t taking those risks.

Regulatory Expectations

CMS has not issued specific guidance on AI-enabled MDS tools, but existing standards are clear – the MDS process requires clinical judgment, interdisciplinary collaboration, and RN validation. The issue with reviewing prompted findings is that suggestion has already biased the process. In other words, the MDS must reflect direct clinical input, not AI probability matrices. Specifically:

  • Flawed systems are not a defense
  • Delegating clinical decisions to automation is prosecutable
  • Submitting claims with known error rates is willful negligence

Technology adapts quickly; Skilled Nursing does not. Software is helping providers make great strides in care management and administrative efficiency. Just remember that AI’s ability to perform an assessment does not change the regulations governing that assessment. The same goes for every aspect of a world changing so quickly it’s hard to keep up. The pace of change is accelerating, which means that at some point, we won’t be able to. That time is not yet upon us.

Zimmet Healthcare offers the following guidance for SNF operators using or considering AI-powered MDS efficiency software:

  • Validate AI-assisted MDS workflow with clinical oversight. This must be mandated and referenced in the provider’s corporate compliance plan.
  • Avoid “auto-suggestion” tools as they create potential compliance risks. Use AI to confirm human observational entries.
  • Explicit, rational audit trails, with RN validation and sign-off, are essential; a minimal sketch of one such record follows this list.
  • Monitor regulatory developments closely. Noncompliance is not excused by administrative oversights or third-party software updates.
  • Prepare a response toolkit for routine audits of AI-assisted claims. It should be amended to reflect AI embedded in the MDS workflow and include policies for unbiased authentication.
  • If you believe your validation process requires an external audit to ensure staff are following appropriate protocols for AI-assisted assessments, perhaps AI-assisted assessments are inadvisable for your facility.
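
To illustrate the audit-trail bullet above, here is a minimal sketch (again in Python, with hypothetical field names; neither CMS nor we prescribe a format) of a record that ties each AI-assisted finding to its source evidence, the nurse’s own rationale, and an RN sign-off recorded after direct review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One entry in an AI-assisted MDS audit trail (illustrative only)."""
    mds_item: str          # the assessment item at issue
    source_evidence: str   # where in the chart the finding originated
    ai_finding: str        # what the software flagged
    rn_rationale: str      # the nurse's reasoning, in their own words
    rn_signature: str      # validating RN, signed after direct review
    signed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_validated(self) -> bool:
        # Without an independent rationale and a signature, this is an
        # unvalidated software finding, not a completed assessment.
        return bool(self.rn_rationale.strip() and self.rn_signature.strip())
```

The point is not the schema; it’s that every AI-touched item carries an explicit, human-authored rationale a surveyor can follow.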

In the meantime, the question isn’t whether to explore MDS-efficiency tools, but how to balance workflow gains against compliance risk. We advise against adopting systems that generate answers requiring validation, because in practice, the validation process is likely to fade. Skilled Nursing operators have many needs; an MDS deepfake isn’t one of them.