A Q&A with Vimo Experts Bill McCracken and Kim Bazan 

Program integrity is a constant responsibility for agencies administering public benefits, shaping how programs are designed, staffed, and operated over time. Although fraud, waste, and abuse are often discussed as problems to be addressed after benefits are issued, the conditions that shape program integrity emerge much earlier, as part of how work is structured and carried out day to day. 

As part of our series on fraud, waste, and abuse (FWA), we wanted to better understand how FWA take shape in real-world operations. To explore this, we spoke with Bill McCracken, Vimo’s Medicaid, SNAP, and Safety Net Practice Lead, and Kim Bazan, Vimo’s Director of Client Success, about what they see across agencies and how process design, organizational change, and workflow-aligned technology influence outcomes long before issues surface. 

When organizations talk about fraud, waste, and abuse, what do you think is often missing from the conversation? 

Bill: What’s often missing is an understanding of when and where risk actually enters the process. Fraud, waste, and abuse are usually discussed as downstream issues: things to be detected after benefits are issued. But many of the contributing factors show up much earlier, especially during high-volume periods when staff are under pressure to move work quickly. 

Kim: I’d add that the conversation often overlooks how complex these environments really are. Staff are navigating nuanced policy, multiple systems, and incomplete or inconsistent information – sometimes all at once. When guidance isn’t embedded directly into workflows, staff and supervisors fill in the gaps however they can, which introduces inconsistency and risk. 

What’s often missing is a focus on upstream design: how processes, technology, and training work together to surface the right information at the right time. When that alignment is there, you don’t have to rely solely on catching issues later; you reduce the likelihood of them happening in the first place. 

Where do you most often see risk introduced in day-to-day operations? 

Bill: One of the most common places is during predictable surge periods, especially around renewals. These are times when workload volume is at its peak and timelines are tight. Staff are doing a tremendous amount of work very quickly, and the process can become focused on moving cases through rather than closely examining what may have changed. That’s when the risk of errors tends to increase – not because people aren’t capable but because the system doesn’t always slow things down where it matters. If workflows don’t prompt deeper review or follow-up during those moments, subtle issues can be missed. 

Kim: Another major source of risk is the handoff of cases between teams or systems – not because work is shared but because crucial context isn’t captured consistently as the case progresses. Many agencies distribute responsibilities to manage volume and maximize staff capacity, and challenges arise when cases move between teams without consistent documentation standards, shared visibility, or clear decision context. In those situations, institutional knowledge can stay with the person who first worked the case, so the next worker may be navigating multiple systems and relying on notes that don’t fully capture the original decision. 

How does policy nuance – and the way guidance is communicated – contribute to risk? 

Bill: Even when policy exists, it isn’t always as clear or as straightforward as people assume. A lot of eligibility policy is nuanced and open to interpretation, and clarifications often come through informal channels – emails, one-off guidance, or policy updates – that aren’t always easy to locate when someone needs them. When workers don’t have clear, embedded guidance at the moment they’re making decisions, they fill in the gaps as best they can. That can lead to inconsistency, especially during high-volume periods when there isn’t time to go searching for answers. 

Kim: And it’s important to recognize that many of the issues that surface aren’t intentional wrongdoing. They’re often the result of many factors: a misunderstanding of how income is reported, confusing biweekly versus monthly earnings, or not realizing how a change in circumstance should be reflected. The challenge for agencies is helping staff distinguish between normal client error and true anomalies that warrant additional verification. That distinction isn’t always obvious unless policy guidance, training, and workflow prompts are closely aligned. 

Bill: That alignment is critical. If policy is clear on paper but not translated into system logic or worker guidance, you end up relying on individual judgment. The more the system can support that judgment – by making expectations explicit and easy to follow – the less risk you introduce overall. 

How have digital and remote workflows changed verification and authenticity challenges? 

Kim: As more interactions move online or over the phone, verification becomes both easier and more complex. On the one hand, it’s faster to collect information and documentation. On the other, it’s harder to assess authenticity when documents are uploaded electronically and there’s no in-person interaction.  

Bill: The shift away from face-to-face interactions has changed the signals workers rely on. In the past, eligibility determinations involved handling physical documents and meeting with people directly. Today, much of that work happens digitally, which increases efficiency but introduces new kinds of risk. That doesn’t mean digital channels are a problem. It means systems need stronger verification and monitoring built in. Identity verification, multi-source data checks, and clear escalation paths help staff focus their attention where it’s most needed. 

Kim: And it’s not just about individuals. Some of the more effective approaches look for patterns like multiple applications coming from the same source, unusual activity across cases, or anomalies that stand out from normal behavior. Image manipulation and fabricated documentation are becoming more accessible, which means agencies can’t rely on the assumptions they once did.  

At the same time, most investigations don’t ultimately confirm intentional fraud: many issues turn out to be overpayments or innocent errors. That makes it even more important for agencies to be thoughtful about how and when additional scrutiny is applied – they can then respond to emerging risks without creating unnecessary barriers for legitimate applicants.  
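As a minimal illustration of the pattern-based checks Kim describes, an agency might count how often a submission source recurs across applications and flag the cluster for closer review. The field names and threshold below are hypothetical, not a real agency schema:

```python
from collections import Counter

# Hypothetical application records; field names are illustrative only.
applications = [
    {"case_id": "A-101", "source_ip": "203.0.113.7"},
    {"case_id": "A-102", "source_ip": "203.0.113.7"},
    {"case_id": "A-103", "source_ip": "203.0.113.7"},
    {"case_id": "A-104", "source_ip": "198.51.100.4"},
]

def flag_shared_sources(apps, threshold=3):
    """Flag applications whose submission source appears unusually often."""
    counts = Counter(app["source_ip"] for app in apps)
    return [app["case_id"] for app in apps if counts[app["source_ip"]] >= threshold]

print(flag_shared_sources(applications))  # ['A-101', 'A-102', 'A-103']
```

A flag like this would route cases to a human reviewer rather than deny them outright, consistent with the point that most anomalies turn out to be innocent errors.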

Why is it ineffective to treat every case as equal risk? 

Bill: Not every case carries the same likelihood or impact of error, but many systems are designed as if they do. When everything is treated as high risk, you slow down legitimate work and overwhelm staff without actually improving outcomes. What we’ve seen work better is identifying where errors are most likely to occur and focusing effort there. That requires looking at historical data, understanding error patterns, and acknowledging that some case characteristics are more prone to issues than others. 

Kim: There’s also a human factor. If workers are asked to scrutinize every case equally, they either burn out or start to rely on shortcuts. Risk-based approaches help staff apply judgment where it matters most, rather than spreading attention too thin. The goal isn’t to create barriers: it’s to be thoughtful about where additional review actually adds value. 

What does an effective early, risk-based review approach look like? 

Bill: One approach we’ve used is identifying scenarios that have a higher likelihood of errors and building in an early checkpoint – we call this a pre-authorization audit. States can define clear criteria such as higher benefit amounts, larger households, income over a set threshold, or error patterns already showing up in QC data. These cases can then be flagged for a second look before benefits are finalized rather than relying on post-issuance quality review. The goal isn’t to re-review every case: it’s to focus attention where early intervention is most likely to change the outcome. 
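A pre-authorization audit of the kind Bill describes can be sketched as a small set of named rules evaluated before benefits are finalized. The thresholds and field names here are hypothetical; each state would define its own criteria from its QC data:

```python
# Illustrative rule-based pre-authorization flagging.
# Thresholds and field names are hypothetical examples, not real state criteria.
RISK_RULES = [
    ("high_benefit", lambda c: c["monthly_benefit"] > 900),
    ("large_household", lambda c: c["household_size"] >= 6),
    ("income_near_limit", lambda c: c["reported_income"] > 0.9 * c["income_limit"]),
]

def pre_auth_flags(case):
    """Return the names of the risk criteria a case meets before issuance."""
    return [name for name, rule in RISK_RULES if rule(case)]

case = {"monthly_benefit": 975, "household_size": 4,
        "reported_income": 2100, "income_limit": 2250}
print(pre_auth_flags(case))  # ['high_benefit', 'income_near_limit']
```

Keeping the rules as an explicit, named list makes it easy to revisit the criteria over time, which matters given that these programs adjust criteria iteratively.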

Kim: That kind of preventative approach is far more effective than “pay and chase.” Once benefits are issued and spent, resolving issues becomes expensive and time consuming, and often the funds aren’t recoverable. Early review doesn’t have to be heavy-handed. When it’s targeted and informed by data, it can make work easier because staff know which cases deserve closer attention and which can move forward without delay.  

Bill: In states that have implemented this approach thoughtfully, we’ve seen measurable improvements. For example, one state tied pre-authorization review to defined risk criteria and focused reviews on key drivers like household composition, income, and shelter costs. In two years, that state saw its payment error rate decline from 11.54 percent to 9.42 percent while periodically revisiting the criteria to keep the review workload targeted and manageable. 

Across states using pre-authorization audits, the most effective implementations typically limit early review to a relatively small share of cases, often around 10 to 15 percent, and focus on preventing higher-dollar errors rather than increasing overall review volume. When the criteria are well calibrated, agencies report lower error rates for higher-impact cases, less rework after the fact, and little to no impact on timeliness.  

Another consistent theme is that this works best as an iterative process. States that adjust their criteria over time – by using QC trends, dollar-value analysis, and staff feedback – tend to sustain improvements while avoiding unnecessary friction. The focus stays on prevention where it matters most rather than expanding review across the board. 
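One simple way to operationalize the "small share, higher-dollar" idea above is to score cases by potential error dollars and review only the top fraction. This is an illustrative sketch with hypothetical fields and scoring, not a description of any state's actual method:

```python
# Illustrative only: cap early review at a small share of cases, prioritizing
# those where potential error dollars are highest. Field names are hypothetical.
def select_for_review(cases, cap_fraction=0.12):
    """Score cases and return only the top share for pre-authorization review."""
    scored = sorted(cases, key=lambda c: c["monthly_benefit"] * c["risk_flags"],
                    reverse=True)
    cap = max(1, int(len(scored) * cap_fraction))  # roughly 10-15 percent of volume
    return [c["case_id"] for c in scored[:cap]]

cases = [{"case_id": f"C-{i}", "monthly_benefit": 300 + 50 * i, "risk_flags": i % 3}
         for i in range(20)]
print(select_for_review(cases))  # ['C-17', 'C-14']
```

The `cap_fraction` parameter is the lever agencies would tune iteratively: widen it when QC trends show errors slipping through, narrow it when reviews stop surfacing issues.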

How do organizational change management, process design, and technology work together to support this approach? 

Kim: Technology is only one part of the equation. You can build sophisticated tools, but if staff don’t understand how or when to use them, or if the workflows don’t reflect how work actually gets done, they won’t deliver the intended results. That’s where organizational change management and business process redesign come in. It’s about clarifying roles, embedding guidance into workflows, and ensuring staff are trained and supported to use new tools consistently. 

Bill: We also see a lot of value in helping agencies fine-tune what gets flagged and why. If systems generate too many alerts, people stop trusting them. Consulting support helps agencies assess what’s high value, reduce noise, and focus on indicators that are most likely to surface real issues. When people, process, and technology are aligned, prevention becomes part of everyday operations, not an extra layer of work added on top. 

Continuing the Conversation on Fraud, Waste, and Abuse

Fraud, waste, and abuse are often discussed as isolated problems, but as this conversation highlights, they are deeply connected to how programs are designed, staffed, and supported over time. Approaches that focus on capacity, targeted review, and upstream verification offer a path toward stronger program integrity without adding unnecessary friction for staff or the people they serve. 

This article is part of Vimo’s ongoing series on fraud, waste, and abuse, which explores the issue from multiple perspectives – from system design and capacity, to day-to-day operations, policy nuance, and the practical realities agencies face in balancing access and integrity. Together, these conversations examine how thoughtful process design, organizational change, and technology can work in concert to prevent problems before they occur.