A Q&A with Ray Han, VP and Senior Director for Human Services Innovation and Products 

Fraud, waste, and abuse are often discussed as discrete problems that require additional oversight or enforcement. But in practice, program integrity issues are frequently symptoms of deeper structural challenges: fragmented systems, operational pressure, and insufficient capacity to do the work thoughtfully. As part of our series on fraud, waste, and abuse, we spoke with Ray Han, Vimo’s VP and Senior Director for Human Services Innovation and Products, about how program integrity is shaped long before issues surface and why building capacity through better design, automation, and verification is central to preventing problems without restricting access. 

When people talk about fraud, waste, and abuse, what do you think is often misunderstood? 

One of the greatest misunderstandings is treating fraud, waste, and abuse as a single problem with a single solution. They’re related, but they’re not the same thing. Fraud is criminal behavior. Waste is often error. Abuse sits somewhere in between. When we put all of that into one bucket, we tend to design responses that don’t match the problem we’re trying to solve. 

Another misconception is assuming these issues are primarily about bad actors or a lack of oversight. In practice, most program-integrity issues are symptoms of how work is structured: how systems, policies, and processes interact with workload and capacity. When people are navigating disjointed rules, fragmented systems, and intense time pressure, mistakes happen. That's not a moral failing; it's a design challenge. If we want to improve program integrity, we have to start by understanding those conditions, not just by adding more checks after the fact.

From your experience working directly with state programs, which conditions tend to create the most exposure to program integrity issues? 

The greatest exposure comes from the combination of fragmented systems and operational pressure. When I say “systems,” I don’t just mean technology. I mean processes, rules, policies, and tools that don’t align cleanly with one another. Workers are often expected to navigate all of that complexity while handling increasing workloads and tight timelines. There’s constant pressure to move faster and do more with fewer resources. Under those conditions, it becomes very difficult to slow down, interpret policy nuance, or ask the next question that might surface an issue. That intersection of disjointed systems and limited capacity is where program integrity issues most often emerge. 

You’ve talked a lot about capacity. Why is that so central to program integrity? 

Because capacity determines whether people can do their jobs well. Capacity isn't about reducing the workforce; it's about giving workers the time and tools to focus on the work that actually requires human judgment. If someone spends their day acting like an electronic typewriter, just keying things in, they don't have time to understand policy nuance, ask follow-up questions, or identify potential issues. Creating capacity means automating the low-risk, repetitive work so humans can focus on higher-risk, higher-complexity cases. When you create that space, you reduce waste because people aren't rushing, and you reduce fraud because you can actually apply scrutiny where it matters.

How do technology and automation help create that capacity in practice? 

The key is that technology shouldn’t be treated as a bolt-on solution for fraud, waste, and abuse – it has to be part of the core design. When verification and automation are built directly into the process, you reduce risk while making the experience faster and more efficient. Digital verifications are a good example. When verification happens upfront and in line with the application, you eliminate a significant amount of exposure right away. And at the same time, you improve the user experience by reducing manual steps and follow-up work.  

Automation plays a similar role. By automating low-risk, repetitive tasks, you free workers from acting like electronic typewriters. That capacity can then be redirected toward higher-risk cases and more complex situations that require human judgment. The result is better outcomes across the board: not just stronger program integrity, but a more sustainable way of working too. 

How do agencies sometimes respond to capacity constraints in ways that unintentionally increase risk? 

When capacity is tight, one common response is to reduce the amount of verification required to move work faster. That approach is often legal and aligned with policy, and it can feel like the only viable option in the moment. 

The problem is that it creates downstream exposure. When verification is loosened, fraud, waste, and abuse increase, and the response then becomes about adding quality control teams or oversight layers to catch issues later. That drains even more capacity, which puts additional pressure on the system and often leads to even looser rules. You end up in a cycle that’s very hard to break. Instead of asking how to move work faster by reducing controls, the better question is how to create capacity through automation and upstream verification so those controls don’t become a bottleneck in the first place. 

What does building capacity look like when it’s actually implemented in a real system? 

In practice, building capacity means designing systems that do as much of the low-risk, routine work as possible automatically, so human effort is reserved for situations that actually require judgment. That's the philosophy behind well-designed workflow technology. Verification is handled up front and in line with the application whenever possible, using digital data sources to confirm information in real time. That reduces the need for follow-up work, eliminates a large amount of risk early, and allows straightforward applications to move through the system efficiently.

At the same time, when a case is more complex or when someone needs in-person assistance, the system hasn't exhausted staff capacity elsewhere. Workers have the time to slow down, review documentation carefully, and help families navigate complicated situations without rushing. Capacity isn't created by lowering standards; it's created by designing workflows that match the level of effort to the level of risk. The result is a system where prevention is built in, access is preserved, and staff can focus on helping people rather than managing backlogs or chasing errors after the fact.