August 25, 2025

Closing the AI ROI Gap


A recent Tom’s Hardware piece highlighted a stat that’s been floating around in a lot of backchannel conversations lately: MIT Sloan found that 95% of enterprise GenAI projects aren’t showing any measurable impact on P&L. The study, spanning 300 public deployments, 150 leadership interviews, and 350 employee responses, makes one thing abundantly clear: the issue isn’t model quality, it’s misalignment. These tools are failing not because they can’t generate results, but because they’re introduced into environments that aren’t built to support them. MIT points to a persistent “learning gap” between what AI produces and how teams actually operate—a disconnect wide enough to stall even the most promising pilots.

That number hit home, not because it’s surprising, but because it’s showing up in conversations across nearly every industry we touch.

Adding “AI” has become a kind of checkbox across industries: something that gets plugged in with high expectations and little planning for what happens after deployment. What’s often overlooked is the vital conversation about how a new AI tool will live within the work people are already doing.

This comes up in more conversations than we can count, both within our team and with the chip companies we work alongside. Engineering teams don’t need more dashboards or speculative outputs. They need tools that meet them in the middle of the complexity and help move real work forward without adding more overhead. If what we’re building at Moores Lab AI doesn’t earn its place in the flow of real work, it’s not worth putting in front of a team that’s already running on limited time.

As our senior software engineer Alex put it:


“Most AI initiatives underperform because they don’t integrate deeply enough into the workflows where they’re meant to add value. That’s exactly the problem that VerifAgent was built to solve. Instead of being a disconnected ‘AI assistant,’ it is embedded directly into the hardware verification process, automatically generating UVM test plans, test benches, and test cases that compile and run against real RTL with simulation in the loop. By focusing on integration from the ground up, we ensure the AI’s output isn’t theoretical—it’s production-ready and measurably reduces the months of labor that verification engineers normally spend on verification.”

This isn’t about AI in the abstract. It’s about the work of building silicon where time, confidence, and clarity are often what stand between a good idea and a product that ships. For us, integration isn’t a challenge to solve later. It’s the foundation we started with.

And it’s not just a technical decision. It’s a philosophical one.

That idea about integration being the real lever also resonated with Lucas, another of our senior AI software engineers:

“AI creates the most value when it disappears into the workflow. Our job is to make the intelligence feel native to the tools verification engineers already use and trust. Our architectural conversations tend to revolve around this central point, and everything else becomes noise.

“You’ll be hard-pressed to positively impact your P&L by adding more parts to a workflow (like a totally separate AI step). AI can improve your P&L by dramatically shrinking your time-to-value with the process you already have, or by deleting some of it entirely. In order to achieve that, you must deeply understand and integrate.”

The MIT piece brings language to something that’s been building for a while now: GenAI, on its own, doesn’t create value. But intelligence that understands the context it’s entering—that can support engineering decision-making in real time, with the same rhythm and logic engineers use—that’s something worth building.

And more importantly, it’s something worth using.

You can read the full article here:

https://www.tomshardware.com/tech-industry/artificial-intelligence/95-percent-of-generative-ai-implementations-in-enterprise-have-no-measurable-impact-on-p-and-l-says-mit-flawed-integration-key-reason-why-ai-projects-underperform

The Founders
