Formal Verification’s Value Grows

Experts at the table: Semiconductor Engineering sat down to discuss why formal verification is becoming more important, with Ashish Darbari, CEO for Axiomise; Jin Zhang, product management group director for the Verification Group at Cadence; Sean Safarpour, executive director for R&D at Synopsys; and Jeremy Levitt, principal engineer for Digital Verification Technology at Siemens EDA. What follows are excerpts from that discussion. Part 1 of this discussion is here.


L-R: Axiomise’s Darbari; Cadence’s Zhang; Synopsys’ Safarpour; and Siemens EDA’s Levitt.

SE: In the past, formal struggled to show value. It was pigeon-holed as finding the bugs that simulation missed. But the true value of formal is to reduce the number of problems that simulation has to find. How has that dynamic changed within the user base?

Darbari: One of our customers recently said, ‘If you guys are going to find all the bugs, what are my DV guys going to find? Please do not work on these things. Let them go first.’ And I thought, ‘What exactly is going on here? Have we hired too many DV engineers?’ It is changing. This is a real quote.

Levitt: Originally, formal was billed as finding the really tough bugs, like the Intel Pentium FDIV bug. But over the past 30 years, we've moved it toward making tools easier to use, and ideally, they'll be finding bugs before simulation does, and you can triage them earlier. In addition, after simulation, it can also help you find those really tough bugs. It really has sweet spots on both ends.

Zhang: People still get excited when you find corner case bugs. That excitement is probably more than just saying, 'I can find no bugs, I'm signing off.' But adoption of formal as a signoff tool for blocks reduces the amount of simulation that has to be done, which is non-exhaustive and takes a long time. That is still the biggest value that people, especially management, understand — and the reason why they bring formal in. At the user level, being able to show off a corner-case bug is really exciting, but at the management level, being able to track progress in a similar way that they track simulation progress and coverage makes formal an equivalent counterpart to other verification techniques. The establishment of that value has been critical for the broad adoption that we've seen across our customers.

Darbari: If you find a corner case, and you can then prove the fix has been appropriately applied, that's a game changer for management. They can see they are not going to be re-spinning, because we know for sure this has been addressed. The value of the proof — an exhaustive proof — becomes quite apparent when the bug is found. And of course, that's the one that gets everybody to look at what the formal guys are doing.

Safarpour: I haven't come across any customer that has deployed formal as much as they would like to. It's always the case that they're not doing enough formal, even at companies that have hundreds of dedicated engineers with the title of formal verification engineer. Even for those accounts, they also want to do this block, and they want to check this feature, and they want to do this. The demand is definitely there. People who are in charge of verification always want to do more formal, and they're hitting this bar where they need more experts. How do they train more people? Even on the app side, we have meetings with executives all the time where they admit they are not even using all of the apps that they could. There is this mindset where engineers do what they know and what they're good at. If you've been running simulations for 20 years, you'll just keep running simulations, even for easy things like checking connectivity across blocks. They just write a simulation testbench to do it. But these things are really easy for formal to do. It's really quick to get the proof, and very easy to find the counterexamples for cases you missed in simulation. You no longer have to prove to people that you need formal. It's the opposite. There is a real pull from the market, so for us it's, 'How do we make the tool easier to use?' We've come a long way, and the dynamic is a little bit different.
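To make the connectivity point concrete, here is a minimal SVA sketch of the kind of check Safarpour describes. The module, signal names, and the assumed one-cycle register stage are all hypothetical; commercial connectivity apps typically generate thousands of assertions like this from a connection spreadsheet rather than having engineers write them by hand.

```systemverilog
// Hypothetical connectivity check: does the interrupt line at the source
// block boundary arrive intact at the destination block?
module connectivity_check (
  input logic clk,
  input logic rst_n,
  input logic src_irq,  // interrupt as driven at the source block
  input logic dst_irq   // same interrupt observed at the destination
);
  // Assumed: one register stage between the blocks, so the destination
  // sees the source value one cycle later.
  assert property (@(posedge clk) disable iff (!rst_n)
    dst_irq == $past(src_irq)
  ) else $error("irq connectivity broken");
endmodule
```

In a formal tool, a module like this would typically be attached to the design with a `bind`, and the proof either closes in seconds or returns a short counterexample showing the mis-wiring.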

Zhang: The formal consulting business, even after Oski went away, has a growing number of players. This speaks to the growing recognition of the value of formal and the desire to apply formal to more blocks. There are always more things you want to do with formal than the people you currently have. That's why a lot of service companies are coming into the space to try to fill the gap.

Darbari: AI designs are causing a massive pull for formal. It's because these architectures are so complex. A lot more bugs are likely to surface that otherwise wouldn't show up, and the cost of a respin of an AI chip is in the hundreds of millions of dollars. If you are late to the market, the lost opportunity is also huge. We have to understand that when Oski (acquired by NVIDIA in 2021) started the business nearly two decades ago, the dominant architecture was x86, with Arm coming in. Now you have RISC-V, you have Arm, you've got a whole library of LLM architectures, not just one GPU-centric architecture. It's a minefield of bugs. There are lots of good problems to solve, both from an application point of view and from a solver point of view. The adoption of formal is increasing. I do see some resistance, because DV engineers remain quite comfortable with writing simulation testbenches. If the senior people in the company know that a chip can actually come back for a respin, then the argument for formal does not have to be made. Fortunately, or unfortunately, UVM is still the dominant verification technology. We are seen as second-class citizens in some companies.

SE: Have we actually seen the turning point of people doing less simulation?

Zhang: I'm not sure there will be less simulation, because chips are getting more complex. The verification pool as a whole is growing. Formal is definitely growing tremendously. But I don't think that growth comes by sacrificing simulation or DV engineers.

Levitt: What we do see are some customers that identify particular blocks as new or complicated, and they want extra focus on them. They might designate those blocks to be done formally, because they have had problems in the past, or because a block is brand new and they want to make sure things go right. We're actually beginning to see this in people's thinking: What blocks are going to be difficult? What blocks do I want to prove with formal? And then for the rest of it, I'll do it the same way I have always done it with simulation, because that's been working for me. You are seeing a change in thinking, where more and more people are looking for those blocks they are going to verify with formal.

Darbari: We see blocks that have been predominantly verified with UVM now being given to us to look at certain features, or certain parts of the design, because they're considered to be highly risky. I do see a shift. Some of the designs that were considered too big for formal are now being verified: complex memory subsystems, cache coherence subsystems, NoCs, big storage IP blocks, networking designs, and bigger GPU building blocks. But we still need to do more DV. The scope of DV has shifted to a higher subsystem level. I don't know how much UVM you can do at the full system level; that's pretty much where emulation kicks in. In some companies, I do see fewer simulation cycles and more formal cycles compared to 10 years ago. Despite poor metrics for first-time signoff in recent survey results, the uptake of formal has increased; whether people are writing properties for the right reason, in the right way, is another question. Eventually, formal and emulation will be used more than simulation.

SE: Improvements in the quality of property development have been mentioned a couple of times. You also have mentioned AI and LLMs. Will AI start generating all of the properties that we need, and is it going to be a utopia? What's wrong with this message?

Levitt: As long as you have people writing the designs, you need people to verify them. You're going to have people involved in making sure the properties are checking what they want the design to do. Where LLMs really come into play, especially with SVA, is that a lot of people struggle to get the syntax right and to figure out the best way to express things. With LLMs, you can describe what you want, and the LLM can fill in that syntax and tell you how to write efficient properties. There's an immediate benefit when using LLMs to assist in writing properties. It's not going to generate the properties out of the box, but it is going to remove the excuse of syntax and formulation, which many users struggle with. It will democratize the writing of properties.
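As an illustration of the syntax gap Levitt describes, consider a sketch like the one below. The intent is easy to state in English; the SVA encoding is where many users stumble. The module and signal names are hypothetical.

```systemverilog
// Hypothetical request/grant handshake properties. English intent:
// "every request is granted within four cycles, and a pending request
// holds until it is granted."
module handshake_props (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // A rising request is granted within 1 to 4 cycles.
  assert property (@(posedge clk) disable iff (!rst_n)
    $rose(req) |-> ##[1:4] gnt);

  // An outstanding, ungranted request stays asserted the next cycle.
  assert property (@(posedge clk) disable iff (!rst_n)
    (req && !gnt) |=> req);
endmodule
```

Choosing between `|->` and `|=>`, bounding the delay window, and remembering `$rose` versus sampling the raw level are exactly the formulation details an LLM assistant can fill in from a natural-language description.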

Safarpour: I have a more bullish view on this. In the formal domain, LLMs are the thing that is going to change the game. The bottleneck has been the number of formal verification engineers. And we have this title, formal verification engineer. You shouldn't need that title to use formal verification. Regardless of whether it's the specification, the usage of a tool, or how you want to get to a proof, if you can explain the problem in natural language or an engineering language, it's going to lower the barrier to adoption. People are going to be able to use the tool much more easily. And on the chip design side, where we're seeing some of the biggest movement with LLMs and generative AI, formal is making a lot of progress. These are real problems that are preventing people from getting to the next level. I see the impact of generative AI on formal as being able to create the entire formal testbench, or cockpit, to verify a particular block. It's going to be a while until the LLMs are good enough, or we have enough data, enough material in the RAG or otherwise, to get the quality to where we need it to be. But even today, we have customers saying they're getting 25% to 35% improvement in their formal productivity. The challenge will always be there. I don't think the problem is going away. We can get to the easy problems faster, and then there'll be more time to spend on the harder problems. That is where you need people to think about the next thing. Can I model this better? How can I reduce this problem from 18,000 cycles and get it to prove at 18? Those problems are always there, because the underlying problem is NP-complete, so LLMs are not going to solve it outright, but they're going to do an amazing job opening it up to a bigger set of users.
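The 18,000-to-18 style reduction Safarpour alludes to is typically achieved by abstraction. The sketch below shows one common pattern under hypothetical names: cutting a long concrete timer and driving its "done" pulse from a free variable the solver can fire at any legal moment, so downstream properties no longer need thousands of cycles of unrolling.

```systemverilog
// Hypothetical timer abstraction. A concrete 15-bit down-counter would
// force the prover to unroll ~18,000 cycles before "done" can fire; the
// free variable below lets it fire at any cycle while a request is pending.
module abstract_timer (
  input  logic clk,
  input  logic rst_n,
  input  logic start,
  output logic pending
);
  logic done_free;  // left undriven: in a formal tool it is a free input

  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)         pending <= 1'b0;
    else if (start)     pending <= 1'b1;
    else if (done_free) pending <= 1'b0;
  end

  // Constrain the abstraction: "done" only fires while a request is pending.
  assume property (@(posedge clk) disable iff (!rst_n)
    done_free |-> pending);
endmodule
```

The proof then closes at a depth set by the surrounding logic rather than by the counter width; any counterexample still has to be reviewed against the concrete timer, since the abstraction over-approximates its behavior.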

Darbari: The barrier to writing basic SVA, grappling with syntax, has been a pain point. We tried to solve this with training. It's working out for some customers, but LLM-based technology, or AI technology, can lower the barrier to getting started, and maybe even act as an agent, like a copilot. We are not yet able to align with specifications — complex architectural, micro-architectural specifications. They are often thousands of pages. Designs are hardly ever built from scratch; they're usually derivatives with derivative specifications, and being able to even think about writing a complete set of properties is still a few weeks or months or years away, depending on how quickly you guys solve these problems. Then there is the problem of debug and proof convergence. All of these aspects would benefit. The biggest problem is data, which is why a data-driven approach on its own may not be adequate. You probably need more advanced AI techniques as well, because garbage in/garbage out is what can happen with LLM-based methods.

Zhang: I am super excited about where we are at this juncture. Generative AI will definitely transform how people do formal verification. If you look at the challenges today — writing the assertions, coverage, debugging, proof, and signoff — each of those stages has challenges that people need to overcome. Those challenges are making it hard to adopt formal widely. At each stage, there will be things GenAI can help solve. Talking about SVA generation is just the first step. For a lot of companies thinking about how to apply GenAI in verification, generating SystemVerilog assertions is the first idea. I don't think we will get to that utopia yet, where everything generated is perfect. Even doing my own coding, trying to create an app, you would think that writing that level of software should be a lot easier. The LLMs are winning programming awards, but I still struggle just to get the LLM to do exactly what needs to be done. Sometimes you fix something and you break something else. That level of frustration will be reduced with time. But we're not there yet. We're probably at level two or three in our march toward autonomous verification. But everyone is pushing forward, not just the EDA companies. Even our customers are looking very closely at what GenAI can do in the formal verification process. We are talking about solving some of the bigger challenges, but even for the little things that they do every day, GenAI can help smooth out that process. It's a huge value. I'm super excited to see a year or two from now whether we'll get to that utopian stage, and whether we really allow people to do vibe-coding for formal verification. That will be a really good goal for us to shoot for.
