Experts at the Table: Semiconductor Engineering sat down to discuss the state of functional verification with Mohan Dhene, director for architecture and design at Alphawave Semi; Andy Nightingale, vice president for product management and marketing at Arteris; Dinesha Rao, senior group director for software engineering at Cadence; Chris Mueth, new opportunities business manager at Keysight; Gordon Allan, director of verification IP products at Siemens EDA; and Frank Schirrmeister, executive director for strategic programs and system solutions at Synopsys. What follows are excerpts of that discussion. Part 1 is here. Part 2 is here.
(L-R) Dinesha Rao, Mohan Dhene, Gordon Allan, Chris Mueth, Andy Nightingale, Frank Schirrmeister.
SE: What new verification methodologies are needed to tackle the complexity problem?
Mueth: If you gave us all a spec, we would all come up with different designs, different implementations. The deviation happens during implementation. You’re designing to the spec, and you buy off on those implementations. But how we verify them is going to be different. We may have different methodologies for achieving the same requirements, and therefore there are different dimensions to how we might go about verification. Automation is the missing piece. How do you approach it? You could approach it with AI. There are tools out there that use AI to come up with test cases you haven’t thought of, but that’s still probably not good enough.
Schirrmeister: Why hasn’t portable stimulus, PSS, taken over the world? We introduced it more than 10 years ago, and yes, it’s used in some cases. There are distinct use cases where it’s applied. But it was trying to be at the system level exactly what constrained random was for IP. Why hasn’t it taken over the world?
Dhene: If you look at other tools like emulation, that’s not new. It was there two decades ago, but it only started to be adopted for new applications very recently. Finally, you got to a place where you can run certain workloads that were impossible to run in simulation. You get to that inflection point. It’s similar with formal solutions. Formal was there in the ’90s, and it was a small part of the work done by some teams, but it could not fully verify your instructions at the architecture level or for domain-specific architectures. Then formal tools became a necessity. Similarly, for portable stimulus, you get to a critical mass where you want to shift left. That’s when you hit the inflection point where you feel SV/UVM is not the solution you are looking for. You realize that with SV/UVM alone, you cannot do SoC verification. The only way you can verify these designs is with portable stimulus. But as long as people have an option they are happy with, one that can be used to tape out a chip, they will tend to stick to that path.
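(For context on the contrast the panelists are drawing, here is a minimal constrained-random SystemVerilog sketch. It is purely illustrative, with hypothetical names and ranges rather than anything taken from the participants’ flows: a transaction class constrains the legal stimulus space, and the solver randomizes within it. PSS set out to lift this declarative, solver-driven style of stimulus description from the IP level up to SoC-level scenarios.)

```systemverilog
// Illustrative only: a hypothetical bus transaction, not tied to any
// specific design, vendor flow, or speaker's methodology.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [7:0]  len;
  rand enum {READ, WRITE} kind;

  // Constraints describe the legal IP-level stimulus space; the
  // constraint solver explores it randomly.
  constraint legal_addr { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
  constraint legal_len  { len > 0; len <= 64; }
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (10) begin
      if (!t.randomize()) $error("randomization failed");
      $display("kind=%s addr=0x%08h len=%0d", t.kind.name(), t.addr, t.len);
    end
  end
endmodule
```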
Schirrmeister: Emulation was always there, but the growth has continued. It flattened a little bit around 2020, but now it’s really AI and data centers, because you really need to verify the workload. Perhaps there’s a point where portable stimulus can generate more data for it, more of a verification payload, where we might see a resurgence of similar technologies.
Nightingale: You bring up a good point about portable stimulus. There is a paradigm shift that’s needed, where if you build and verify a piece of IP in-house that’s going to be re-used, it shouldn’t be seen as verified unless it’s had some third-party VIP on it. This is where we will come to with chiplets. Arm is bringing out their chip-to-chip specification, but we should not believe the specification is fully verified by them. They need external verifiers, and this is the paradigm shift. They need guys from Synopsys and other EDA companies to come along and say, ‘This is our interpretation of your specification. This is how we think it should work.’
Schirrmeister: On top of that, you have the chiplet architect. You have CSS by itself, and you have CSA, which defines who holds which components, and so forth. Can I rely on that AI accelerator to do that for me? Who is accessing the memory? That adds another layer of complexity.
Nightingale: An exchange of VIP is necessary. We need to go from a single-provider philosophy to picking a verification partner for future IP. Now you verify with your interpretation of the interfaces, and then we can have more confidence, because our interpretation gets tested, your interpretation gets tested, and we actually find issues where a sentence in the spec means different things to different people.
Schirrmeister: The cynic in me then looks back at things like race conditions in simulators. We can’t even agree on how to implement the simulator, because of the way the language reference manuals are written. Even those are not complete.
SE: Is there optimism that verification will get better in the future? Is that because of AI?
Allan: It is because of AI. I’ve been wishing for years that the dead cycles on the CPUs in my machine could do some work for me between keystrokes. It’s as simple as that. Computers are faster than our brains. We are talking about whether we can change the level of symbolic architecture, PSS for example, or the granularity of the LEGO bricks in 3D-ICs, but if we can just get brute-force machinery to do the hard work for us, then we can verify anything. This is where AI comes into the picture. In terms of independence, the software guys solved this decades ago by introducing Extreme Programming, or pair programming. You work alongside one really good human brain, or perhaps two AI agents, each programmed differently, each with an orthogonal set of thinking, helping the expert designer get it right the first time, get it correct by design the first time. That’s just an example of how we might use agentic AI as part of our solution in the future. This is what we’re working on. We’re working on pieces of this, and we have a vision for how it might come about.
Schirrmeister: But it’s not only AI. AI is one hopeful thing. We are doing these things where tests are being generated without AI. AI helps to get us into more cases, but it’s not always AI.
Allan: AI is just another algorithm.
Dhene: I see it in three different ways. One is AI’s contribution to the whole ecosystem or infrastructure. We need to have earlier verification. Design and verification have been separated, but with agentic AI they are going to be tightly coupled. We are going to see a significant reduction in black-box verification, where you verify without knowledge of the design implementation. We will be moving toward white-box verification, especially with the various silicon agents working alongside a formal agent or simulation agent, in harmony with the human in the center. That is going to make a significant impact, beyond what an expert can do at this point. That is in terms of the whole ecosystem. AI is contributing significantly to point tools, as well. There is AI in simulation and AI in formal. In simulation, you can now get to the same coverage with fewer tests, so you can significantly increase throughput. Individual point tools are doing well because of AI. Lastly, for a DV engineer or a design engineer, the days of handwritten unit test cases are numbered, so even from that perspective AI is making significant progress.
Schirrmeister: That’s what I meant by test automation, but then you said something very important. The user remains in the center. AI is not replacing people. I’m not worried, if I am a verification engineer, about my job being replaced by AI. I’m worried about my job being replaced by another person who knows how to use AI.
Dhene: AI is not replacing jobs. It is changing the roles of people. There are certain roles that don’t need to be there because of AI, but people are still needed.
Schirrmeister: It’s really a problem of orchestration. Now you have this orchestration task. It’s getting there.
Mueth: It is a workflow transformation. As part of that workflow transformation, you’re now in an immersive environment. Instead of driving a tool, you’re looking at task-driven tools, and then you are shifting some of the verification left. If you have enough agents working for you as you’re designing, some of these things are going to get flagged without having to run a lot of back-end verification. You will always have verification. It’s absolutely needed. But some of it shifts left.
SE: At what point do the surveys agree that we have turned the corner and are now ahead of the problem?
Schirrmeister: We first have to define what first-pass success means.
Mueth: I wish we had a database of all the things that caused a re-spin, because then you can look for correlations, see if there are certain use cases, and exactly which use cases are driving the problems.
Dhene: Some companies are much better than others in terms of first-pass silicon. That data is quite interesting to look at. We see companies that have adopted the latest tools and methodologies, which can adapt themselves quickly to these rapid changes. Those have done quite well.
Schirrmeister: Functional issues are stable in terms of the number of re-spins they cause. But there is an increasing array of other issues that are causing re-spins. The figure says that 15% of designs do not have any of these issues. I don’t question the data. We just need to dig into it more.
SE: What do you think is the most important change we’re going to see in verification in the next year or two?
Nightingale: AI with mutation capability to get into those corner cases, and then the aggregation of marginal gains from all the technologies being developed. They are each adding together, and that would help that curve over time.
Schirrmeister: Agentic AI for verification. For the last 25 or 30 years, we have not been able to get verification ahead of design. Unfortunately, even with all the coming advancements, I don’t see any reason why that will change.
Allan: Agentic AI is a given, bringing together software and RTL design and verification. We separated them decades ago. We’re bringing them back together now with software-aware verification, because software is the critical component.
Mueth: Agentic AI, but also an expansion to the system level. Doing a chip by itself, maybe that’s more straightforward. Everything gets locked up when you’re doing higher-level integrations and more packaging and unknowns – multi-physics.
Dhene: Agentic AI driving more toward white-box verification will be a game changer.
Rao: From the chiplet perspective, we definitely need to capture more knowledge, which will be in the form of simulation models. They are required to do multi-chiplet simulations. We also need something that is power aware.