Physical AI holds the promise of making everything from robots to a slew of mobile edge devices much more interactive and useful, but it will significantly alter how systems are designed, verified, and monitored.
Physical AI systems need to work both independently and together. They must be able to make decisions quickly and locally, typically using far less power and with a much tighter focus than today's large AI systems. At the same time, they need to tap into the nearly unlimited processing of the cloud when necessary, and to make probabilistic choices in real time based on data that may be insufficient. And when coupled with agentic AI, these devices will need to interact with humans using natural language and gesture commands that are not always precise.
Bridging these domains, which in the past have been largely separate, will pose a series of new challenges for chip design teams. They will need to understand everything from the AI algorithms that must be integrated into the physical design process to the unique demands placed on chip architectures. And they will need to know how to leverage AI-driven tools for optimization, placement, and routing in order to meet tighter power, performance, and area constraints, and to handle extreme MAC computations, massive parallelism, and the integration of multiple types of memories.
“Over the last couple of years we heard a lot about AI, but from the data center side, and we know there are two sides to that,” observed Mick Posner, senior product marketing group director at Cadence. “There’s training, where models are trained, and that’s the NVIDIA story. But the actual usage is inference. Physical AI moves AI processing to the edge, to the actual device, and what defines physical AI is your interaction with your AI. It is latency-sensitive and needs to make decisions very quickly. Hence, the processing is done on the device, on the edge. In autonomous vehicles, this is your autonomous drive. It needs to know what’s going on around it, and it doesn’t have time to say, ‘Oh, I wonder what this is? I’ll go off to the data center to see if it knows.’ It needs to do it on the edge. And say your drone is flying and it’s navigating itself to your house. That needs to make decisions on the fly.”
Many technologies are required to enable physical AI. One is natural language processing. “You are telling your robot, ‘I would like you to get me a cup of milk.’ It needs to understand that, so a large language model is running there,” Posner said. “The physical side of it is taking action. ‘I am picking up the milk from the fridge.’ Similarly, in aerospace and defense, that processing needs to happen in real-time for object detection, target acquisition, etc.”

Fig. 1: Physical AI systems span multiple markets. Source: Cadence
For physical AI hardware, there is an AI perspective and a general hardware perspective. “Physical AI hardware can be thought of in two ways,” noted Hezi Saar, executive director, product management, mobile, automotive, and consumer IP at Synopsys. “The first is the ability to accommodate the changes in AI algorithms or AI machines. This includes all the learnings that may need to happen on the device, such as how the pipelining will work, how the data will be pre-processed, how you are going to use your resources in the best way, and how the system will perform and not crash. All of that needs to be thought through. It’s very difficult, so allowing a lot of margins and a lot of ways to accommodate things is very important.”
Physical AI often is viewed as a subset of edge computing. “The interaction with humans or interaction with the environment is what differentiates physical AI,” Saar said. “Edge AI is connected to the cloud, and it’s on the device. Physical AI is on the device and has unique capabilities beyond the regular edge AI. If I use AGI, as well — initially right in the phone, like an AI assistant — it’s mostly using software capabilities to interact. We could use audio to interact with that, or even text to get the output we want for that assistant. But when you say physical, it can relate to multiple kinds of applications. It has some kind of physical component, and it accepts information from the environment or influences the environment. In the physical domain, we can even look at robo-taxis as one of them. Robotics and drones also have these kinds of capabilities, with environmental inputs or outputs influencing them. We have AI agents that we create, which are software AI agents. The physical AI agents can be physical, and they can do work for us. The big change will be the social acceptance of how we go into it as a society, how people are going to accept something like that.”
Physical AI got its initial footing in industrial applications. “In a factory, let’s say you have multiple furnaces that need periodic maintenance and monitoring,” said Sathishkumar Balasubramanian, head of product for IC verification and EDA AI at Siemens EDA. “People would go in and check, climb the ladder, etc. With physical AI, we can predict a fault before it happens, or we can predict the maintenance cycle based on reinforcement learning, prior experience, and other data that we are monitoring through the sensors. We can learn from thousands of parameter combinations what will cause a failure for a given configuration, or when preventive maintenance is due. We can predict with physical AI — being able to monitor in real-time and get real-time data, and based on the model we have on Furnace A — that given it’s operating at a higher temperature because of its needs, it will require a replacement of a certain part at this point in time. You can pretty much predict everything. The whole idea of physical AI is to enhance the life experience of people — of the end-product and the people using it. In this case, you’re making sure the factory preventive maintenance goes flawlessly. That’s the end goal. Then, you don’t need as many people doing higher-risk jobs, where they have to jump in and figure out what went wrong or how to monitor that. It’s a confluence of industrial IoT with industrial AI on the edge.”
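As a rough illustration of the kind of predictive-maintenance rule Balasubramanian describes, the sketch below flags a furnace for service based on its operating temperature and hours since the last overhaul. The wear model, thresholds, and field names are hypothetical, chosen only to show the pattern of turning monitored sensor data into a maintenance prediction.

```python
# Minimal sketch of the furnace-monitoring idea described above. All names,
# thresholds, and the wear model are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class FurnaceReading:
    temperature_c: float        # current operating temperature
    hours_since_service: float  # runtime since the last overhaul

# Assumed wear model: parts rated for 8,000 hours at 900 degC, and every 10 degC
# above that roughly halves the remaining rated life (illustrative numbers).
RATED_HOURS = 8000.0
RATED_TEMP_C = 900.0

def estimated_remaining_hours(reading: FurnaceReading) -> float:
    derating = 2 ** max(0.0, (reading.temperature_c - RATED_TEMP_C) / 10.0)
    effective_rated = RATED_HOURS / derating
    return max(0.0, effective_rated - reading.hours_since_service)

def needs_maintenance(reading: FurnaceReading, margin_hours: float = 500.0) -> bool:
    # Flag the furnace before the predicted failure point, not after it.
    return estimated_remaining_hours(reading) < margin_hours

if __name__ == "__main__":
    furnace_a = FurnaceReading(temperature_c=930.0, hours_since_service=700.0)
    print(estimated_remaining_hours(furnace_a), needs_maintenance(furnace_a))
```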
Physical AI at the system level
There are also system-level considerations for physical AI, because it is doing a lot more localized edge computation and a lot more communication. “It’s not just the loop to the data center and back, which you still have to deal with and is no different than any other edge device,” said Michal Siwinski, chief marketing officer at Arteris. “But these systems are going to become intertwined. It’s like the hardware version of agentic AI when you’re dealing with these things. They’re not in isolation anymore. They’re part of their own system at the edge, behaving and communicating. You have huge bandwidth and a huge amount of communication. You have all kinds of different compute. Physical AI basically means the system has smarts. It means you’re building one gigantic supercomputer in the middle of it, and a lot more distributed computing needs to be synchronized. When we think through the elements, there is still the problem of the car. You have to deal with lidar, radar, and vision. You probably have all three for a variety of reasons because chances are, you’re creating a mission-critical system where liability and regulation are going to force you to do a whole bunch of stuff sooner or later, particularly in things where you have a high ability to impact lives. So you’ve got to deal with all of that well.”
Physical AI involves movement. It requires sensors, MEMS devices, and analog/mixed-signal systems. “This means we now must talk about mixed-level signals, and figure out how that information is flowing,” Siwinski said. “On top of that, you have to take all of that and orchestrate it. It could be distributed or non-distributed, and it will have a centralized brain because of all these disparate systems. You really cannot let it go. If you let it go by itself, it will probably go haywire, so you still need to orchestrate and synchronize, which means it’s like a data center on the go. And how does that work, given that you have the battery and energy realities of an edge device, while still trying to compute and be able to potentially have some training, not just inference? You want these systems not to be completely dumb, and you want them to be more autonomous to some degree. So you have to give them some localized training ability, and that means different software like TinyML and those kind of things, besides just connecting the factory, so that when you plug it into the wall you can sync up to the data center and upgrade all the latest firmware and the like. In reality, when you’re operating and potentially charging, it has to work continuously, versus breaking all of a sudden. This is a whole different ball of wax than all of those systems on their own.”
To put this in perspective, physical AI is the convergence of AI and the physical world. “In the past, AI was confined to the digital realm,” observed Artem Aginskiy, general manager of high-performance processors at Texas Instruments. “It collected information, processed it, and output answers, but it could not reason or take action in the physical world. With physical AI, the data processing that used to occur primarily on remote servers is now happening locally, resulting in real-world actions, like a robot moving a box or a car stopping autonomously. Similar to the concept of edge AI, which is the capability of applications at the network edge to use AI locally through embedded processors like MCUs and MPUs and software, the key difference is that physical AI is not just sensing and processing, but also refers to the actions those applications take. Both physical AI and edge AI bring more intelligence and automation to where they can have the biggest impact.”
Physical AI applies artificial intelligence directly to the real world, where machines must sense, interpret, and act in environments shared with humans. It fuses multi-modal sensing, edge computing, and continuous learning so that robots, vehicles, and infrastructure assets can act autonomously and safely.
“While most AI systems are running out of new data to learn from, physical AI is just getting started, unlocking billions of previously untapped signals from the physical world and turning them into actionable insight,” said Alex Hawkinson, founder and CEO of BrightAI. “Physical AI is a real-time operating system for the physical world, making physical assets like pipelines and power poles fully observable and data-driven.”
Different data, different processing
The data used for edge AI and physical/endpoint AI is different, and so is what can or needs to be done with that data.
“When you do AI at the edge, you get much lower latency, which means your data rates go way down and you’re not sending as much data back,” said Carlos Morales, vice president of AI at Ambiq. “From a power point of view, that means your power goes down because of radio frequency transmission. You can’t really get much better there, but if you have basically pre-computed, you can transmit a lot less. For example, transmitting your ECG data is 40 megabytes per minute versus transmitting your heart rate at 40 bytes per minute, so doing the AI there saves power.”
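A back-of-the-envelope sketch of Morales' point follows. Only the 40 MB/min versus 40 bytes/min figures come from the quote; the per-byte radio energy cost is an assumed placeholder, so the absolute energy numbers are illustrative and only the ratio matters.

```python
# Back-of-the-envelope sketch of the bandwidth/power point above. The radio
# energy figure is an assumption (nanojoules per transmitted byte), not a
# measured value; only the 40 MB/min vs. 40 B/min comparison is from the quote.
RAW_ECG_BYTES_PER_MIN = 40 * 1024 * 1024   # stream the raw ECG waveform
HEART_RATE_BYTES_PER_MIN = 40              # send only the derived heart rate
ASSUMED_NJ_PER_BYTE = 100.0                # hypothetical radio cost per byte

def radio_energy_mj_per_min(bytes_per_min: float) -> float:
    return bytes_per_min * ASSUMED_NJ_PER_BYTE * 1e-9 * 1e3  # nJ -> mJ

raw_mj = radio_energy_mj_per_min(RAW_ECG_BYTES_PER_MIN)
edge_mj = radio_energy_mj_per_min(HEART_RATE_BYTES_PER_MIN)
print(f"raw stream: {raw_mj:.1f} mJ/min, edge-processed: {edge_mj:.6f} mJ/min")
print(f"data reduction: ~{RAW_ECG_BYTES_PER_MIN / HEART_RATE_BYTES_PER_MIN:,.0f}x")
```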
Physical AI developers are hungry for the ability to tap into that real-world data. “We see a lot of focus on power and power-efficient designs on the edge so that the data can be monitored, because physical AI is dependent on the interaction with the real world and with electronics,” said Siemens EDA’s Balasubramanian. “It’s really specific to the given use-case scenario. For example, an industrial robot might have different scenario requirements, but the fundamental thing is being able to tap into real-world data efficiently and to make decisions fast, because in physical AI, if I say, ‘Robot, go pick it up,’ I can’t wait for five minutes before the robot understands and does it. It needs to make decisions fast and autonomously, which means we need very efficient domain-specific models and efficient code at the edge, in that device, in that physical AI robotics system, so there is no lag in behavior or function.”
Latency is key. “Physical AI robots now have sensors on every one of their joints,” said Ambiq’s Morales. “AI is being deployed there, and there is a control loop. Instead of going back to the brain, which is how an octopus does it, it has a little brain in every arm. You don’t want one brain slowing the whole thing down. This means physical actuation, like anomaly detection right at the sensor, is important. Also important is turning on different models, depending on events. Think about a health-care monitoring system that is sampling some bio signal every five minutes because it detects something. Maybe there’s a correlation between starting a run. You want to measure it at a much higher resolution around the run. You’re detecting that you began an event like a run, a sleep, or whatever, and then you go into high-resolution mode for a few minutes. You’re still saving power because you’re not doing it forever. Then the device knows this person is not having a heart attack and goes back to sleep. We see that pattern with smart speakers and other devices.”
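The event-triggered pattern Morales describes can be sketched as a small sampling state machine. Everything here, including the event detector, the sample rates, and the length of the high-resolution window, is a hypothetical placeholder for whatever on-device model a real design would use.

```python
# Illustrative sketch of the event-triggered sampling pattern described above.
# The event detector, sample rates, and window length are all hypothetical.
import time

LOW_RATE_HZ = 1 / 300.0     # one sample every five minutes in idle mode
HIGH_RATE_HZ = 50.0         # dense sampling once an event (e.g., a run) starts
HIGH_RES_WINDOW_S = 180.0   # stay in high-resolution mode for a few minutes

def detect_event(sample: float) -> bool:
    # Placeholder for a small on-device model (e.g., run/sleep detection).
    return sample > 0.8

def sampling_loop(read_sensor, run_inference):
    mode, window_end = "low", 0.0
    while True:
        now = time.monotonic()
        sample = read_sensor()
        if mode == "low" and detect_event(sample):
            mode, window_end = "high", now + HIGH_RES_WINDOW_S
        elif mode == "high" and now >= window_end:
            mode = "low"                  # event over: back to the sleep rate
        if mode == "high":
            run_inference(sample)         # heavier model runs only during events
        time.sleep(1.0 / (HIGH_RATE_HZ if mode == "high" else LOW_RATE_HZ))
```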
Industry insiders expect a surge of robots executing AI-driven perception, planning, and control in the real world. “Here, chip companies face demand for heterogeneous, deterministic edge SoCs (CPU/GPU/NPU/DSP) that meet hard end-to-end deadlines for sensor fusion, policy, and actuation under tight power and safety constraints,” said William Wang, CEO of ChipAgents. “Front-end/verification priorities shift to (1) latency-bounded dataflows (scratchpads, DMA, QoS, interrupt latency); (2) mixed-criticality isolation (safety islands, lockstep, ECC); (3) real-time properties in RTL (deadline/throughput assertions, WCET-style checks); and (4) ML-in-the-loop verification where perception errors are modeled alongside controller logic. EDA must co-design hardware, software, and machine learning through simulation with physics/digital twins, scenario-based stimulus generation, formal checks on control invariants, and coverage that spans corner-case environments and RTL states.”
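A software analogue of the deadline-style checks Wang lists might look like the sketch below, which verifies that each stage of a sensor-fusion, policy, and actuation pipeline stays within its latency budget over a simulated frame. The stage names and budget numbers are assumptions for illustration, not figures from any real SoC.

```python
# Hedged sketch of end-to-end deadline checking for a perception/control
# pipeline. Stage names and budgets are hypothetical.
STAGE_BUDGETS_MS = {"sensor_fusion": 5.0, "perception": 12.0,
                    "policy": 8.0, "actuation": 2.0}
END_TO_END_BUDGET_MS = 30.0

def check_deadlines(measured_ms: dict[str, float]) -> list[str]:
    violations = []
    for stage, budget in STAGE_BUDGETS_MS.items():
        if measured_ms.get(stage, float("inf")) > budget:
            violations.append(f"{stage}: {measured_ms[stage]:.1f} ms > {budget} ms")
    total = sum(measured_ms.get(s, 0.0) for s in STAGE_BUDGETS_MS)
    if total > END_TO_END_BUDGET_MS:
        violations.append(f"end-to-end: {total:.1f} ms > {END_TO_END_BUDGET_MS} ms")
    return violations

# Example: one simulated frame where perception blows its budget.
print(check_deadlines({"sensor_fusion": 4.2, "perception": 14.9,
                       "policy": 6.0, "actuation": 1.1}))
```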
Agentic AI tools from ChipAgents and others are used to orchestrate these requirements. They can auto-partition perception versus control between accelerators and software, synthesize RTL that honors cycle-accurate latency budgets, and generate scenario suites from simulators (e.g., dynamic occlusions, slip, delays) to stress interfaces and safety paths. They also can continuously trade off PPA vs. closed-loop stability/throughput, shrinking iterate-test-fix cycles for robotics silicon.
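On the scenario-suite side, a minimal sketch of randomized stress-case generation might look like the following. The parameter ranges and fields (occlusion, slip, sensor delay, dropped frames) are illustrative only and do not reflect any particular tool's format.

```python
# Hedged sketch of scenario-suite generation for stressing interfaces and
# safety paths. All fields and ranges are illustrative assumptions.
import random

def generate_scenarios(n: int, seed: int = 0):
    rng = random.Random(seed)
    for i in range(n):
        yield {
            "id": i,
            "occlusion_pct": rng.uniform(0, 60),            # dynamic sensor occlusion
            "wheel_slip": rng.uniform(0.0, 0.3),            # traction loss
            "sensor_delay_ms": rng.choice([0, 5, 20, 50]),  # late sensor frames
            "dropped_frames": rng.randint(0, 3),
        }

for scenario in generate_scenarios(3):
    print(scenario)
```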
Physical AI shifts some verification from the lab to the field. “Models and hardware must be proven not just in simulation, but under real-world conditions like weather, wear, and human error,” BrightAI’s Hawkinson said. “That demands new design processes and continuous validation. That means ensuring sensors, chips, and AI models function together as an integrated system that can operate with minimal human oversight.”
Among the many other requirements of physical AI is privacy, especially in devices that listen. “It doesn’t hear what you’re saying, and it doesn’t really understand your words, but based on your speaking and voice patterns, it can detect incipient dementia,” said Ambiq’s Morales. “I don’t want anybody hearing about that, so privacy is important, along with security. Developers also value low latency in terms of the immediate response and the power savings.”
Conclusion
The upcoming physical AI design flood will be characterized by a wide variety of design starts. “There are a lot more and differing requirements, so the focus will be on customized chip development based on different verticals, from industrial and medical to automotive and consumer,” said Siemens EDA’s Balasubramanian. “The end requirements will be different, and the operating conditions will be different, because the moment you go into physical AI you are exposed to the elements a lot more than a regular chip. You’re not in a controlled environment, so designing a robust thing, understanding the requirements of where the end product is going to be used, will be key. Chip and system architects need to think through it, understand what the power, performance, and area envelope is, along with the operating conditions. That is driven partly by the software stack, and it guides the entire development of that particular chip or system. Success with physical AI will be about not trying to boil the ocean. Pick an area where you do well, then replicate that use case into a lot of similar use cases.”
And the challenges will go even further. “Think of that as its own monolithic design,” said Cadence’s Posner. “If you’re just doing simple object detection, like in a security camera, that’s enough. But as you get to more complex capabilities, you’ll need an AI accelerator package with it. You’ll need a CPU. You’ll need your domain-specific, or multiple of those. We have a customer who’s looking to create five different chiplet dies, but then create a range of 20 different products from those same five, because they can scale up the CPU, scale up the AI, scale up how they do the system processing. And that’s the design challenge for the future — layering all those together. With our own internal development, because we have multiple dies, we’re doing not only co-simulation across the die. We’re also doing co-emulation. These are not small designs. The system chiplet is 120mm². The AI chiplet is 200mm². These are significantly large designs, with millions of lines of software code running on top of them. People would typically look at that and say, ‘These are large, monolithic designs on their own, and now you want them to work together with a unified software layer?’”
To do that, EDA tools, verification, and emulation all will need to scale exponentially, because what was hard to manage as a single die is now much more difficult with multiple dies. And engineers will need to keep track of all of this, and figure out how they can divide and conquer the design of a system that can react optimally to different sensor input while also continuing to learn and evolve throughout its lifetime.