The Efficiency Reckoning:
Is 2026 the Year AI Hits the Wall (and the Real World)?
Welcome to 2026!
If you’re reading this, you’ve survived the “Peak of Inflated Expectations.” For the last three years, the tech industry has collectively hallucinated that if you just fed enough GPUs enough text, you could solve physics. We treated energy like it was infinite, data like it was free, and the physical world like it was just a low-resolution video game we could “solve” with a transformer model.
Well, the hangover is here.
It’s January 11th. I’m writing this from Cambridge, where the sky is grey, the coffee is strong, and the reality of Physical AI is finally setting in. The era of “magic” is over. The era of hard engineering has begun.
Looking at the news from the first week of the year, from the floors of CES in Las Vegas to the boardrooms of energy giants, a new narrative is emerging. It’s not about scaling laws anymore. It’s about thermodynamic limits. It’s about complexity bottlenecks. And it’s about an uncomfortable truth I’ve been talking about for the past few years at the helm of my company, Secondmind: Brute force doesn’t solve the problems of the physical world.
This is the inaugural post of G on AI. My goal isn’t to hype the latest chatbot features (unless they really matter) or convince you that this new generation of software will fundamentally and rapidly change the world (it already has). I aim to find the real signals in the AI noise and provide a pragmatic and practical perspective. I am anti-hype, pro-human, and an evangelist of efficiency. And I try to balance optimism with just enough skepticism in the pursuit of creating better things that actually work.
Here are the three signals from the last week that tell me the rules of the game are changing.
1. The “Shift Left” Reality Check: Doubling Down on Virtualization
Let’s start with a piece of news that might look iterative on the surface but is actually fundamental. At CES, Synopsys announced major expansions to its virtualization ecosystem, including deep partnerships with Samsung, NXP, and Texas Instruments.
Now, if you’ve been in automotive engineering for more than five minutes, you know that Synopsys Virtualizer Development Kits (VDKs) aren’t new. The concept of the “Shift Left” (moving testing earlier in the cycle by validating software virtually, before the physical silicon is baked) has been the industry’s quest for over a decade.
This week’s news makes it clear: The industry isn’t giving up.
Despite the immense difficulty of accurately simulating the chaos of the real world, OEMs and Tier 1s are doubling down on virtualization because they have no other choice. We are trying to validate 100+ million lines of code in vehicles that generate 25 terabytes of data an hour. The old “build-and-break” physical prototyping model is mathematically bankrupt. You simply run out of time.
By integrating Samsung’s ISOCELL sensors and NXP’s latest compute architectures into the virtual workflow, Synopsys is signaling that the only way out of this complexity crisis is through it. They, like others that have tried before them, are building the runway for the Software-Defined Vehicle to actually take off.
But here is the counter-intuitive trap:
Moving complexity from the track to the cloud doesn’t eliminate it; it just digitizes it. If you swap a billion miles of physical driving for a billion miles of virtual driving, you haven’t gotten smarter; you’ve just traded a gasoline bill for an AWS bill.
This is where the industry gets it wrong. Virtualization is necessary but insufficient. High-fidelity simulation generates petabytes of synthetic noise. To make this work, we don’t need more data. We need data-efficient AI systems like the one we’ve built, Secondmind Active Learning, that can look at a near-infinite virtual design space and say, “Don’t run those million simulations. Run these ten. These are the ones that matter.”
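To make that concrete, here is a toy sketch of the underlying idea (not Secondmind’s actual code): fit a cheap surrogate model to the handful of simulations you have already run, then let an uncertainty-driven rule choose which of the remaining candidates deserve the next expensive simulation. The simulator, the kernel, and the acquisition rule below are all illustrative assumptions.

```python
# Illustrative active-learning loop over a simulation design space.
# A cheap placeholder stands in for an expensive high-fidelity simulator,
# and plain uncertainty sampling stands in for a production acquisition
# function. Not Secondmind's actual implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_simulation(x):
    """Stand-in for an expensive simulation (e.g. one virtual test case)."""
    return np.sin(3 * x[0]) + 0.1 * rng.normal()

# A large pool of candidate design points we could simulate, but won't exhaustively.
candidates = rng.uniform(0.0, 2.0, size=(10_000, 1))

# Seed the surrogate with a handful of simulations.
X = candidates[:5]
y = np.array([run_simulation(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

budget = 10  # total extra simulations we can afford
for _ in range(budget):
    gp.fit(X, y)
    # Ask the surrogate where it is least certain across the whole pool...
    _, std = gp.predict(candidates, return_std=True)
    # ...and spend the next simulation exactly there.
    pick = candidates[np.argmax(std)]
    X = np.vstack([X, pick])
    y = np.append(y, run_simulation(pick))

print(f"Ran {len(X)} simulations instead of {len(candidates)}.")
```

A production system would use a real simulator and a richer acquisition function, but the budget arithmetic is the point: tens of well-chosen simulations instead of millions of blind ones.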
The victory of 2026 won’t go to the company with the biggest digital twin. It will go to the company that knows which questions to ask it.
2. The Nuclear Option: A Monument to Inefficiency
While the auto industry tries to simulate physics, the hyperscalers are trying to cheat thermodynamics.
Meta (Facebook) just signed deals to secure nearly 6.6 gigawatts of nuclear power to feed its AI data centers. To put that in perspective: 1 gigawatt powers about 750,000 homes. Meta is effectively commissioning a power infrastructure capable of supporting a major metropolitan area, just to run matrix multiplications.
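As a back-of-envelope check, taking that homes-per-gigawatt figure at face value:

$$6.6\ \text{GW} \times 750{,}000\ \tfrac{\text{homes}}{\text{GW}} \approx 5\ \text{million homes}$$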
Let’s apply the skeptic’s lens here. Is this a triumph of AI, or a signal of massive algorithmic failure?
The Silicon Valley dogma is “Scale is All You Need.” But in the physical world, scale comes with a cost. The fact that our “smartest” models require the energy output of a fission reactor to function is a damning indictment of their efficiency. It is the Jevons Paradox in action: as hardware becomes more efficient, we don’t consume less energy; we just build monstrously bigger models and consume even more. This is the antithesis of the engineering mindset.
The automotive industry fights for every milliwatt. The Hyundai/DEEPX chip—also announced this week—runs on-device Operational AI using just 5 watts. That is actual intelligence. That is intelligence that respects the constraints of reality.
Building a nuclear plant to run a chatbot isn’t innovation; it’s a band-aid. It’s a brute-force hedge against the fact that we haven’t figured out how to make AI efficient yet. The winners of the next decade won’t be the ones with the most uranium; they will be the ones who can do the most math per watt.
3. The End of Hallucination? Physics Strikes Back
Finally, let’s look at where the rubber actually meets the road (or where the molecule meets the receptor).
For the last two years, “Generative AI” for science has been a bit of a parlor trick. You could ask an LLM to design a battery or a drug, and it would give you something that looked plausible but violated the laws of physics. It was hallucination as a service.
This week, we saw a shift. Schrödinger partnered with Eli Lilly, and Google DeepMind opened a robotic “closed-loop” laboratory. These are just two examples of the rise of Physical AI.
The shift here is subtle but profound. We are moving from “Open Loop” (training on the internet and guessing) to “Closed Loop” (proposing a candidate, simulating it with physics, testing it with a robot, and feeding the truth back into the model).
This validates everything we believe at Secondmind. Physics is the ultimate ground truth. You cannot cheat gravity, and you cannot cheat chemistry. The integration of Lilly’s “TuneLab” into Schrödinger’s physics-based platform creates a workflow where AI proposes and physics disposes. It eliminates the hallucination problem because the simulation enforces the rules of reality.
Furthermore, the robotic lab is an example of how Secondmind Active Learning could work in the physical world. Instead of high-throughput screening—where you blindly test millions of compounds hoping for a hit—the AI learns the chemical space and directs the robots to test only the high-probability candidates. It reduces the search space by 80%. It does more with less.
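For readers who want the shape of that loop spelled out, here is a minimal, hypothetical sketch in plain Python. The simulator, the robot, and the selection rule are all mocked placeholders; none of this is Schrödinger’s, Lilly’s, or DeepMind’s actual tooling.

```python
# A toy "closed-loop" discovery workflow: propose candidates, screen them with
# a physics check, send one to a (mock) robotic experiment, and feed the
# measured truth back before the next round. Every function is a placeholder.
import random

random.seed(0)

def robot_experiment(x):
    """Lab step: the expensive ground-truth measurement, mocked as a noisy score."""
    return -(x - 0.62) ** 2 + random.gauss(0.0, 0.005)

def physics_screen(candidates):
    """Simulation step: reject proposals that violate known constraints."""
    return [c for c in candidates if 0.0 <= c <= 1.0]

def propose(best_x, n=200, spread=0.3):
    """Generative step: suggest new designs around the current best guess."""
    return [random.gauss(best_x, spread) for _ in range(n)]

observations = [(0.10, robot_experiment(0.10))]  # one seed experiment

for round_number in range(5):
    best_x, _ = max(observations, key=lambda obs: obs[1])
    feasible = physics_screen(propose(best_x))
    # Crude stand-in for active learning: test the feasible candidate
    # farthest from anything we have already measured.
    pick = max(feasible, key=lambda c: min(abs(c - x) for x, _ in observations))
    observations.append((pick, robot_experiment(pick)))  # close the loop
    print(f"round {round_number}: tested {pick:.3f}")
```

The point is where the ground truth sits: inside the training cycle, not scraped from the internet after the fact, so the model cannot drift away from reality.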
The Takeaway: The Year of the Engineer
If 2023-2025 was the era of the “Magician”—the demo jockeys and the hype merchants—2026 is the year of the Engineer and Physical AI.
The constraints are closing in. The grid is maxed out. The testing timelines are compressed. The cost of capital is real. And the “magic” doesn’t work when you’re trying to keep a car on the road or a data center from melting down.
The companies that win this year will be the ones that embrace efficiency. They will be the ones that reject the “more data” dogma in favor of “smart data.” They will be the ones that realize that software ultimately has to live in the physical world, and the physical world doesn’t care about your scaling laws.
Let’s get to work!
— G.
