Every chip in your stack is fighting physics
Every digital chip does the same expensive trick: it keeps electrons in order while the environment pushes them toward disorder. As transistors shrink and AI workloads grow, the cost of keeping bits stable keeps rising.
Thermodynamic computing starts from a different premise. Instead of suppressing thermal noise, it uses it. Instead of treating randomness as a defect, it treats randomness as a computational resource.
How it works, in engineer terms
A traditional CPU computes by forcing a system into a precise state. Logic gates, clock edges, memory cells: all of it exists to make the machine deterministic.
A thermodynamic computer does controlled sampling. Its hardware naturally evolves through a probability distribution, and the computation is encoded in that distribution. If your workload is probabilistic, sampling is the job, not a side effect.
A 2025 Nature Communications paper demonstrated this with an 8-cell stochastic processing unit on a PCB. Its authors ran Gaussian sampling, matrix inversion, uncertainty quantification, and Gaussian process regression. The paper argues that future thermodynamic computers may outperform digital systems in speed or energy efficiency once the hardware scales.
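The matrix-inversion demo can be mimicked in plain software. The sketch below (NumPy, not the paper's hardware) simulates overdamped Langevin dynamics whose stationary distribution is N(0, A⁻¹); the trajectory's empirical covariance then estimates the inverse. The specific matrix, step size, and step count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric positive-definite matrix to invert: a toy stand-in for the
# coupling matrix programmed into a stochastic processing unit.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Overdamped Langevin dynamics with potential U(x) = 0.5 * x^T A x.
# At equilibrium the state is distributed as N(0, A^{-1}), so the
# empirical covariance of the trajectory approximates the inverse.
dt, n_steps, burn_in = 0.01, 300_000, 10_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    noise = rng.standard_normal(2)
    x = x - A @ x * dt + np.sqrt(2 * dt) * noise
    samples[t] = x

A_inv_est = np.cov(samples[burn_in:].T)  # sampled estimate of A^{-1}
print(np.round(A_inv_est, 2))
print(np.round(np.linalg.inv(A), 2))    # exact inverse, for comparison
```

The point of the analogy: the answer is not computed step by step, it is read off a noisy physical process that relaxes to the right distribution. Hardware does the loop for free; here the CPU has to emulate it.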
The physics in one table
| Concept | What it means |
|---|---|
| Landauer’s Principle | Erasing one bit of information has a minimum energy cost: kB·T·ln 2. Every logically irreversible operation dissipates at least that much |
| Classical computing | Spends energy to keep state stable |
| Quantum computing | Spends engineering effort to suppress noise and preserve coherence |
| Thermodynamic computing | Builds computation directly out of stochastic dynamics. Noise is the medium, not the enemy |
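For scale, the Landauer bound at room temperature works out to a few zeptojoules per bit, a quick check:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0           # room temperature, K

# Minimum energy to erase one bit of information (Landauer's limit).
landauer_limit = k_B * T * math.log(2)
print(f"{landauer_limit:.3e} J per bit")  # 2.871e-21 J per bit
```

Real logic today dissipates many orders of magnitude more than this per operation, which is exactly the headroom the thermodynamic-computing argument is pointing at.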
Berkeley Lab (2026) puts it bluntly: thermodynamic computing is “noise-powered.” Training is expensive (their work required 96 GPUs on Perlmutter), but inference on the trained physical hardware can be very low energy.
The startups
| Company | Founded | Funding | What they build | Source |
|---|---|---|---|---|
| Extropic | ~2023 | Undisclosed | Thermodynamic sampling unit (TSU). Uses thermal fluctuations for probabilistic AI workloads. Co-founded by Guillaume Verdon (ex-Google Quantum AI) | WIRED, extropic.ai |
| Normal Computing | 2022 | $35M seed + $50M (Samsung Catalyst) | CN101 chip (taped out Aug 2025), targets multimodal diffusion GenAI inference. Founded by Google Brain/X engineers | Fortune, normalcomputing.com |
Normal Computing’s CN101 tape-out moves thermodynamic computing from whiteboard theory onto a real silicon schedule.
Computing paradigms compared
| Paradigm | Core resource | Best fit | Main constraint | Status |
|---|---|---|---|---|
| Classical digital | Deterministic state, clocked logic | General-purpose, mature stacks | Energy, data movement, heat | Dominant, increasingly power-bound |
| Quantum | Coherent quantum states | Chemistry, optimization, cryptography | Decoherence, cryogenics, error correction | Narrow, operationally hard |
| Neuromorphic | Event-driven, brain-inspired | Edge inference, sensory, sparse workloads | Programming model, tooling | Useful in niches |
| Thermodynamic | Controlled stochastic dynamics | Probabilistic inference, generative AI, uncertainty | Equilibration time, scalability, training | Young, real, far from broad deployment |
What changes if inference gets 10-100x cheaper
| Effect | Why it matters |
|---|---|
| More models move from batch to always-on | Continuous evaluation, richer retrieval, more frequent re-ranking become economical |
| Uncertainty becomes practical | If hardware natively samples distributions, probabilistic outputs become cheap |
| Edge and local deployments improve | Lower power means less cooling, smaller hardware, more privacy |
| Model architectures may shift | Bayesian methods and generative sampling pipelines become viable again |
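To make the "uncertainty becomes practical" row concrete, here is a minimal software sketch of what cheap native sampling buys you: instead of one point prediction, you draw many posterior samples and report a credible interval. This toy Bayesian linear regression with a conjugate Gaussian posterior is illustrative only; the numbers and model are not from any source above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = 2*x + noise. We want a prediction WITH uncertainty.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)

# Conjugate posterior over the single weight w (known noise variance,
# zero-mean Gaussian prior): standard Bayesian linear regression.
sigma2, prior_var = 0.1**2, 10.0
post_var = 1.0 / (x @ x / sigma2 + 1.0 / prior_var)
post_mean = post_var * (x @ y) / sigma2

# "Sampling is the job": draw posterior weights, get a predictive band
# at a query point instead of a single number.
w_samples = rng.normal(post_mean, np.sqrt(post_var), 5000)
preds = w_samples * 0.5  # predictions at x = 0.5
lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"prediction at x=0.5: {preds.mean():.2f} [{lo:.2f}, {hi:.2f}]")
```

On a digital chip those 5,000 draws are 5,000 multiplies; on hardware that natively relaxes through the posterior distribution, the draws are the physics. That is the economic shift the table describes.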
Realistic timeline
| Period | What to expect |
|---|---|
| 2025-2027 | Prototypes, developer demos, narrow pilots |
| 2027-2029 | First commercial niche deployments if silicon works and tooling improves |
| 2030+ | Broader use in probabilistic inference if economics prove out |
The Nature Communications prototype is 8 cells on a PCB. The paper acknowledges scalability limits, notably its reliance on inductors and transformers, which are hard to integrate on-chip. Startup roadmaps are more aggressive, but roadmaps are not silicon.
Honest take
Thermodynamic computing is not the next GPU. It is a distinct attempt to answer a different question: what if thermal noise is not a flaw but part of the compute fabric?
That question matters because AI is increasingly a power and cost problem, not just an algorithm problem. Too early to invest. Too important to ignore.