Neuromorphic computers master physics simulations

New Capabilities
By Newzino Staff

Brain-inspired chips solve equations that once required supercomputers

February 14th, 2026: Breakthrough Gains Wider Recognition

Overview

For decades, simulating the physics of airplane wings, nuclear weapons, or weather systems required warehouse-sized supercomputers consuming megawatts of power. Researchers at Sandia National Laboratories have now demonstrated that brain-inspired neuromorphic chips can solve these same equations—the partial differential equations underlying nearly all physics simulations—with a fraction of the energy.
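To make that problem class concrete, here is a minimal sketch, not Sandia's code, of how a physics PDE becomes a matrix problem: discretizing the one-dimensional Poisson equation with linear finite elements reduces it to a sparse linear system K u = b, the solve step a neuromorphic machine would take over. All names and parameter values below are illustrative.

```python
# Toy sketch: discretizing the 1-D Poisson equation -u'' = f on [0, 1]
# with u(0) = u(1) = 0 using linear finite elements. The PDE becomes
# the sparse linear system K u = b; everything here is illustrative.
import numpy as np

n = 99                       # interior mesh nodes
h = 1.0 / (n + 1)            # uniform element size

# Stiffness matrix for linear elements: tridiagonal (-1, 2, -1) / h
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

# Load vector for f(x) = pi^2 sin(pi x); the exact solution is sin(pi x)
x = np.linspace(h, 1.0 - h, n)
b = h * np.pi**2 * np.sin(np.pi * x)

u = np.linalg.solve(K, b)    # the solve step a neuromorphic chip would replace
print("max error vs. exact solution:", np.abs(u - np.sin(np.pi * x)).max())
```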

The breakthrough, published in Nature Machine Intelligence, arrives as data centers' electricity consumption threatens to double by 2026, driven largely by artificial intelligence. Neuromorphic computing, which mimics how biological neurons communicate through discrete electrical spikes rather than continuous signals, could enable complex scientific simulations on devices small enough to fit inside a drone or satellite—or scale up to replace the most energy-hungry machines in the nuclear weapons complex.
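To illustrate the spiking principle, the toy leaky integrate-and-fire neuron below (a textbook model, not Loihi's actual circuit, with parameter values invented for the example) integrates its input until a threshold crossing, emits a discrete spike, and resets. Only the spike times travel onward, which is where the energy savings come from.

```python
# Toy leaky integrate-and-fire neuron: the membrane potential v leaks
# toward an input-driven level, and only discrete threshold crossings
# ("spikes") are communicated. All parameters are illustrative.
dt, tau = 1.0, 20.0            # time step and membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0   # spike threshold and post-spike reset
drive = 1.2                    # constant input drive (above threshold)

v, spikes = 0.0, []
for t in range(200):           # simulate 200 ms
    v += (dt / tau) * (drive - v)   # leaky integration toward the drive
    if v >= v_thresh:               # threshold crossing: emit a spike...
        spikes.append(t)
        v = v_reset                 # ...and reset; only spike times are sent on

print(f"{len(spikes)} spikes, first at t = {spikes[0]} ms")
```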

Key Indicators

1.15B artificial neurons: the neuron count of Intel's Hala Point system, now operational at Sandia, roughly equivalent to an owl's brain.
99% parallelizable: the share of the NeuroFEM algorithm's computations that can be distributed across neuromorphic cores.
12 years: how long the link between cortical network models and partial differential equations went unrecognized.
2,600 W: maximum power consumption of Sandia's neuromorphic system, compared with 13,000,000 W for a typical supercomputer.
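Back-of-envelope arithmetic on the two power figures above, treating 13,000,000 W as the representative supercomputer draw, shows the roughly 5,000-fold gap:

```python
# Back-of-envelope comparison of the two power figures quoted above.
hala_point_w = 2_600           # Hala Point maximum draw (watts)
supercomputer_w = 13_000_000   # representative supercomputer draw (watts)

print(f"power ratio: {supercomputer_w / hala_point_w:.0f}x")       # 5000x
for name, w in [("Hala Point", hala_point_w), ("supercomputer", supercomputer_w)]:
    print(f"{name}: {w * 24 / 1000:,.0f} kWh per day")             # energy over 24 h
```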

People Involved

Brad Aimone
Distinguished Member of Technical Staff, Sandia National Laboratories (Leading neuromorphic computing research for national security applications)
Brad Theilman
Computational Neuroscientist, Sandia National Laboratories (Developed NeuroFEM algorithm for physics simulations)

Organizations Involved

Sandia National Laboratories
National Security Research Laboratory
Status: Operating world's largest neuromorphic systems for national security research

Department of Energy laboratory responsible for nuclear weapons engineering and national security science.

Intel Corporation
Technology Company
Status: Building world's largest neuromorphic hardware platforms

Semiconductor manufacturer developing neuromorphic Loihi processor family since 2017.

International Business Machines Corporation (IBM)
Technology Company
Status: Pioneered neuromorphic computing with TrueNorth and NorthPole chips

Technology company that developed TrueNorth neuromorphic chip under DARPA funding.

Timeline

  1. Breakthrough Gains Wider Recognition

    Publication

    Science news outlets report on Sandia's demonstration that neuromorphic computers can perform physics simulations previously requiring energy-intensive supercomputers.

  2. NeuroFEM Paper Published

    Research

    Theilman and Aimone publish "Solving sparse finite element problems on neuromorphic hardware" in Nature Machine Intelligence, demonstrating that spiking neural networks can solve partial differential equations with 99% parallelizability (a toy sketch of the underlying sparse-system idea follows this timeline).

  3. Sandia Partners with SpiNNcloud

    Partnership

    Sandia announces collaboration with German startup SpiNNcloud to explore neuromorphic computing for nuclear deterrence applications.

  4. Hala Point Arrives at Sandia

    Infrastructure

    Intel delivers Hala Point neuromorphic system to Sandia—1,152 Loihi 2 chips with 1.15 billion neurons in a chassis the size of a microwave, supporting 20 petaops at 2,600 watts maximum.

  5. Sandia Team Wins International Neuromorphic Prize

    Recognition

    Brad Aimone leads Sandia team to international prize for demonstrating neuromorphic solutions across heat transfer, medical imaging, and financial problems.

  6. IBM Unveils NorthPole Chip

    Hardware

    IBM releases NorthPole neuromorphic processor optimized for 2-, 4-, and 8-bit precision, outperforming conventional architectures on image recognition at reduced energy cost.

  7. Intel Launches Loihi 2

    Hardware

    Intel releases second-generation Loihi chip with improved performance and efficiency, enabling larger-scale neuromorphic systems.

  8. Intel Releases First Loihi Chip

    Hardware

    Intel unveils Loihi neuromorphic processor with 131,072 artificial neurons and 130 million synapses, built on 14-nanometer technology.

  9. IBM Unveils TrueNorth Chip

    Hardware

    IBM releases TrueNorth neuromorphic processor with 1 million neurons, 256 million synapses, and 5.4 billion transistors—running on just 70 milliwatts of power.

  10. Balanced Excitation-Inhibition Model Published

    Research

    Neuroscientists publish a cortical network model balancing excitatory and inhibitory signals, the algorithm Sandia would link to partial differential equations 12 years later.

  11. DARPA Launches SyNAPSE Program

    Funding

    The Defense Advanced Research Projects Agency (DARPA) awards $10.8 million to IBM and HRL Laboratories to develop brain-inspired computing hardware, ultimately investing $52 million through 2014.

  12. Neuromorphic Computing Concept Emerges

    Research

    Carver Mead and Misha Mahowald at Caltech develop first silicon retina and artificial neurons, establishing neuromorphic engineering as a field.
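As noted in the NeuroFEM entry above, here is a toy illustration of why sparse finite element systems suit neuromorphic hardware. A classical Jacobi iteration, standing in for (and far simpler than) the actual spiking NeuroFEM scheme, solves the Poisson system from the Overview using only nearest-neighbor communication, the locality that lets the work spread across neuromorphic cores.

```python
# Toy illustration (NOT the NeuroFEM algorithm): solving the sparse
# Poisson system K u = b from the Overview with Jacobi iteration.
# Each unknown updates from its two mesh neighbors only, the local,
# sparse communication pattern that maps onto neuromorphic cores.
import numpy as np

n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
b = h * np.pi**2 * np.sin(np.pi * x)    # load for f = pi^2 sin(pi x)

u = np.zeros(n)
for _ in range(5000):                   # u_i <- (b_i + (u_{i-1} + u_{i+1}) / h) / (2 / h)
    up = np.pad(u, 1)                   # zero padding enforces u = 0 at both boundaries
    u = (b + (up[:-2] + up[2:]) / h) / (2.0 / h)

print("max error vs. sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```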

Scenarios

1. Neuromorphic Systems Enter Production for Scientific Computing

Discussed by: Intel research division, Department of Energy Advanced Simulation and Computing program, IEEE Spectrum

Within 3-5 years, neuromorphic systems become standard tools for specific physics simulation tasks at national laboratories. Early adoption focuses on problems well-suited to the architecture—fluid dynamics, structural mechanics, electromagnetic fields—where energy efficiency matters more than raw precision. Supercomputers remain essential for the highest-fidelity simulations, but neuromorphic chips handle exploratory runs and edge deployments.

2. Neuromorphic Supercomputer Achieves Parity with Traditional HPC

Discussed by: Sandia National Laboratories, The Register, Interesting Engineering

A decade-scale effort produces the first neuromorphic system capable of full-scale nuclear stockpile simulations, reducing the energy footprint of the weapons complex's computing infrastructure by an order of magnitude. This requires solving harder PDEs (nonlinear, coupled systems) and achieving numerical precision comparable to current supercomputers—both active research challenges.

3. Edge Deployment Transforms Autonomous Systems

Discussed by: IEEE Spectrum, Prophesee, autonomous vehicle researchers

Neuromorphic physics simulation enables autonomous drones, satellites, and vehicles to run real-time environmental models onboard rather than relying on pre-computed data or cloud connections. A drone could model airflow around obstacles; a satellite could simulate atmospheric conditions. The low power consumption makes this feasible for battery-powered systems.

4. Technical Barriers Limit Scope to Niche Applications

Discussed by: HPCwire, academic critics of neuromorphic computing

Neuromorphic systems prove excellent at specific problem classes but cannot generalize to the full range of PDEs required for comprehensive scientific simulation. Precision limitations, difficulty with nonlinear equations, and the need for specialized algorithm development for each problem type confine the technology to narrow applications while traditional computing continues to dominate.

Historical Context

DARPA SyNAPSE Program (2008-2014)

November 2008 - August 2014

What Happened

DARPA invested $52 million to develop brain-inspired computing hardware, partnering with IBM, HRL Laboratories, and universities. IBM's team built successive prototypes, culminating in the TrueNorth chip with 1 million neurons on a single die consuming just 70 milliwatts—about one ten-thousandth the power density of conventional processors.

Outcome

Short Term

TrueNorth demonstrated that neuromorphic hardware could achieve radically better energy efficiency for specific tasks like image recognition.

Long Term

The program established neuromorphic computing as a serious research field, leading Intel, Samsung, and others to launch competing efforts.

Why It's Relevant Today

Sandia's NeuroFEM breakthrough builds directly on hardware that descended from SyNAPSE—Intel's Loihi architecture emerged partly in response to TrueNorth's success.

ENIAC and the Birth of Scientific Computing (1946)

February 1946

What Happened

The Electronic Numerical Integrator and Computer (ENIAC) began operation at the University of Pennsylvania, consuming 150 kilowatts to perform calculations for hydrogen bomb design. It replaced months of human computation with hours of machine work, but required its own electrical substation.

Outcome

Short Term

ENIAC proved electronic computers could solve physics problems far faster than any alternative, launching the era of computational science.

Long Term

Scientific computing scaled up over 80 years to exascale supercomputers—but energy consumption scaled with it, creating today's data center power crisis.

Why It's Relevant Today

Neuromorphic computing represents a potential architectural break from the von Neumann paradigm that has dominated since ENIAC. If successful, it would be the first fundamental shift in how we compute physics since 1945.

GPU Computing Revolution (2006-2012)

2006-2012

What Happened

Researchers discovered that graphics processing units, designed for video games, could accelerate scientific simulations and machine learning by 10-100x compared to conventional processors. NVIDIA introduced CUDA in 2006; by 2012, GPUs dominated high-performance computing for certain workloads.

Outcome

Short Term

GPUs enabled the deep learning revolution by making neural network training practical at scale.

Long Term

GPU computing became the default architecture for AI, but inherited the energy efficiency limitations of conventional silicon—leading to today's power consumption concerns.

Why It's Relevant Today

Neuromorphic computing could be the next architectural wave after GPUs. Just as GPUs found unexpected applications beyond graphics, neuromorphic systems designed for AI inference may transform scientific computing.
