In case anyone is interested, here is Carver Mead's seminal 1990 paper on the subject [0]. It's a fairly accessible read, and it covers the power/computational efficiency trade-offs across the spectrum from natural, biological systems to manufactured analog and digital electronic systems.
Most such analyses reporting the brain to be more power-efficient than computers talk about the energy it would take to emulate brain operations in silicon. That doesn't sound like a fair comparison. How about the energy a brain would need to emulate a computer chip, say, multiplying a billion floating-point numbers?
A fair comparison must use the same neutral task, one that both machines and brains can perform. It would take some discussion to define what that task should be, since the capabilities of each still differ widely.
Likewise, some texts assume each synapse carries a memory of, say, one byte, and then claim our brain has a capacity of about 10^15 bytes. A human brain cannot actually recall all that information; recallable memory has been estimated at only about 10-30 MB (per an old book I read).
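The gap being pointed out is easy to state numerically. A minimal sketch, where the synapse count, per-synapse capacity, and recall figure are all assumptions taken from the discussion, not measurements:

```python
# Contrast the "storage capacity" estimate with the "recallable memory"
# estimate. Every figure here is an assumption from the thread above.

synapses = 1e15          # assumed synapse count (often quoted as 10^14-10^15)
bytes_per_synapse = 1    # assumed capacity per synapse

storage_bytes = synapses * bytes_per_synapse   # the ~10^15-byte claim
recallable_bytes = 30e6                        # the ~30 MB upper estimate

ratio = storage_bytes / recallable_bytes
print(f"storage estimate:    {storage_bytes:.0e} bytes")
print(f"recallable estimate: {recallable_bytes:.0e} bytes")
print(f"ratio: ~{ratio:.0e}x")
```

Under these assumptions the two estimates differ by some seven orders of magnitude, which is why the reply below argues they are measuring different things.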
You should probably read the paper again. The comparison Mead proposed is for the energy spent on the simplest fundamental computing event. In the case of a brain, that's a single neural pulse going through a synapse; in the case of a digital computer, it's the switching of a single transistor. In both cases we can calculate how many events happen per second, and we know the power consumption. The brain simply does more operations per second than the fastest computer chip, while spending less energy.
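The arithmetic behind this accounting is simple (energy per event = power / event rate); the contentious part is which numbers to plug in. A minimal sketch in Python, where every figure is an illustrative assumption rather than a measurement, and the conclusion flips depending on what you count as an "event":

```python
# Mead-style accounting: joules per fundamental computing event.
# All numbers below are order-of-magnitude placeholders for
# illustration, NOT measured values.

def energy_per_event(power_watts, events_per_second):
    """Energy spent per fundamental event, in joules."""
    return power_watts / events_per_second

brain_power = 20.0    # W, commonly cited whole-brain budget (assumed)
brain_events = 1e15   # synaptic events per second (assumed)

chip_power = 100.0    # W, a high-end processor (assumed)
chip_events = 1e17    # transistor switching events per second (assumed)

print(f"brain: {energy_per_event(brain_power, brain_events):.1e} J/event")
print(f"chip:  {energy_per_event(chip_power, chip_events):.1e} J/event")
```

Note how the whole argument hinges on the event counts: change the definition of a fundamental event and the comparison can swing either way, which is exactly the disagreement in the replies below.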
You might disagree on what constitutes a fundamental computing event, and we can discuss that, but the idea seems valid to me.
An apples-to-apples comparison should not care what the fundamental unit is, unless that unit can be argued to contribute equally to the problem being solved. The common factor needs to be the same problem given to both, measuring how much power each consumes. Alternatively, we could compare what each can do with a given amount of power, using a common output performance metric.
Don't you realize that the brain and the computer are optimized for different tasks? Should we compare them at matrix multiplication, playing chess, writing a novel, or ...?
I think this comparison assumes that we want to do whatever the brain does, the way the brain does it, because we don't know any alternative way to do it. So we try to emulate the brain (replicate neuronal operations) using transistors (or memristors, etc.).
This does not make sense for some tasks, such as matrix multiplication in FP64 precision, but it does make sense for the tasks we care about most: whatever it is that makes us intelligent (AGI). At least until we can abstract away the details of brain operation that are not important from an AGI standpoint.
Then we should hold off on making energy comparisons until we do understand. :-) Emulating the brain with machines, or vice versa, is not even an interim solution.
Alternatively, perhaps the common task could today be defined around something Deep Learning can already handle, such as visual object recognition using a pretrained model.
> brain cannot actually recall all that information
It’s not random byte-addressable memory. Neither are the big DL models. But that doesn’t mean the brain isn’t making use of the 10^15 bytes when doing the things it’s good at.
[0] https://web.stanford.edu/group/brainsinsilicon/documents/Mea...
[1] https://en.wikipedia.org/wiki/Carver_Mead