It's not clear how many classical calculations a single human neuron is equivalent to. There's a strong analog component along several dimensions (synaptic strength, firing frequency, spike timing), and each neuron can connect to up to 15,000 other neurons. If we assume (probably unrealistically) that the brain's neurons are fairly 'digital', we get an estimate of the human brain being equivalent to about 1 exaflop; this is the commonly cited lower bound, and it's widely disputed as being too low. TPUv4 pods currently provide approximately 9 exaflops. I don't think we're currently reaching human-level learning rates. There's also no accepted upper bound on estimates of FLOP equivalency to a human brain.
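For what it's worth, the spread in those estimates is easy to reproduce with a back-of-envelope calculation. The neuron/synapse counts and firing rates below are rough textbook figures (not from this thread), and "one floating-point op per synaptic event" is exactly the 'digital neuron' simplification mentioned above:

```python
# Back-of-envelope brain-FLOPs estimate under the "digital neuron"
# assumption: one floating-point op per synaptic event. All figures
# are rough common estimates, used here only for illustration.
NEURONS = 86e9  # ~86 billion neurons in the human brain

# Low end: ~1,000 active synapses per neuron, ~1 Hz average firing rate
low = NEURONS * 1_000 * 1        # ops/sec

# High end: 15,000 synapses per neuron, ~100 Hz burst firing
high = NEURONS * 15_000 * 100    # ops/sec

print(f"low:  {low:.2e} FLOPS ({low / 1e18:.5f} exaflops)")
print(f"high: {high:.2e} FLOPS ({high / 1e18:.2f} exaflops)")
```

The assumptions alone swing the answer by more than three orders of magnitude, which is why any single "the brain is N exaflops" figure is disputed.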
My understanding is that TEPS (traversed edges per second) were used to estimate computing for these types of operations, rather than FLOPS, since they were more useful for that specific comparison. Those metrics put brains and machines in the same order of magnitude; however, as stated before, they miss the point by quite a bit, since much of the 'computation' humans do (taste, smell, etc.) is irrelevant to producing language or solving algorithmic problems.
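That "same order of magnitude" claim can be sketched numerically. The figures below are assumptions pulled from memory (roughly matching AI Impacts' brain-TEPS estimate and a top Graph500-class supercomputer result), not numbers from this thread:

```python
import math

# Assumed figures, for illustration only:
brain_teps = 1e14     # upper-range estimate of human-brain TEPS
machine_teps = 2e13   # order of a top Graph500 supercomputer result

# "Same order of magnitude" <=> the log10 ratio is below 1
gap = abs(math.log10(brain_teps / machine_teps))
print(f"gap: {gap:.2f} orders of magnitude")
```

Under these assumptions the gap is well under one order of magnitude, consistent with the comparison above.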
For example, the cerebellum contains 50-80% of what people keep quoting here (the total number of neurons in the brain), and it is not activated much in language processing.
Wernicke's area spans just a few percent of the cortical neurons.
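The sizes involved are worth making concrete. The counts below are common estimates (86 billion total, ~69 billion in the cerebellum, ~16 billion in the cortex), and the "few percent" figure for language areas is the claim above taken at face value:

```python
# Rough neuron counts (common estimates, for illustration):
total = 86e9        # whole brain
cerebellum = 69e9   # cerebellum alone
cortex = 16e9       # cerebral cortex

print(f"cerebellum share of brain: {cerebellum / total:.0%}")
print(f"cortex share of brain:     {cortex / total:.0%}")

# Even a generous "few percent" of cortex for language areas:
language_upper = 0.05 * cortex
print(f"language-area upper bound: {language_upper:.1e} neurons")
```

So the neurons plausibly involved in language are a small fraction of the headline 86-billion figure.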
The amount of preprocessing we do by providing text is actually quite enormous, which already removes a remarkable amount of complexity from the model. So, despite the differences between biology and ANNs, what we're seeing right now isn't unreasonable.
Look, this is great thinking... I don't want to diminish that, but think of a brain like an FPGA (parallel logic), not a synchronous chip with memory and fetch-decode-execute style steps.
We do things in a massively parallel way, and that is why and how we can do things quickly and efficiently!
https://www.youtube.com/watch?v=HB5TrK7A4pI is a video recently posted to the HN front page, which was summarized as follows:
> Though we have been building and programming computing machines for about 60 years and have learned a great deal about composition and abstraction, we have just begun to scratch the surface.
> A mammalian neuron takes about ten milliseconds to respond to a stimulus. A driver can respond to a visual stimulus in a few hundred milliseconds, and decide an action, such as making a turn. So the computational depth of this behavior is only a few tens of steps. We don't know how to make such a machine, and we wouldn't know how to program it.
> The human genome -- the information required to build a human from a single, undifferentiated eukaryotic cell -- is about 1GB. The instructions to build a mammal are written in very dense code, and the program is extremely flexible. Only small patches to the human genome are required to build a cow or a dog rather than a human. Bigger patches result in a frog or a snake. We don't have any idea how to make a description of such a complex machine that is both dense and flexible.
> New design principles and new linguistic support are needed. I will address this issue and show some ideas that can perhaps get us to the next phase of engineering design.
> Gerald Sussman Massachusetts Institute of Technology
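Both quantitative claims in the quote check out with simple arithmetic. The figures below are the quote's own, plus the standard ~3.2 billion base-pair genome size (an assumption not stated in the quote):

```python
# Computational depth: reaction time / per-neuron latency
reaction_ms = 300   # "a few hundred milliseconds"
neuron_ms = 10      # mammalian neuron response time, per the quote
depth = reaction_ms // neuron_ms
print(f"computational depth: ~{depth} serial steps")  # "a few tens of steps"

# Genome size: ~3.2e9 base pairs, 2 bits each (4 possible bases)
base_pairs = 3.2e9
genome_bytes = base_pairs * 2 / 8
print(f"genome: ~{genome_bytes / 1e9:.1f} GB raw")    # "about 1GB"
```

Thirty serial steps from retina to steering wheel is the striking part: no software stack we know how to build does anything useful in a pipeline that shallow.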