Interesting article. A crucial question for me is whether aggregates of existing ANNs suffice to model real neurons, and, as a corollary, whether such aggregates can self-organise out of existing training. If the answer is yes, then perhaps a smooth path still exists to human-like AGI. If not, the question is whether a different kind of AGI is still available.

It's been clear for a long time that biological neurons behave fundamentally differently from those in ANNs. One very salient difference is that neurons fire, rather than just setting an output level, "computing" by what I call pulse-frequency modulation: the signal is carried in the rate of discrete pulses rather than in a continuous value. I believe an effort is underway to build a physical model that functions the same way, in an effort to understand this kind of processing. Does this difference matter? I just don't know.
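To make the contrast concrete, here is a minimal sketch of the idea, using a leaky integrate-and-fire neuron (a standard simplified model; the parameter values below are illustrative, not drawn from any particular biological data). The point is that the "output" is a pulse rate that grows with input strength, rather than a single continuous activation level as in an ANN unit.

```python
# A minimal leaky integrate-and-fire neuron. Parameters (tau, threshold,
# time step) are illustrative assumptions, not biological measurements.
def spike_count(input_current, steps=1000, dt=1.0,
                tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Count spikes over a fixed window for a constant input current."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        # Leaky integration: the membrane potential decays toward zero
        # while the input current pushes it upward.
        v += dt * (-v / tau + input_current)
        if v >= v_thresh:      # threshold crossing: emit a pulse
            spikes += 1
            v = v_reset        # reset after each spike
    return spikes

# Stronger input -> higher pulse frequency: the value is encoded in
# the firing rate, not in a single static output level.
rates = [spike_count(i) for i in (0.06, 0.1, 0.2)]
print(rates)  # spike counts increase monotonically with input
```

An ANN unit collapses this whole process into one number per forward pass; whether that collapse loses something essential is exactly the open question.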

A mystery to me is that ANNs are so wildly successful despite being so different. Some networks used in image processing are inspired directly by patterns of connection in the retina and visual cortex, and they work well, even though their deeper assumptions about how neurons compute are false. At the same time, ANNs can take outrageous amounts of effort to train, while biological brains can often learn and make inferences from a single example.

Something is clearly wrong in current architectures, yet they continue to make progress and often outperform humans in specific areas. I suspect that the field will not reassess its fundamental assumptions while this continues to be the case, especially if it turns out that aggregates of ANNs can spontaneously organise to reflect natural neurons. Even if this is not the case, I think we are facing what Kuhn would describe as a stable paradigm, with most work focussed on solving puzzles.

A new paradigm may not force itself on the attention of ML engineers until they become truly blocked by the existing one.

Software engineer, photographer, cook, bedroom guitarist and karateka