Some pertinent objections. I still have a few provisos, however.
The first is that this objection smuggles in a dualism of its own. It’s a bit of a philosophical point, but a neural network, digital or analogue, is still a physical system. It is also often overlooked that a neural network developed in software can just as easily be implemented in hardware, using electronic substitutes for neurons.
Secondly, some machine learning systems are already extrapolating. Agents competing with fighter pilots in simulation already come up with manoeuvres that a human pilot would use, or even entirely new ones. DeepMind’s systems supposedly used tactics not seen among human players.
Finally, some systems, such as self-driving cars and robots, are already embodied, and already integrate whole pipelines from perception to motor action.
We are a little fixated on human intelligence. Human-like AI will certainly come, if perhaps not in my remaining lifetime. An AGI that arises without forewarning, however, could be very different to a human mind. Indeed, minds not fully human may be a large part of the POINT of AI. We don’t want them getting bored while scanning for terrorist communications, or playing practical jokes, after all.