The LocatedIn template triggers some images in my mind that might be helpful in thinking about this. I almost instantly know that this is not a syllogism, because I visualise putting one thing inside another and can immediately see that there are circumstances in which it is implicit that moving the container moves the object, and other circumstances in which the contained object can escape the container. It may be a heavy object in a wet bag; it may be a small bird in a cage with widely spaced bars.
I am not quite sure why these exceptional cases immediately spring to mind, but it seems quite clear that what I do before consciously evaluating how to solve the problem is to visualise it. I strongly suspect that this is how humans solve a great number of problems. The dichotomies of top-down versus bottom-up, or symbolic versus neural, therefore seem false to me.
It might be fruitful to think about how auto-encoders or other neural networks and classical procedural algorithms can best achieve this visualisation-and-interrogation of scenarios. One could visualise one object inside another, then visualise the object escaping and being trapped, before trying to match either scenario against a range of containers and captives to see whether the visualisation is coherent; a toy sketch of that generate-and-test loop follows.
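Here is a minimal sketch of that loop in Python. Everything in it is an assumption for illustration: the numeric attributes (gaps, strength, size, weight) stand in for the visual judgements a mind would make, and moves_with_container is a hypothetical helper, not any real knowledge-representation API.

```python
# A toy sketch of generate-and-test over LocatedIn scenarios.
# All attribute names and numbers below are illustrative assumptions.

def moves_with_container(container, captive):
    """Conjecture: LocatedIn(captive, container) implies the captive
    moves when the container moves. Try to refute it by checking
    known exception conditions."""
    if container["gaps"] > captive["size"]:
        return False  # small bird, widely spaced bars: the captive escapes
    if captive["weight"] > container["strength"]:
        return False  # heavy object, wet bag: the container fails
    return True       # no refutation found; the conjecture stands

# Illustrative "visualised" scenarios: each pairing is interrogated
# against the exception conditions above.
containers = {
    "wet bag":  {"gaps": 0.0, "strength": 2.0},
    "birdcage": {"gaps": 5.0, "strength": 50.0},
    "crate":    {"gaps": 1.0, "strength": 100.0},
}
captives = {
    "heavy object": {"size": 30.0, "weight": 20.0},
    "small bird":   {"size": 4.0,  "weight": 0.1},
}

for c_name, c in containers.items():
    for k_name, k in captives.items():
        verdict = "moves with" if moves_with_container(c, k) else "can escape"
        print(f"the {k_name} {verdict} the {c_name}")
```

The point of the sketch is only the shape of the loop: conjecture the default (the captive moves with the container), then hunt for a refutation among the exception conditions; a neural component would supply the scenarios and attribute judgements that are hard-coded here.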
It strikes me that what my mind did in the brief moment before conscious reasoning was Conjecture and Refutation. This pleases me somehow.