Why can't computers understand me?
On our best behaviour - Hector J. Levesque
I'd generalize what the author is trying to say: yes, the machine does need *imagination* (sensory memories and spaces) and has to learn incrementally and play with it. That subsystem is required!* I also agree that many AI researchers, especially in the NLP community, haven't understood or accepted that, and this paper suggests there has been no common progress in that acceptance.
Some of the reasons are simply the academic and personal urge to produce papers and "results" quickly, to demonstrate that you "do something". Papers, papers, papers - who cares about real progress.
I guess some have understood that very well; the "common sense" problem has been cited for decades and is obvious. Yet in NLP they have kept insisting on pushing in a bunch of words with no relation to the real world, and then asking: "why can't it do proper word-sense disambiguation, given only a corpus?"
Also, IMO it's the researchers in NLP and related fields who don't understand language understanding, rather than the poor computers, which, as AI practitioners used to say, "do what they are told to".
Teaching language to a machine should be done incrementally, with sensory-motor feedback and interaction, like teaching a child - not in a sensory-less batch mode over a huge corpus with zero interaction and no coordinate space, just a bunch of words - because:
Todor: Natural language is a hierarchical redirection/abstraction/generalization/compression of sequences of multi-modal sensory inputs and motor outputs, and of records and predictions of both.
A mere corpus has no imagination, even if it is a terawords one. A small corpus, built through systematic interaction and mapped to a sensory-motor system, is intelligent and can explore and learn further on its own - as toddlers do.
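The idea above can be sketched in code. Below is a toy, entirely hypothetical "grounded lexicon": it learns a word's meaning incrementally, one interactive episode at a time, by averaging the sensory context in which the word was heard, instead of from a context-free corpus. All names and feature vectors here are illustrative assumptions, not any real system or API.

```python
from collections import defaultdict

class GroundedLexicon:
    """Toy sketch: each word is paired with the sensory-motor context
    (a hypothetical feature vector) in which it was heard, and the word's
    'meaning' is a running average of those episodes."""

    def __init__(self):
        self.counts = defaultdict(int)   # how many episodes per word
        self.prototypes = {}             # word -> averaged sensory vector

    def observe(self, word, sensory_vector):
        """One interactive episode: a word heard together with sensory input.
        Updates the word's prototype as an incremental running average."""
        n = self.counts[word]
        if word not in self.prototypes:
            self.prototypes[word] = list(sensory_vector)
        else:
            self.prototypes[word] = [
                (p * n + s) / (n + 1)
                for p, s in zip(self.prototypes[word], sensory_vector)
            ]
        self.counts[word] = n + 1

    def imagine(self, word):
        """Retrieve the sensory prototype grounding the word, if any -
        a crude stand-in for 'imagining' what the word refers to."""
        return self.prototypes.get(word)
```

For example, after observing "ball" in two episodes with sensory vectors `[1.0, 0.0]` and `[0.0, 1.0]`, `imagine("ball")` returns the averaged grounding `[0.5, 0.5]`, while a word never encountered in any sensory context has no grounding at all - which is the point of the contrast with a bare corpus.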
I also support the criticism of the Turing Test. Right - it is a test of fooling people, not a test of intelligence. As I myself claimed back in 2001 (see "Човекът и мислещата машина..." below), a true, honest and too smart or too quick AGI would quickly FAIL the test, because its output is too complex, too quick, too deep, and it lacks some of the required memories (it would have to lie about its childhood, etc.).
Finally, I will save some of my thoughts for myself...
Credits to Aaron for sharing the link!
* More about that when I complete my first milestones...
See also (and many others):
 Faults in Turing Test and Lovelace Test. Introduction of Educational Test (2007)
 What's wrong with Natural Language Processing? Part 2. Static, Specific, High-level, Not-evolving... (2009)
 What's wrong with Natural Language Processing? Part I (shorter) (2009)
 Part 3: The NLP Researchers cannot understand language. Computers could. Speech recognition plateau, or What's wrong with Natural Language Processing? (2010)
 Анализ на смисъла на изречение въз основа на базата знания на действаща мислеща машина. Мисли за смисъла и изкуствената мисъл [Analysis of the meaning of a sentence based on the knowledge base of an operational thinking machine. Thoughts on meaning and artificial thought] (2004)
 Човекът и мислещата машина. Анализ на възможността (...) [The Man and the Thinking Machine. An analysis of the possibility (...)] (2001)
 Embodiment is just coordinate spaces, interactivity and modalities - not a mystery (2011)