Sunday, May 23, 2010


The NLP researchers cannot understand language. Computers could. The speech recognition plateau, or: What's wrong with Natural Language Processing? Part 3

Thanks to my friend O.G.I. for sharing the link below, about the plateau in speech recognition software.

Rest in Peas: The Unrecognized Death of Speech Recognition

If you check this out (no specialized NLP training is needed to see it):

What's wrong with Natural Language Processing? Part 2. Static, Specific, High-level, Not-evolving...

And consider this simple generalisation: Language is a hierarchical redirection/abstraction/generalization/compression of sequences of [multi-modal] sensory inputs and motor outputs, and of records and predictions of both. (me)

Then you'll see what causes the plateau: why NLP, parsing, speech recognition and the rest will stay at their dead end forever unless the field changes radically.
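The "hierarchical compression of sequences" idea above can be illustrated with a toy sketch. Below is a minimal, purely illustrative Python example (my own construction, not anything from the linked article): it repeatedly replaces the most frequent adjacent pair of symbols with a new higher-level symbol, so a repetitive low-level "sensory" stream gets re-described by a small hierarchy of abstractions.

```python
from collections import Counter

def chunk_once(seq):
    """Replace the most frequent adjacent pair with a new higher-level symbol.

    Returns the rewritten sequence and the new symbol (None if nothing repeats).
    """
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq, None
    (a, b), count = pairs.most_common(1)[0]
    if count < 2:
        return seq, None  # no repeated pair, so no abstraction worth making
    new_sym = f"({a}+{b})"
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
            out.append(new_sym)  # redirect the pair to one higher-level symbol
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out, new_sym

# A repetitive low-level "sensory" stream of symbols.
seq = list("abcabcabd")
hierarchy = []
while True:
    seq, sym = chunk_once(seq)
    if sym is None:
        break
    hierarchy.append(sym)

print(hierarchy)  # the abstractions discovered, lowest level first
print(seq)        # the stream re-described at the highest level
```

This is just byte-pair-style chunking over one modality, of course; the point is only to show "hierarchy by compression of sequences" in a runnable form, not to claim it models the mind.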

I especially enjoy this one:

"...To some, these developments are no surprise. In 1986, Terry Winograd and Fernando Flores audaciously concluded that “computers cannot understand language.” In their book, Understanding Computers and Cognition, the authors argued from biology and philosophy rather than producing a proof like Einstein’s demonstration that nothing can travel faster than light...."

So silly. The same goes for any similar sentence from retired AI-niks, because what they're actually saying is this:

Computers cannot understand language [or think], because computers do exactly what they, those retired AI researchers, program them to do. Besides, machines lack free will, and, you know, Gödel incompleteness, quantum-mechanical blah-blah-blah, etc.

However, this implies that computers just execute their programmers' instructions; therefore it is not the computers that cannot understand language, it is their incapable programmers.

It is the programmers and old-fashioned AI-niks doing NLP who cannot understand language, not the computers.

NLP programs play with words in dictionaries, while the mind plays with multi-modal pre-processed raw sensory inputs. An AGI is needed to get speech recognition right; that is what should be worked on.

"...So not everyone agreed. Bill Gates described it as “a complete horseshit book” shortly after it appeared, but acknowledged that “it has to be read,” a wise amendment given the balance of evidence from the last quarter century."

Hmmm, Bill is cool!  :)
