Wednesday, February 4, 2009

What's wrong with Natural Language Processing?

A short philosophical essay...

You know about Machine Translation, either rule-based or statistical; lexical databases like WordNet; statistical parsers, POS-taggers, parallel corpora with a billion words; machine learning, n-grams, Hidden Markov Models. And so on, and so on...

What's wrong with NLP?

The main issue I find in today's NLP paradigm is, I think, embedded in the mindset of the researchers in general, and in the research tradition. IMHO, researchers are usually mathematicians and rather nerdy people.

Usually researchers are mathematicians, not artists; not creative enough and not brave enough to dive into deeply imaginative directions.

Science also pushes the typical researcher not to invent too much. If he uses his imagination too much and creates "imaginary structures", he might be unable to prove their creation and existence "scientifically" enough to the other members of the sect. He would not be acknowledged, and so on.

Too nerdy, too mathematical, too obvious and directly "provable" by the raw output data.

As an example of this I would mention Statistical MT.

These distributions work to a certain degree, but this is a trick. It is not really original. They are pure mathematical parameters, which are pretty obvious.

Sorry, but isn't Statistical Machine Translation a mathematical trick?
What is the scientific basis of it?

Flat parameters, which can be derived from the distributions of words.

- Take a problem.
- Divide it into "items" you can do something with.
- Take those items and find the combinations which make a difference.
- Try the combinations in order to see what the difference is.
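The recipe above, applied to word distributions, can be made concrete with a minimal sketch: the "items" are words, the "combinations" are adjacent pairs, and the "flat parameters" are their relative frequencies. The corpus here is made up for illustration.

```python
from collections import Counter
from itertools import islice

# Toy "corpus" (invented for illustration).
corpus = "the cat sat on the mat the cat ate".split()

# Steps 1-2: the "items" are just the words.
unigrams = Counter(corpus)

# Steps 3-4: the "combinations" are adjacent pairs (bigrams);
# their relative frequencies are the flat parameters.
bigrams = Counter(zip(corpus, islice(corpus, 1, None)))

def bigram_prob(w1, w2):
    """P(w2 | w1) estimated by simple relative frequency."""
    return bigrams[(w1, w2)] / unigrams[w1]

print(bigram_prob("the", "cat"))  # "the" is followed by "cat" in 2 of 3 cases
```

Nothing here requires any structure beyond what is directly visible in the raw data, which is exactly the point being made.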

OK, research is an exhausting process anyway.

Research is exhausting anyway, but if you do not invent structures which are outside and above the obvious ones, you cannot reach very far. However, the relations and combinations which are obvious, or easily derived from the raw data without auxiliary "unreal" structures, are easier to prove and to get acknowledged in the sect; pardon, I mean in science.

This reminds me of Quantum Mechanics and the hidden variables. I don't know much about physics, but in NLP hidden variables definitely do exist.

Sorry, but in NLP there are hidden variables, outside of the text.
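Ironically, mainstream NLP already admits hidden variables in a narrow form: in a Hidden Markov Model, the observed words are generated by latent states (e.g. POS tags) that never appear in the text. A minimal forward-pass sketch, with all probabilities invented for illustration:

```python
# Minimal HMM forward pass: hidden states (here, POS tags) generate
# the observed words. All probabilities are made up for illustration.
states = ["NOUN", "VERB"]

start = {"NOUN": 0.6, "VERB": 0.4}
trans = {"NOUN": {"NOUN": 0.3, "VERB": 0.7},
         "VERB": {"NOUN": 0.8, "VERB": 0.2}}
emit  = {"NOUN": {"dogs": 0.5, "run": 0.1},
         "VERB": {"dogs": 0.1, "run": 0.6}}

def forward(words):
    """P(words), summed over all hidden state sequences."""
    alpha = {s: start[s] * emit[s].get(words[0], 0.0) for s in states}
    for w in words[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states)
                    * emit[s].get(w, 0.0)
                 for s in states}
    return sum(alpha.values())

print(forward(["dogs", "run"]))
```

The complaint in this essay is that the hidden variables of language go far beyond tag sequences: they live outside the text entirely.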

I would also mention that for any creative writer or poet, words are much more than distributions. Writing a novel is an imaginative process: you are building a world with actors, with laws, etc. Then you simulate that world and record in text what happens.

Creative writing is imagination and simulation, not a probability distribution of words.

Actually your dreams about the world you describe are much more detailed, because the mind is not based on words.

And the statistical techniques (at least those I know) are flat, because they lack imagination and will; they do not simulate worlds.

Do standard statistical techniques simulate worlds? I don't think so.

The mind needs to build a virtual world and fit the text to that virtual world, which is simulated during understanding. And I believe that the simulated world in the mind is not built of NP, VP, etc. NP, VP, etc. can be mapped to some aspects of the "simulator", or can modify that simulator in a particular way, but I don't think they are the simulator.
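A toy illustration of this distinction (the world, rules, and sentence are all invented, and the "parsing" is deliberately naive): instead of treating a sentence as a word distribution, its parts are mapped onto a small world state, and the "simulator" updates that state.

```python
# A toy "simulator": the first NP picks an actor in the world, the VP
# applies a rule that changes the world state, the second NP is the
# target. Everything here is invented for illustration.
world = {"cat": {"location": "floor"}, "mat": {"location": "floor"}}

rules = {
    "jumps on": lambda actor, target: world[actor].update(location=target),
}

def simulate(sentence):
    """Very naive parse of '<NP> <VP> <NP>': update the world, return it."""
    for verb, rule in rules.items():
        if f" {verb} " in sentence:
            actor, target = (part.strip() for part in sentence.split(verb))
            rule(actor, target)
            return world

print(simulate("cat jumps on mat"))  # the cat's location becomes "mat"
```

Here NP and VP are not the simulator; they are only handles that select and modify parts of it, which is the claim in the paragraph above.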

CON: There is no simulator; the brain is very slow, etc.

(Mostly) memory-based reconstruction is also a simulation. Those 100 or so levels of neurons in one "pass" might be quite enough, if the task is subdivided in a smart way.

Is the "simulator" in the mind built of NP, VP, etc.? I don't think so.

Rather, the mind can fit its simulator to work with these structures, if it needs or wishes to.

A proof: babies do not know about words, and at an early age, when they do know something about words, their mind is not based on words anyway. They are pretty intelligent without NP, VP, etc.; later on they fit NP and VP to something else. I guess so.

To be continued:

- Reflect on an explanation of why statistical translation gives useful results.
- Finally start building some kind of "simulators" yourself and stop just talking philosophically! :))


What's wrong with Natural Language Processing? Part 2. Static, Specific, High-level, Not-evolving...
