Monday, March 23, 2009


What's wrong with Natural Language Processing? Part 2. Static, Specific, High-level, Not-evolving...

By Todor Arnaudov

Independent Researcher - Twenkid Research
Independent Filmmaker - Twenkid Studio
ASIC Engineer - (as of March 2009)

MS Software Engineering, Plovdiv University, 2008
BS Computer Science, Plovdiv University, 2007
Internship in RIILP, University of Wolverhampton, 2007



What's wrong with Natural Language Processing?
Part 2. Static, Specific, High-level and many more!



In brief: Static, Specific, High-level, "Short-chain of intelligent operations", Lack of evolution of the systems, Not enough experimental approach and research for new paradigms

Part I: http://artificial-mind.blogspot.com/2009/02/whats-wrong-with-natural-language.html


Overall, I'm disappointed by the state of the art in NLP. I think the approaches are too shallow, too obvious, too static... And much more...


[Critics] Who the hell are you to be disappointed by "the state-of-the-art"? Who cares? When did you finish school?!


And I think that, after taking into account all of the "wrong" parts, it is not surprising that progress is slow.

We are all walking on almost the same road.

The road does lead somewhere, but I think not to where I and others are aiming. OK, call us "dreamers" for now.

[Critics] Crazy mad "scientists"!

If the approach doesn't change, I can't see how the "normal" researchers would get there. The same stuff, which does not evolve, is being done over and over again...

...

To begin with, let me note that I believe any research or research direction, including mainstream NLP research, is supposed to be a mirror, indication, derivation, reduction, output or whatever of the operation of mind. This implies that there is *something right* in the paradigm: it maps some aspects of the researchers' minds and is supposed to solve some problems. Science proves this "scientifically" - OK, no doubt that what I call "mainstream NLP" does solve many problems...

However, one of the basic mistakes I see is that the abstractions of these problems are at such a high level and so dispersed that it is impossible to "ignite" the engine and let it run on its own.
Put differently, it is not an engine but just a tool: a tool for another mind to "push buttons" on and see the output, not an engine or an organism to be born and to develop.


*** What mainstream NLP is doing wrongly, in my opinion ***


-- Trying to reverse-engineer language starting from words, text, and very abstract linguistic constructs that are not based on "good physics".

[Critics] What the hell is "good physics"?

"Good physics" is a basis that allows you to build an "engine", "ignite" it and make it running on its own. :) (Arnaudov, 2009)

I believe text is one of the reductions of the operation of mind, which is dynamic: a simulation, driven by multiple inputs, of multiple virtual universes at different levels of detail/abstraction. Some of them are at a very low level (say sets of images, sets of "video", sets of relations between sensory inputs). Words are pointers to very low-level models in mind.

Language, viewed as a bunch of static text, cannot contain enough information to rebuild intelligence back at the low level. The low level of mind is massively reduced and "cleaned" when converting to text; the mind reconstructs the missing part using its rich internal models.

-- The focus of the models is the output - the models are based too closely on the text itself and on structures derived from that text and from the output words.

Again - too obvious and cheap. Text is a reduction of mind in action. Language is not just a flat bunch of words with tags and a boring set of numbers for distribution and frequency.

Mind operates with images, relations and the dynamics of the virtual universes/systems that it simulates, and then reduces the representation of this simulation into text, which is actually a system of pointers to the items and rules representing the real structures in mind. The structures in mind are dynamic; they are not a 1-billion-word corpus.

[Critics:] And how are those "dynamic models" supposed to be modeled? You stupid theorist! You don't define the concepts you use! You're doing nothing!

I did define some of them, but years ago, in pieces, and only in Bulgarian so far.
Anyway, this reminds me of a "practical" AI professor of mine I spoke with a year ago. He didn't know the difference between McCarthy and Minsky (who cares anyway?) and hadn't even heard of Numenta or other advanced research directions, such as simulation of neocortical columns...

I am a theorist because I've been busy doing other things for a living besides theorizing. In fact, I haven't really been theorizing since my teenage years. I'm tired of being only a theorist, so be prepared!


-- Lack of freedom in researchers' imagination, and lack of will to test more imaginative, complex, growing and dynamic models than the obvious flat static relations between "scientifically, linguistically proven" structures.

Researchers are walking on the same old "paved" road. Freedom? Imagination? Citations rule this world - not freedom of imagination.

You want to do a radically new experiment? Get lost! If your research is not based on your supervisor's, on the best-known researcher in the field so far, etc., then you're not a scientist and your research is not scientific.

Young researchers are trained that way. Fine - knowledge, history, respect, methodology, etc...
That's good. However, when they become experienced researchers, they cite their own papers, which cite the previous ones, which are accepted as being in the right direction.

Of course, revolutionary research is hard to do under such conditions.

[Critics:] What revolutionary research are you talking about? You stupid dreamer! Come down and step on Earth! Learn the real NLP! Join the mainstream and you will be forgiven!

Who told you that I don't learn it? Thanks. I may take this option, but let me first try to do it differently.

-- Systems are not general; they are created to solve specific abstract problems, defined in terms of words or other very abstract concepts. That's like dealing with the symptoms, not with the cause of the "disease".

-- Models are not only specific, but static.

Machine learning, Naive Bayes, etc. - they seem to be models in development. But what are they actually learning?

Probabilities between "symbols" inside some set.

That's fine, but what is done with those probabilities between symbols later?
What do those models want to do with these probabilities later?
Can they want to do, and do, anything at all?

This is too flat. Words are pointless without doing something else with them - humans use words to make somebody do, imagine or feel something.

The purpose of "probabilities" in real natural language is to cause something other than words.
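To make the point concrete, here is a minimal toy sketch - my own, not any particular system's code - of what such a "flat" statistical model actually learns: conditional probabilities between neighbouring tokens, and nothing about what to do with them afterwards.

from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count P(next | current) from a flat stream of tokens."""
    counts = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    # Turn raw counts into conditional probabilities.
    return {
        w: {nxt: c / sum(following.values()) for nxt, c in following.items()}
        for w, following in counts.items()
    }

model = train_bigrams("the cat sat on the mat".split())
print(model["the"])   # {'cat': 0.5, 'mat': 0.5} - numbers about symbols, nothing more

The output is just a table of numbers over symbols; there is no layer that wants to do anything with it.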

-- Lack of will and intentions in models. Lack of effectors. Lack of general feedback loops for self-improvement.

[Critics] Will and intentions? "Desire is irrelevant. They are machines!" And Computational Linguistics is not exactly Artificial Intelligence! Don't mix the fields!

Mind needs will and effectors. Otherwise it is not a mind, but a mere number cruncher. And a pure number-cruncher architecture would hardly have the capabilities of a mind.
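To be concrete about what I mean by will, effectors and a feedback loop, here is a deliberately tiny sketch - all the names and the update rule are hypothetical, invented purely for illustration: the system holds a goal, acts on its environment, and revises its own internal model from the feedback it gets back.

import random

class TinyAgent:
    def __init__(self, goal):
        self.goal = goal      # "will": an internal target, not an external button
        self.policy = {}      # internal model the agent grows: observation -> action

    def act(self, observation, actions):
        # Effector: emit an action; explore while nothing is known yet.
        return self.policy.get(observation, random.choice(actions))

    def learn(self, observation, action, reward):
        # Feedback loop: keep whatever moved the system toward its goal.
        if reward > 0:
            self.policy[observation] = action

agent = TinyAgent(goal="be understood")
for _ in range(50):
    obs = "greeting"                               # stand-in for sensory input
    action = agent.act(obs, ["reply", "ignore"])
    reward = 1 if action == "reply" else 0         # stand-in for environment feedback
    agent.learn(obs, action, reward)

print(agent.policy)   # after a few trials: {'greeting': 'reply'}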

[Critics] Oh... Intelligent Agents. Bravo! You reinvented the wheel!

Thanks! You're so sweet!

Here we are - another weak part of mainstream research.

All that dividing of everything, instead of integration.

This division of everything is connected with the tendency of mainstream researchers to solve specific, dispersed, abstract problems rather than to search for a solution to general problems - one which can solve many specific problems in an elegant way. I suggest you check out Boris Kazachenko's site.

What are the pieces and the mechanisms that can build intelligence up? The general mechanisms, and the system evolved from them, will be capable of solving anaphora resolution, word-sense disambiguation, multiword-expression recognition and whatever else...

Let's search for an engine, not for tools.



-- Lack of continuous development and accumulation of experience. Lack of evolution.

Of course. The models are so hand-crafted and specific - like tricks. Meet some of my unfortunate disappointments in Computational Creativity:

MEXICA: A Computer Model of Creativity in Writing - "Creativity" Disappointment again

Faults in Turing Test and Lovelace Test. Introduction of Educational Test. (Arnaudov, 2007; suggestion of educational test and analysis of works by Bringsjord, S., Ferrucci, D)


These systems (MEXICA and BRUTUS.1) may seem very good at first sight, but once you look under the hood, you will see how much they rely on word-by-word direction and how weak they are at creative generation of text.

These systems really are not "computationally creative"; this is implied by the simplicity of the models.

A nice model grows on its own by communicating with an intelligent environment. You shouldn't be able to understand it in detail after it has grown. If you can understand and follow the details, your model is too simple, too "young", or both.

If you have to code everything line by line and direct it... If you can predict everything by hand or in an obvious way... Sorry, but this is - at the very least - very boring!

[Critics] Theorist!

Thank you! :))


-- Each following generation of researchers bases its work on the work of the previous ones.

Again... Sure, this is science. It should be like that. Of course the state of the art should be known, and one should use the knowledge accumulated in the past.

However, I think the effort spent on this should be dosed.

Instead of imagining and testing new approaches, typical NLP researchers spend most of their time studying bibles full of models which have been proven to lead to very painful and slow progress.

Or the bibles consist of solutions which researchers are supposed to implement.


Or researchers spend a long, long time building hand-crafted tools and databases which cannot evolve on their own later on.

The same path for so many years...


[Critics] Slow progress? Parsing, "Marsing", Syntax 45.4%, 67.4%, POS-Tagging: 96.4%, ...

So...? This progress doesn't lead to intelligent machines.
Those numbers do not map to genuine general intelligence, but to the production of tools.

Hand-crafted tricks with text... If you call this "Natural Language Processing" - OK, it's great.
It is useful to a certain degree and for a particular class of problems.

Yes, mainstream NLP at the moment:

- It is useful.
- It solves some abstract, specific problems by heuristics.
- It works to some degree for "intelligent" tasks, because of course language does map mind.

However, the mainstream still does not lead to a chain of intelligent operations; there are no loops and no cumulative development.


-- The length of the chain of inter-related intelligent operations in NLP today is very short. This is related to the lack of will and general goals in the systems. These systems are "push-the-button-and-fetch-the-result".

-- Swallowing a huge corpus of a billion words or so and computing statistical dependencies between tokens is not the way mind works (a toy contrast is sketched after the list below).

!!! Mind learns step by step, modeling simpler constructs/situations/dynamics/models before reaching more complex ones.
!!! Temporal relations of inputs with different complexity are important.
!!! Mind usually uses many sensory inputs while learning. Very important.
!!! Mind has will, uses feedback and can actively and evolutionarily test and improve the correctness and effectiveness of its operation, including the natural-language-related part.
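
Here is the promised toy contrast - again under my own assumptions, with made-up names, and not a real NLP system - only to illustrate how experience accumulates differently in a one-shot corpus swallow versus step-by-step, prerequisite-driven learning.

from collections import Counter

def batch_swallow(corpus_tokens):
    # One pass over a huge flat corpus: token frequencies, and that's it.
    return Counter(corpus_tokens)

def developmental_learning(lessons):
    """lessons are ordered from simple to complex; each builds on earlier ones."""
    knowledge = {}
    for lesson in lessons:
        # A lesson is only absorbed if everything it depends on is already known.
        if all(req in knowledge for req in lesson["requires"]):
            knowledge[lesson["concept"]] = list(lesson["requires"])
    return knowledge

print(batch_swallow("the cat sat on the mat".split()))

lessons = [
    {"concept": "object", "requires": []},
    {"concept": "motion", "requires": ["object"]},
    {"concept": "sentence about motion", "requires": ["object", "motion"]},
]
print(developmental_learning(lessons))

In the first case the result is a frequency table; in the second, each new piece of knowledge is anchored to what was learned before it.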


I suggest:

1. A holistic approach - the goal is building an operational mind with a long chain of intelligent operations, not completing a table with values 94.55%, 96.5%, 90.4% and a long list of citations at the end of a paper.

2. The system must have will and effectors and must evolve. And by saying "evolve", I am not talking about "genetic algorithms"; I'm talking about increasing complexity by fetching "complexity" from the environment and pushing it into the system (a toy illustration follows after these suggestions).

In other words:

Methodology for building very complex systems:

-- Don't do everything by hand; design something which is capable of designing parts of itself on its own.

3. If doing reverse engineering - let it be reverse engineering of the beginning of mind development and of the evolution of mind, not reverse engineering of text.

4. Straight Engineering. Experimental engineering. Experimenting with designs of systems which evolve and fetch complexity.
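
As the toy illustration promised in suggestion 2 - everything here is hypothetical and trivially simple, only the shape of the idea matters: when the current parts fail to cover an input, the system builds a new part for itself from that input, instead of a human hard-coding it in advance.

class GrowingSystem:
    def __init__(self):
        self.parts = []            # recognizers the system has built for itself

    def perceive(self, pattern):
        if pattern in self.parts:
            return "recognized"
        # "Fetch complexity from the environment and push it into the system":
        # the unexplained pattern itself becomes a new internal part.
        self.parts.append(pattern)
        return "new part built"

system = GrowingSystem()
for p in ["ab", "cd", "ab"]:
    print(p, "->", system.perceive(p))
# ab -> new part built, cd -> new part built, ab -> recognized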


[Final Critics] Who the hell are you, crazy ignorant stupid kid? "50 years of research of the brightest, talented, etc!" And you think you will change the world! Crazy!

If you are walking the wrong way, you can't reach the right place, even if you are among "the brightest". They couldn't find the right way.

I think one of the important issues with NLP research is that it has been lacking people with the appropriate combination of talents, mindset and personality to take a path different from the 50-year-old one.

It is not easy to state: "I think this is a wrong approach, let's find another one!", especially if you are young.

Most researchers accept: "This is correct, because - cite prof. A, prof. B... They are from University C, which has the most publications in journals D, E and F, which are... (oops, there are no Nobel prizes in NLP).

Anyway - therefore this is the best, because it is cited there and scores 89.95% on this measure, which is accepted by... Also, the paper reports 94.34% on the test of 'interrelated multipart tagging of coverage structures' etc., so this is real!" And so on.

Or they just want to have their PhD now, and the easiest and fastest way is to fetch a topic from the mainstream and do it the way it is done - such topics are what in Bulgarian is called "disertabilni": acceptable for a PhD. But the mainstream is supposed to be behind the cutting edge.


So I'll say it again:

The reason why so many researchers are doing the same research and progressing so slowly is that they assume that others with higher status are right, and base their "original" research too much on theirs. They do not imagine wildly enough.

The same trivial, unoriginal, not really inventive research, dealing with the old, obvious parameters and items which are supposed to be "the right ones"...


Conclusion: The paradigm of NLP is wrong.


THE END


To be continued...


Best Regards
Todor Arnaudov


Suggested reading (google): Boris Kazachenko, Jeff Hawkins, Todor Arnaudov (български - http://eim.hit.bg/razum)
