Thursday, January 4, 2018


The lack of operational hierarchical structure in deep learning neural networks (ANNs)

A survey paper on the issues of Deep Learning by Gary Marcus:

The author has a valuable mix of expertise in ANN development as well as in linguistics, developmental psychology, and cognitive psychology.

He makes good points about the lack of real hierarchical structure in (current/regular) DL/ANNs, stressing that they are actually "flat", even though the term "layers" gives a confusing impression of hierarchy.

"To a linguist like Noam Chomsky, the troubles Jia and Liang documented would be
unsurprising. Fundamentally, most current deep-learning based language models
represent sentences as mere sequences of words, whereas Chomsky has long argued that
language has a hierarchical structure, in which larger structures are recursively
constructed out of smaller components"

G. Marcus, p. 9

See also Jeff Hawkins's point, made since 2004 in "On Intelligence" and at Numenta; Dileep George's Vicarious; Boris Kazachenko's "Cognitive Algorithm"; the old Hierarchical Markov Models; and probably many other researchers. I made the same point in my writings from the early 2000s, where even as a teenager I realized that the human general intelligence faculty is a hierarchical simulator and predictor of virtual universes.

ANNs on their own (without being embedded in another system/organization) lack operational structure.
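The flat-vs-hierarchical contrast can be sketched in a few lines. This is a minimal illustration with a made-up sentence, not anything from the paper: a sequence model sees only the linear order of tokens, while the Chomskyan view builds larger constituents recursively out of smaller ones (here as nested tuples).

```python
# Hypothetical sketch: the same center-embedded sentence as a "flat" token
# sequence (what most sequence models consume) versus a recursive
# constituency tree (larger structures built out of smaller components).

flat = ["the", "cat", "the", "dog", "chased", "ran"]

# (S (NP the cat (RC the dog chased)) (VP ran))
tree = ("S",
        ("NP", "the", "cat",
         ("RC", "the", "dog", "chased")),
        ("VP", "ran"))

def depth(node):
    """Recursion depth of a constituency tree; a bare token has depth 0."""
    if isinstance(node, str):
        return 0
    # node[0] is the label; the children follow it
    return 1 + max(depth(child) for child in node[1:])

print(depth(tree))  # 3: S -> NP -> RC
print(len(flat))    # 6: only linear order is visible to a flat model
```

The flat list loses exactly the information the tree encodes: that "the dog chased" is a clause nested inside the subject noun phrase, which is why "ran" agrees with "the cat" and not "the dog".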

It is a good survey and discussion of the areas where DL fails, with emphasis on the lack of transfer of learning, i.e. that the networks are not generally intelligent and don't "understand" the concepts (see the "overattribution" of DeepMind's Atari player's discovery of "tunnels", p. 9).

* However, I don't like the pretentiousness in some parts of the article, which discusses trivialities and proposes alternatives with a 15+ year(?) delay and a pinch of academic glamour.

E.g. unsupervised learning (not the common boring classification of fixed images) and self-organization; incremental complexity/"self-improvement" ("Seed AI"); hierarchical operational structure; "symbol grounding", i.e. the emergence of generalizations/"symbols" and "abstract thought" from sensory processing; different levels of abstraction, including the "symbolic"; causality understanding (prediction, simulation of "virtual universes"); general/universal game playing; application of general educational tests/measures... (Since AGI is about all of that; the term "human-level (general) (artificial) intelligence" was used in the past.) Etc. Not just "pattern matching" on synthetic static tests.

The above is what AI was always supposed to be about: AGI. At least some talented teenagers and others realized this, shouted it to the world in the early 2000s, and dismissed the poisoned term "AI". Everything was called "AI" back then; it is somewhat similar today: AI is ubiquitous, yet not general, and it lacks a personal wholeness.

These suggestions and conclusions would be informative for hard-core AI-ers (the programmer-mathematician type), though; it seems the "general..." part still has a way to go as a concept in the "mainstream" developer community, with its "Narrow AI" attitudes**.


** Narrow AI: another forgotten term which, on second thought, is still relevant. Current DL is in fact "narrow AI": each network is trained for a specific class of problems ("classification") and, as the paper explains well, can't generalize concepts or transfer its knowledge to different domains.

*** I "don't like" my own pretentiousness too, but I consider mine funny and ironic, rather than serious like the paper's. :P

**** Thanks to Preslav Nakov for sharing the link!


Compare the educational test proposals with one of the first articles in this blog, from a decade ago:

Wednesday, November 14, 2007
Faults in Turing Test and Lovelace Test. Introduction of Educational Test.

I didn't explicitly define the exact kinds of tests, because they were already given in detail in the appropriate textbooks, which specify the expected skills and knowledge for each age or educational level.


The article also reminds me of the series of articles "What's wrong with Natural language processing?", which started in 2009:


Vicarious's demo video summarizing the faults of ANN reinforcement learning, and their Schema Networks:
