Sunday, July 7, 2019

MIT's Interdisciplinary Billion-Dollar Computing College - 9-10 Years After Todor's Interdisciplinary Program at Plovdiv University, Etc.

Comments on a recent episode of Lex Fridman's AI podcast with Jeff Hawkins of Numenta:

Conceptually, Hawkins' approach and ideas still seem to match many of the insights and directions in my "Theory of Universe and Mind" works from the early 2000s (published in "Sacred Computer" - "Свещеният сметач" - before "On Intelligence") and afterwards.

Among the shared points: not following mainstream research, which makes minimal changes that yield minimal progress on some benchmarks - progress assumed good enough by mainstream researchers and published in journals, conferences, etc. Rather, radical jumps are needed, and recently "even the godfathers of the field agreed"...

Other shared points:

- The building of deep structures;
- The play of resolution of perception and control;
- Coordinate spaces as a basis for AGI ("reference frames");
- Attention traversing different scales ("time scales" - the resolution of perception and causation within the time dimension);
- Introspection as a legitimate method for AGI research - these are my "behavintrospective" studies;
- No separate training and inference stages as in current NNs: both are supposed to be part of one process. See CogAlg.

One difference, though: he dismisses interdisciplinary research as unhelpful (although I think Numenta actually does such research). "Human-centered AI" is disliked because it suggests the study of emotions and other human traits which are not needed for AI - "let's just study the brain", etc.

IMO interdisciplinary minds see and grasp shortcuts more easily, while others could eventually find the same paths only by laborious digging and wandering through seas of empirical data and brute-force search.

~1:25 h into the podcast:

"The new steps should be orthogonal..." (no little changes, no "1.1% progress" on standard benchmarks - see Todor's "What's wrong with Natural Language Processing" series)


MIT's interdisciplinary billion-dollar computing college, opening this fall:

Well: 9-10 years after the course/research-direction program that I announced in late 2009 and presented in spring 2010 at Plovdiv University, with practically zero funding for its creator, doing it with bare hands. We could already have had practical AGI 5 years ago, given the right leadership. I believe practical AGI breakthroughs could already have arrived in the late 2000s if the interdisciplinary and radical researchers had been funded and able to focus effectively on their "craft" since the early 2000s. I can't prove it practically yet, but IMO computing power has not been the problem for at least the last 10 years, if not more (for supercomputers).
Current neural networks are very inefficient and not fine-grained. Also, AGI as a principle, as seen from my school, is a scalable and incremental "Seed AI" (Зародиш на разум - "Seed of mind"). It should develop as an AGI even with very low-resolution input, just as humans can become intelligent by our measures even with poor or no vision, and even deaf-blind. The crucial manifestation of intelligence is language acquisition, coding and decoding, and concept formation - not high-resolution vision, playing catch with dogs, etc.
Others claim they have the algorithm (NN-related) but need 1 PFLOPS for a human-level AGI (see the "Ultimate AI" post in this blog). However, it was not specified for how long the system would have to be trained. Also, if "human level" is assumed to mean the equivalent of 60+ fps vision at ~FullHD or 4K, with, say, 3D reconstruction and object recognition at such-and-such rates, then a scaled-down version should also run at correspondingly lower specs, so this is not a substantial excuse.
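The scaling argument above can be put as back-of-envelope arithmetic. A minimal sketch, under my own simplifying assumption (not part of the cited claim) that required compute scales roughly linearly with the input pixel rate; the low-resolution figures are hypothetical, only the 1 PFLOPS number comes from the text:

```python
# Illustrative only: if 1 PFLOPS suffices for "human-level" input
# (FullHD at 60 fps), a proportionally smaller input should need
# proportionally less compute, assuming roughly linear scaling.

PFLOPS = 1e15  # claimed compute requirement for human-level AGI

def pixel_rate(width, height, fps):
    """Input pixels per second for a video stream."""
    return width * height * fps

full_hd = pixel_rate(1920, 1080, 60)   # the assumed "human-level" input
low_res = pixel_rate(192, 108, 10)     # a hypothetical scaled-down input

scale = low_res / full_hd              # 1/600 of the pixel throughput
required_flops = PFLOPS * scale

print(f"scale factor: {scale:.6f}")
print(f"required compute: {required_flops:.3e} FLOPS")
```

Under this assumption the scaled-down system would need only on the order of terascale compute, which is exactly why the 1 PFLOPS figure is not, by itself, an excuse for having no working small-scale demonstration.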
