Tuesday, December 18, 2018


Human-centered AI by Stanford University - 8 years after Todor's Interdisciplinary Course in AGI in Plovdiv 2010

See: https://hai.stanford.edu/

Introducing the initiative - Oct 19, 2018:

"But guiding the future of AI requires expertise far beyond engineering. In fact, the development of Human-Centered AI will draw on nearly every intellectual domain"

The world's first interdisciplinary course in AGI at Plovdiv University started in April 2010 and was proposed as an idea to my Alma Mater in December 2009.

Among the core messages of the course were the importance of interdisciplinarity/multidisciplinarity and the suggestion that research should be led by such people. I have been a proponent of that approach in my writings and discussions since my teenage years, being a "Renaissance person" myself.

See also the interview with me, published in December 2009 in the popular science magazine "Obekty"*, after I had given a lecture on the Principles of AGI to the general public at the Technical University, Sofia, for the European "Researchers' Night" festival.

          (...)
- Where should researchers' efforts be focused in order to achieve Artificial General Intelligence (AGI)?
First of all, research should be led by interdisciplinary scientists who see the big picture. You need to have a grasp of Cognitive Science, Neuroscience, Mathematics, Computer Science, Philosophy etc. Also, the creation of an AGI is not just a scientific task; it is an enormous engineering enterprise – from the beginning you should think of the global architecture and of universal low-level methods which would lead to accumulation of intelligence during the operation of the system. Neuroscience gives us some clues; the neocortex is “the star” in this field. For example, it's known that the neurons are arranged in a sort of unified modules – cortical columns. They are built of 6 layers of neurons, and different layers have some specific types of neurons. All the neurons in one column are tightly connected vertically, between layers, and process a piece of sensory information together, as a whole. All types of sensory information – visual, auditory, touch etc. – are processed by the interaction between such unified modules, which are often called “the building blocks of intelligence”.
- If you believe that it's possible for us to build an AGI, why haven't we managed to do it yet? What are the obstacles?
I believe that the biggest obstacle today is time. There are different forecasts – 10, 20, 50 years – to enhance and refine the current theoretical models before they actually run, or before computers get fast and powerful enough. I am an optimist that we can get there in less than 10 years, at least to basic models, and I'm sure that once we understand how to make it, the available computing power will be enough. One of the big obstacles in the past was perhaps the research direction – top-down instead of bottom-up – but this was inevitable due to the limited computing power. For example, Natural Language Processing is about language modeling; language is a reduced end result of many different and complex cognitive processes. NLP starts from the reduced end result and aims to get back to the cognitive processes. However, the text, the output of language, does not contain all the information that the thought which created it contains.
On the other hand, many Strong AI researchers now share the position that a “Seed AI” should be designed, that is, a system that processes the most basic sensory inputs – vision, audition etc. The Seed AI is supposed to build and rebuild ever more complex internal representations, models of the world (actually, models of its perceptions, feelings and its own desires and needs). Eventually, these models should evolve into models of its own language, or models of humans' natural language. Another shared principle is that intelligence is the ability to predict future perceptions, based on experience (you have probably heard of Bayesian Inference and Hidden Markov Models), and that the development of intelligence is the improvement of the scope and precision of its predictions. (A toy sketch of this prediction principle is given after the excerpt.)
Also, in order for the effect of evolution and self-improvement to be created, and to avoid an intractable combinatorial explosion, the predictions should be hierarchical. The predictions at an upper level are based on sequences of predictions (models) from the lower level. A similar structure is seen in living organisms – atoms, molecules, cellular organelles, cells, tissues, organs, systems, organism. Evolution and intelligence test which elements work (predict) correctly. Elements that turn out to work/to predict are fixed – they are kept in the genotype/memory and are then used as building blocks of more complex models at a higher level of the hierarchy. (A toy sketch of this chunking into building blocks also follows after the excerpt.)
         (...)

* The original interview was in Bulgarian
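
The following is a minimal toy sketch of the "prediction of future perceptions based on experience" idea from the excerpt above. It is my own illustration, not part of the original interview: a first-order Markov-style predictor that counts which percept tends to follow which and guesses the most frequent successor. The percept symbols and the input stream are made up for the example.

```python
from collections import defaultdict, Counter

class MarkovPredictor:
    """Toy 'prediction from experience': counts percept transitions
    and predicts the most frequent successor of the last percept."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # previous percept -> counts of next percepts
        self.prev = None

    def observe(self, percept):
        """Update experience with a newly perceived symbol."""
        if self.prev is not None:
            self.transitions[self.prev][percept] += 1
        self.prev = percept

    def predict(self):
        """Predict the most likely next percept, or None if there is no experience yet."""
        if self.prev is None or not self.transitions[self.prev]:
            return None
        return self.transitions[self.prev].most_common(1)[0][0]


if __name__ == "__main__":
    # Hypothetical stream of percepts: a repeating pattern with a little noise.
    stream = list("abcabcabcabxabcabc")
    model = MarkovPredictor()
    correct = total = 0
    for percept in stream:
        guess = model.predict()     # predict before perceiving
        if guess is not None:
            total += 1
            correct += (guess == percept)
        model.observe(percept)      # then learn from the actual perception
    print(f"prediction accuracy: {correct}/{total}")
```

A real Seed AI would of course learn over raw sensory streams rather than hand-fed symbols; the point is only that "experience" here is literally a table of observed transitions, and "intelligence" is measured by the scope and precision of the predictions.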
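And a second toy sketch, again my own illustration with a made-up chunking threshold, of the hierarchical point: low-level sequences that keep recurring, i.e. that keep "predicting correctly", are frozen into named building blocks, and the stream re-expressed in those blocks can be fed to the same kind of predictor one level up.

```python
from collections import Counter

def build_chunks(stream, min_count=3):
    """Freeze frequently recurring pairs into higher-level building blocks."""
    pair_counts = Counter(zip(stream, stream[1:]))
    return {pair for pair, count in pair_counts.items() if count >= min_count}

def rewrite_with_chunks(stream, chunks):
    """Re-express the low-level stream as a sequence of chunks plus leftovers."""
    out, i = [], 0
    while i < len(stream):
        if i + 1 < len(stream) and (stream[i], stream[i + 1]) in chunks:
            out.append(stream[i] + stream[i + 1])  # a fixed, named building block
            i += 2
        else:
            out.append(stream[i])
            i += 1
    return out

if __name__ == "__main__":
    low_level = list("abcabcabcabc")
    chunks = build_chunks(low_level)
    high_level = rewrite_with_chunks(low_level, chunks)
    print("building blocks kept:", chunks)
    print("higher-level sequence:", high_level)
    # The higher-level sequence can now feed the same kind of predictor again,
    # one level up the hierarchy.
```

Stacking such levels is what keeps the combinatorics tractable: each level predicts over a shorter sequence of richer units.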

As the colleagues at Stanford enumerate: their University was the place where the term AI was coined by McCarthy, where computer vision was pioneered (the Cart mobile robot; Hans Moravec), where a self-driving car won the DARPA Grand Challenge in 2005, ImageNet, [Coursera], ... They are located in the heart of Silicon Valley and employ a zillion of the best students and researchers in CS, NLP, EE, AI, Neuroscience, WhatEver.

The Plovdiv course was created practically for free with no specific funding, just a regular symbolic honorarium for the presentation.

Note also that the course was written and presented in Bulgarian.

See also:
Saturday, February 24, 2018
MIT creates a course in AGI - eight years after Todor Arnaudov at Plovdiv University

...

The paradox is not so surprising, though, since most people and the culture are made for narrow specialists, both in Academia and everywhere else. The "division of labor" and other British and US wisdoms for higher profits in the rat race.

Thanks to prof. D. Mekerov, H. Krushkov and M. Manev, who respected my versatility, and especially to M. Manev, who was in charge of accepting the proposal for the course.


PS. There are other proponents of interdisciplinary and multidisciplinary research as well. Among the popular AI commentators I recall Gary Marcus; and of course as early as Norbert Wiener, who, if I'm not mistaken, explicitly suggested that. (The German philosophers such as Kant and Schopenhauer as well...)

See my comment on a comment by Gary Marcus regarding Kurzweil's book:

Wednesday, January 23, 2013
