Thursday, April 18, 2019

Collection of Best Papers in Computer Science Conferences since 1996

An interesting historical resource, and one for getting general insights on the trends - good job by the authors of the list. I notice that it's missing the AGI conference, though, if it counts as CS.

https://jeffhuang.com/best_paper_awards.html

Thanks to the source: https://www.facebook.com/groups/MontrealAI/permalink/631419193986583/

Sunday, April 14, 2019

Neural Networks are Also Symbolic - Conceptual Confusions in ANN-Symbolic Terminology

Todor's remark to:

G.H. - On the Nature of Intelligence, at the Creative Destruction Lab
Published on 1.11.2018.
This talk is from the Creative Destruction Lab's fourth annual conference, "Machine Learning and the Market for Intelligence", hosted at the University of Toronto's Rotman School of Management on October 23, 2018.
https://youtu.be/MhvfhKnEIqM

Hinton's message that there should be processing at different time scales is correct, but already known and obvious. In my "school" it's about working at different resolutions of perception and causation-control; it is also present in Hawkins's work, in Friston's, and in Kazachenko's CogAlg.

I'd challenge the introduction in a funny way, though, although I realize that the context is in fact a silly pitching session, perhaps with a clear purpose: note the title "Market for Intelligence" and "Management". The message is "don't fund symbolic methods, fund mine".

The division between "symbolic" and "vector" seems sharp, but semantically it's defined on confused grounds and confused concepts.


With "symbolic" they apparently mean some kind of "simple" ("too obvious" or "fragile") logical programming like "Prolog" or something like that and something with some kind of "short code" or "obvious" and "simple" relations, lacking hidden layers, or shallow etc. that "doesn't work".

While with "non-symbolic" they address something with "a lot" of numerical parameters and calculations which are not obvious and also, likely, are not understood - by the designers as well. Ad-hoc mess for example, and the job of  "understanding" is done by the data themselves and the algorithm.

That doesn't make them "not symbolic", though - even in that messy state, they are.

Let's investigate a supervised TensorFlow convolutional NN, or one in whatever other high-performance library.

The user TensorFlow code is developed in Python (a symbolic formal language).

The core is developed in C/C++ and CUDA for the GPU (also C/C++) - pretty symbolic, abstract, and considered "hard".

Data is represented as numbers (sorry, they are also symbols; reality is turned into numbers by machines and by our mind).

The final classification layer consists of a set of artificial labels - symbols - which lack internal conceptual structure, except for the users - humans - who at that level operate with "symbols": abstract classes.

The mathematical representations of the NN are of course also "symbolic". Automatic differentiation, gradient descent, the dot product - these are "symbolic"; they rely on a "symbolic" abstract language, namely mathematical symbols, to express them (at the level of representation of the developers).

Any developed intelligence known to us eventually reaches symbolic hierarchical representations, not just a mess of numbers. Some kind of classification, ranking and digitized/discrete sampling is required to produce distinct "patterns" and to take definite decisions.

The NNs are actually also not just a mess of numbers - there's a strict structure determining which filter is computed at which layer, what is multiplied by what, etc.
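To make the point concrete, here is a minimal sketch (assuming TensorFlow 2.x / Keras; my own illustration, not code from the talk) where every ingredient of a "non-symbolic" CNN is written in symbols - Python identifiers, named layers, shapes, and integer class labels:

```python
import tensorflow as tf

# A tiny "non-symbolic" network, expressed entirely in symbols:
# named layers, explicit shapes, and integer class labels.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 artificial labels - "symbols"
])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
model.summary()  # prints the strict, named, layer-by-layer structure
```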

"Vectors" are mentioned here. Reality and brain is not a vector. We could represent images, models of reality, using our abstractions of vectors, matrices etc., and if we put them in appropriate machinery etc., it could then reproduce some image/appearance/behavior etc. of reality.

However, the brain and neurons are not vectors.

Also, when talking about "symbols", let's first define them precisely.

Not only the simplest classical/Boolean logic of "IF A THEN B" is "symbolic"...

What is not "symbolic" in Neural Networks is the raw input, such as images, while the input to "classical" symbolic AI algorithms like for logical inference in PROLOG or simple NLP using "manual" rules is "symbolic" - text, not representing full images with dense spatio-temporal correlations etc.

This, however, doesn't imply that the input can't produce "symbolic" incremental intermediate patterns by clustering etc. (where "symbolic" means, say, an address of an element among a class of possible higher-level patterns within a given sensory space, as in classification - e.g. recognition of simple geometric figures such as a small blob, line, angle, triangle, square, rectangle, etc.).
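As a minimal sketch of that idea (assuming NumPy and scikit-learn; illustrative, not any particular algorithm from the talk), clustering raw patches yields discrete cluster IDs - "addresses" among a set of learned patterns:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patches = rng.random((1000, 25))      # 1000 flattened 5x5 "raw" patches (stand-in data)
kmeans = KMeans(n_clusters=16, n_init=10).fit(patches)

symbol_ids = kmeans.predict(patches[:5])  # discrete IDs - one "symbol" per patch
print(symbol_ids)                         # each ID addresses a higher-level pattern
```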


* Other more suggestive distinctions

- Sensori-motor grounded vs ungrounded cognition/processing/generalization.
- Embodied cognition vs purely abstract ungrounded cognition, etc.
- Distributed representation vs fragile, highly localized dictionary representation.

"Connectionism" is popular, but a "symbolic" (a more interpretable one) can be based on "connections", traversing graphs,  calculations over "layers" etc. and is supposed to be like that - different types of "deep learning".

The introduction to Boris Kazachenko's AGI Cognitive Algorithm emphasizes that the algorithm is "sub-statistical", a non-neuromorphic, comparison-first deep learning, and that it should start from raw sensory data - symbolic data should come next. However, this is again about the input data.

Its code forms hierarchical patterns with definite traversable and meaningful structures - patterns with definite variables which refer to concepts such as match, gradient, angle, difference, overlap, redundancy, predictive value, deviation from template, etc., relative to the raw input or to lower- or higher-level patterns. To me these are "symbols" as well; thus the algorithm is symbolic (as any coded algorithm is), while its input is sub-symbolic, as required for a sensori-motor grounded AGI algorithm.
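A hypothetical illustration (my own, not actual CogAlg code) of how such a pattern, with named and traversable variables, is itself a symbolic structure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pattern:
    match: float             # match to template / input
    difference: float        # difference from template
    gradient: float
    angle: float
    predictive_value: float
    sub_patterns: List["Pattern"] = field(default_factory=list)  # lower-level patterns

# Traversing and comparing such structures is symbol manipulation.
p = Pattern(match=0.8, difference=0.2, gradient=1.5, angle=45.0, predictive_value=0.6)
```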

See also XAI - explainable, interpretable AI - which aims at making NNs "more symbolic" and at bridging the two. The Swiss startup DeepCode explains its success by the combination of "non-symbolic" deep learning with programming-language technologies for analysis, such as parsing - i.e. clearly "symbolic" structures.


Saturday, April 13, 2019

DeepCode's Martin Vechev's recent interview

An interview from March 2019 on code synthesis, automatic code reviews, suggestions for improvement, etc., and the product they already offer:

https://sifted.eu/articles/ai-is-coming-for-your-coding-job/

Demo for TensorFlow suggestions

As for Vechev's claims at the end about what is hard to automate (perhaps in order not to offend developers too much), and the explanations - in another related article in French, "Computer programmers are approaching their end" (https://www.lesechos.fr/tech-medias/intelligence-artificielle/informatique-les-codeurs-programment-ils-leur-fin-239772) - that sophisticated software such as word processors, 25 or 45 million lines of code, is not supposed to be coded automatically:

I challenge some of these claims about the difficulty of automation: it could be done with focus and clever meta-design and mapping to sensori-motor spaces, and it would work incrementally even without neural nets and brute-force-like search over "everything ever written".

Monday, February 25, 2019

Reinforcement Learning Study Materials

Selected resources (to be extended)

https://sites.google.com/view/deep-rl-bootcamp/lectures

The Promise of Hierarchical Reinforcement Learning

+ 15.3.2019

Saturday, February 23, 2019

Учене с подкрепление/подсилване/утвърждение - езикови бележки | Terminology Remarks on Reinforcement Learning Tutorial in Bulgarian

Great practical tutorials for beginners by S. Penkov, D. Angelov and Y. Hristov at Dev.bg:

Part I

Part II

Some terminology notes from me, ДЗБЕ/ДРУБЕ

"Транзиционна функция" - функция на преходите, таблица на преходите.

In the lectures from the course "Универсален изкуствен разум" (Universal Artificial Intelligence) I used "подсилване" (strengthening; "подкрепление", reinforcement, also works) and emphasized the origin of the term in behavioral psychology ("behaviorism").

There is also a connection to neuropsychology (the action of dopamine): rewarding behaviors are reinforced/strengthened (and "consolidated"), and are therefore subsequently executed "with higher probability". Those that bring no benefit are not reinforced.

Other literature, however, also uses "утвърждение" (affirmation).

In the linguistically close Russian it is also "подкрепление".

http://research.twenkid.com/agi/2010/Reinforcement_Learning_Anatomy_of_human_behaviour_22_4_2010.pdf

https://en.wikipedia.org/wiki/Reinforcement (see the other languages)

http://research.twenkid.com/agi/2010/Intelligence_by_Marcus_Hutter_Agent_14_5_2010.pdf

Part 2:

(...)

"Имплицитен" (implicit) - "неявен";
"Произволни" (arbitrary) - "случайни" (random)

(For numbers, that is how "random" is officially termed in Bulgarian statistics; "произволен достъп" is used for memory (random access), but there a "conscious will", a choice, is implied.)

"Семплиране", "семпъл" (sampling, sample) - "отчитане", "отчет";

"Терминално" (terminal) - "крайно", "заключително" (final, concluding);

"Еквивалентно" (equivalent) - "равнозначно", "равносилно".

I think there is an established term for a "biased" ("пристрастно") die, but I can't recall it.

A stylistic note: "Ако е," ("If it is,") doesn't sound good - I assume it is a translation of "If it is such/so"; the condition could be repeated for maximum clarity, or "Ако е така" ("If that is so") could be used, or the preceding sentence could be reworked.

Typos I noticed: "фунцкия" and "теоритично" (should be "теорЕтично").

And finally: it would be useful to have a table or a separate page with the terms and the different translation variants where there are disagreements.

Illustrations of Monte Carlo and the gradual attainment of more accurate results - less and less noise and an ever clearer image (see also the sketch after the links):

https://www.shadertoy.com/results?query=+monte+carlo

https://www.shadertoy.com/results?query=global+illumination
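The same effect in plain numbers - a minimal Python sketch (mine, unrelated to the shaders above) of a Monte Carlo estimate of pi getting less noisy as samples accumulate, just as the renders get clearer:

```python
import random

def estimate_pi(n: int) -> float:
    # Fraction of random points in the unit square that fall in the quarter circle.
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))  # the estimate converges toward 3.14159...
```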


Wednesday, February 20, 2019

Спомени за българската високотехнологична индустрия от фото архивите | Memories from the Bulgarian High-Tech Industry

Photos of the production and tuning of computers, electronics, and other machines and equipment, from the Bulgarian State Archives, from the 1950s to the 1980s.

http://www.archives.government.bg/bgphoto/004.08..pdf

Thanks to D. from the Compu computer museum.

Tuesday, February 12, 2019

On the paper: Hierarchical Active Inference: A Theory of Motivated Control and Conceptual Matches to Todor's Theory of Universe and Mind

Comment on:

From "Trends in Cognitive Scinece", vol.22, April 2018.
An opinion article 
Hierarchical Active Inference: A Theory of Motivated Control

Giovanni Pezzulo, Francesco Rigoli, Karl J.Friston
https://doi.org/10.1016/j.tics.2018.01.009

It's an excellent paper; it would be insightful and accessible for beginners in AGI and psychologists, for seeing the big picture (like "On Intelligence" etc.), and for readers who like divergent thinking and seeing mappings to real agent behavior and "macro" phenomena. Good questions, a huge set of references, mappings to brain areas and neuroscience research.

However, as for the architectural and philosophical ideas, it sounds too similar to my own "Theory of Universe and Mind", published in articles mainly in the Bulgarian avant-garde e-zine "Sacred Computer" (Свещеният сметач) between late 2001 and early 2004. Its ideas were also presented/suggested in the world's first university course in AGI, in 2010 and 2011.

Thanks to Eray Ozkural, who was familiar with Friston's work; we had an interesting discussion on Montreal.AI's FB page - see a recent post regarding his work on "Ultimate AI" and "AI Unification".

The term "active inference" sounds pretentious, it means using a model of the world in order to act, I assume in opposite to being simply reactive as in simpler RL models. However IMO that's supposed to be obvious, see below.

Theory of Universe and Mind

The terminology and content of that 2018 "opinion" paper strongly reminded me of my own teenage writings from the early 2000s: the term "control" (the cybernetics influence); the need for prediction/reward computation at different time scales/time steps; the cascaded increase of precision (resolution of control and resolution of perception); specific examples of behavior-introspective analysis and specific selection of actions, etc.

"Theory of Universe and Mind", or "my theory", started with the hierarchy and "active inference" as obvious requirements (not only to me, I believe).

Mind is a hierarchical simulator of virtual universes; it makes predictions and controls ("causes" is a better term, though) at the lowest level. The hierarchical simulations are built from the patterns in the experience. The highest levels are built of sequences of selected lower-level patterns ("instructions", correlations) which are predictive.

At the lowest level all combinations of instructions are legal - the system shouldn't hang.

However, at the higher levels only selected combinations work; not all combinations of low-level instructions are correct, which makes the search cheaper. That implies a reduction of the possible legal states, which, as far as I understand, is called "reduction of the free energy" in Friston's terms.
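A toy numerical illustration of that reduction (my own sketch with made-up numbers, not from the paper or from the theory's original texts):

```python
# With 10 primitive instructions and sequences of length 5, the unconstrained
# space has 10**5 legal states; if a higher level admits only 20 learned
# patterns composed in pairs, the search space collapses to 20**2.
primitives, length = 10, 5
unconstrained = primitives ** length        # 100000 legal low-level sequences
patterns, compositions = 20, 2
constrained = patterns ** compositions      # 400 legal higher-level sequences
print(unconstrained, constrained, unconstrained // constrained)  # 250x cheaper
```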

So the mind - the hierarchy of virtual universes - makes predictions about the future of the Universe, as perceived at the lowest-level virtual universe, and causes desired changes in the input by aiming to maximize the desired match.

Through time it aims at increasing the resolution of perception and of causality-control, while also increasing the range of prediction and causation. That's what a human does in her own personal development, and what humanity's "super mind" - the edge of science and technology - does as well.

My old writings were also explicit about predictions at different time scales, precisions and domains: for a sophisticated mind there is no single "best" objective behavioral trajectory; there are many, because there are contradictory reward domains (like not eating chocolate because it may lead to dental cavities, or eating it because it's a pleasure now). There's also a prediction horizon and uncertainty.

In the domain of reinforcement learning there are two types of reward, which I called "cognitive" and "physical". Cognitive reward is about making correct predictions - that is "curiosity", exploration, etc. - while physical reward is about having the desired input in the senses, implying a desired state, or "pleasure".

There must be accord between these domains, a sophisticated enough hierarchy, and various time-space-precision ranges; otherwise the system would fall into a vicious cycle and develop an "addiction".

In the paper, they call my cognitive reward/drive the "cold domain" (choice probability, plans, action sequences, policies) and my "physical" one the "hot domain" (homeostasis, incentive values, rewards).
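A hedged sketch of the two reward domains (my reading of my own distinction, not code from the paper): the total drive combines a "cognitive" term, rewarding accurate prediction, and a "physical" term, rewarding the desired sensory input; an unbalanced weighting is one way to model "addiction":

```python
def cognitive_reward(predicted: float, observed: float) -> float:
    return -abs(predicted - observed)   # fewer prediction errors: "curiosity" satisfied

def physical_reward(observed: float, desired: float) -> float:
    return -abs(observed - desired)     # closer to the desired sensory state: "pleasure"

def total_drive(predicted: float, observed: float, desired: float,
                w_cog: float = 0.5, w_phys: float = 0.5) -> float:
    # Accord between the domains requires balanced weights; a skewed weighting
    # (e.g. w_phys >> w_cog) models the vicious cycle of an "addiction".
    return w_cog * cognitive_reward(predicted, observed) + \
           w_phys * physical_reward(observed, desired)
```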

Etc.

...


The "Theory of Universe and Mind" works and the 2010's slides could be found in this blog, in the online archives of "Sacred Computer" (Свещеният сметач - the original texts in Bulgarian), and on twenkid.com, http://research.twenkid.com/agi/2010/ 

US Government Officially Declares AI as a Priority

In order to maintain the "economic and national security":

https://www.whitehouse.gov/articles/accelerating-americas-leadership-in-artificial-intelligence/

Saturday, February 9, 2019

Origin of the Term AGI - Artificial General Intelligence

Fellow AGI researcher and developer Peter Voss told the story in his blog in February 2017:

What is AGI
https://medium.com/intuitionmachine/what-is-agi-99cdb671c88e

"Just after year 2000" Voss and a few other researchers realized that there were hardware and scientific prerequisites to return to the original goal of AI. At the time they found about "a dozen" other researchers who were involved with research in that direction. Peter, Ben Goertzel and Shane Legg selected the term "A-General-I", which was made official in a 2007 book written together with the other authors: "Artificial General Intelligence".

https://www.springer.com/gb/book/9783540237334

(According to the info on Amazon, published Feb 2007)

I had encountered Ben Goertzel's own thoughts about that early-2000s official embarking on AGI as an idea by himself and his friends, and about the term being coined in the early 2000s; I started to use the term under the influence of his circle. I haven't read the book, though.

I had a related terminology story also from the early 2000s, which I may tell in another post.

Other interesting AGI-related articles from Peter's blog:

https://medium.com/intuitionmachine/agi-checklist-30297a4f5c1f

https://medium.com/intuitionmachine/cognitive-architectures-ea18127a4d1d