Monday, February 25, 2019


Reinforcement Learning Study Materials

Selected resources (to be extended)

The Promise of Hierarchical Reinforcement Learning

+ 15.3.2019

Saturday, February 23, 2019


Учене с подкрепление/подсилване/утвърждение - Language Notes | Terminology Remarks on a Reinforcement Learning Tutorial in Bulgarian

Wonderful practical tutorials for beginners by С. Пенков, Д. Ангелов and Й. Христов at Дев.бг:

Part I

Part II

Some terminology notes from me, ДЗБЕ/ДРУБЕ

"Транзиционна функция" (a calque of "transition function") → "функция на преходите" (function of the transitions), "таблица на преходите" (transition table).
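Whatever Bulgarian term one settles on, the transition function of a Markov Decision Process is just a mapping from (state, action) to next-state probabilities, often stored as a table. A minimal Python sketch; the toy states, actions and probabilities are my own illustration, not from the tutorial:

```python
import random

# Toy MDP transition table: (state, action) -> list of (probability, next_state).
P = {
    ("s0", "go"):   [(0.8, "s1"), (0.2, "s0")],
    ("s0", "stay"): [(1.0, "s0")],
    ("s1", "go"):   [(1.0, "s1")],
}

def transition(state, action, rng=random.random):
    """Sample the next state from the transition table."""
    r = rng()
    cum = 0.0
    for prob, nxt in P[(state, action)]:
        cum += prob
        if r < cum:
            return nxt
    return nxt  # numerical safety: fall back to the last listed state

print(transition("s0", "stay"))  # always "s0"
```

The same dictionary doubles as the "transition table" when the model is queried rather than sampled.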

In the lectures of the course "Универсален изкуствен разум" (Universal Artificial Mind) I used "подсилване" ("подкрепление" also works) and emphasized the term's origin in behavioral psychology ("behaviorism").

There is also a connection to neuropsychology (the action of dopamine): rewarding behaviors are reinforced/strengthened (and "consolidated"), so they are subsequently executed "with a higher probability". Those that bring no benefit are not reinforced.

Other literature, however, also uses "утвърждение".

In the linguistically close Russian the term is also "подкрепление". (See other languages.)

Part 2:


"Имплицитен" (implicit) → "неявен";
"Произволни" (arbitrary) → "случайни" (random)

(For numbers, "random" is rendered as "случаен" in Bulgarian statistics; "произволен достъп" is the established term for memory (random access), but there a "conscious will", a choice, is involved.)

"Семплиране", "семпъл" (sampling, sample) → "отчитане", "отчет";

"Терминално" (terminal) → "крайно", "заключително";

"Еквивалентно" (equivalent) → "равнозначно", "равносилно".

I think there is an established term for a "biased" die ("пристрастно зарче"), but I can't recall it.

A stylistic note: "Ако е," does not sound good (I assume a translation of "If it is such/so"); the condition can be repeated for maximum clarity, or "Ако е така" used, or the preceding sentence reworked.

Typos I noticed: "фунцкия" (функция) and "теоритично" (теорЕтично).

Finally: it would be useful to have a table or a separate page with the terms and the different translation variants, where there are disagreements.

Illustrations of Monte Carlo and of the gradual obtaining of more precise results: less and less noise, and a clearer image:
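The effect those images show can be reproduced numerically. A Monte Carlo estimate (here of π, a standard toy example of my choosing, not from the tutorial) gets less noisy as the number of samples grows:

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points falling
    inside the unit quarter-circle, times 4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

# More samples -> less noise, just as the rendered images sharpen.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))
```

The standard error of such an estimate shrinks as 1/√n, which is why the "image" clears up slowly but steadily.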


Wednesday, February 20, 2019


Memories from the Bulgarian High-Tech Industry from the Photo Archives

Photos of the production and tuning of computers, electronics and other machines and equipment, from the Bulgarian State Archives, from the 1950s to the 1980s.

Thanks to D. from the Compu computer museum.


Tuesday, February 12, 2019


On the paper: Hierarchical Active Inference: A Theory of Motivated Control and Conceptual Matches to Todor's Theory of Universe and Mind

Comment on:

From "Trends in Cognitive Science", vol. 22, April 2018.
An opinion article:
Hierarchical Active Inference: A Theory of Motivated Control

Giovanni Pezzulo, Francesco Rigoli, Karl J. Friston

It's an excellent paper: insightful and accessible for beginners in AGI, for psychologists, for readers who want the big picture (as in "On Intelligence") and for those who like divergent thinking and mappings to real agent behavior and "macro" phenomena. Good questions, a huge set of references, and mappings to brain areas and neuroscience research.

However, as far as the architectural and philosophical ideas go, it sounds strikingly similar to my own "Theory of Universe and Mind", published mainly in articles in the Bulgarian avant-garde e-zine "Sacred Computer" (Свещеният сметач) between late 2001 and early 2004. Its ideas were also presented/suggested in the world's first university course in AGI, in 2010 and 2011.

Thanks to Eray Ozkural, who was familiar with Friston's work; we had an interesting discussion on the Montreal.AI Facebook page. See a recent post regarding his work on "Ultimate AI" and "AI Unification".

The term "active inference" sounds pretentious; it means using a model of the world in order to act, I assume as opposed to being simply reactive, as in simpler RL models. However, IMO that should be obvious, see below.

Theory of Universe and Mind

The terminology and content of that 2018 "opinion" paper strongly reminded me of my own teenage writings from the early 2000s: the term "control" (the cybernetics influence), the need for prediction/reward computation at different time scales/time steps, the cascaded increment of precision (resolution of control and resolution of perception), specific examples of "behavior-introspective" analysis and specific selection of actions, etc.

"Theory of Universe and Mind", or "my theory", started with the hierarchy and "active inference" as obvious requirements (not only to me, I believe).

Mind is a hierarchical simulator of virtual universes; it makes predictions and controls ("causes" is a better term, though) at the lowest level. The hierarchical simulations are built from the patterns in the experience. The highest levels are built of sequences of selected predictive patterns from the lower level ("instructions", correlations).

At the lowest level all combinations of instructions are legal; the system shouldn't hang.

At the higher levels, however, only selected combinations work: not all combinations of low-level instructions are correct, which makes the search cheaper. That implies a reduction of the possible legal states, which, as far as I understand Friston's terms, is called "reduction of the free energy".
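The claim that restricting higher levels to selected combinations makes search cheaper can be illustrated with a toy count; the instruction set and the allowed pairs below are invented for the sketch:

```python
from itertools import product

instructions = ["a", "b", "c"]  # toy low-level "instructions"
# "Learned" higher-level patterns: only these adjacent pairs are allowed.
valid_pairs = {("a", "b"), ("b", "c"), ("c", "a"), ("a", "a")}

def legal(seq):
    """A sequence is legal at the higher level only if every adjacent
    pair of low-level instructions is a selected (learned) pattern."""
    return all((x, y) in valid_pairs for x, y in zip(seq, seq[1:]))

n = 6
all_seqs = list(product(instructions, repeat=n))       # every combination is "legal" at the bottom
legal_seqs = [s for s in all_seqs if legal(s)]          # only selected ones survive higher up
print(len(all_seqs), len(legal_seqs))
```

Even with this tiny alphabet the constrained set is a small fraction of the unconstrained one, and the gap widens exponentially with sequence length.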

So the mind, the hierarchy of virtual universes, makes predictions about the future in the Universe - as perceived in the lowest-level virtual universe - and causes desired changes in the input, aiming to maximize the desired match.

Through time it aims at increasing the resolution of perception and of causality-control, while also increasing the range of prediction and causation. That's what a human does in her own personal development, and what humanity's "super mind", the edge of science and technology, does as well.

My old writings were also explicit about predictions at different time scales, precisions and domains: for a sophisticated mind there's no single "best" objective behavioral trajectory; there are many, because there are contradictory reward domains (like not eating chocolate because it may lead to dental cavities, or eating it because it's a pleasure now). There's also a prediction horizon, and uncertainty.

In the domain of Reinforcement Learning, there are two types of reward, which I called "cognitive" and "physical". The cognitive one is about making correct predictions - that is "curiosity", exploration etc. - while the physical one is about having the desired input in the senses, implying a desired state, or "pleasure".
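A toy scalar formulation of these two reward domains; this is my own illustrative sketch, and the weight and functional form are assumptions, not taken from the theory or the paper:

```python
def total_reward(predicted, observed, physical_reward,
                 curiosity_weight=0.5):
    """Combine a 'cognitive' reward (credit for predicting the input
    correctly) with a 'physical' one (credit for the input being a
    desired state). A toy scalar formulation for illustration only."""
    prediction_error = abs(predicted - observed)
    cognitive = -prediction_error  # perfect prediction -> zero penalty
    return curiosity_weight * cognitive + physical_reward
```

With a perfect prediction the cognitive term vanishes and only the physical reward remains; large prediction errors drag the total down, which is what drives exploration toward better models.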

There must be accord between these domains, and a sophisticated enough hierarchy with various time-space-precision ranges; otherwise the system would fall into a vicious cycle and have an "addiction".

In the paper, they call my cognitive reward/drive the "cold domain" (choice probability, plans, action sequences, policies) and my "physical" one the "hot domain" (homeostasis, incentive values, rewards).



The "Theory of Universe and Mind" works and the 2010 slides can be found in this blog and in the online archives of "Sacred Computer" (Свещеният сметач - the original texts in Bulgarian).


US Government Officially Declares AI as a Priority


Monday, February 11, 2019


DARPA's Common Sense Reasoning Program

Bottom-up and top-down reasoning should meet in the middle.

DARPA and others seem to have realized that, as well as the importance of Developmental Psychology.


See a table with AGI/AI startups, key people and directions:


Interesting AI news digest:

Saturday, February 9, 2019


Origin of the Term AGI - Artificial General Intelligence

The fellow AGI researcher and developer Peter Voss told the story in his blog in Feb 2017:

What is AGI

"Just after year 2000," Voss and a few other researchers realized that there were hardware and scientific prerequisites for a return to the original goal of AI. At the time they found about "a dozen" other researchers involved in work in that direction. Peter, Ben Goertzel and Shane Legg selected the term "A-General-I", which was made official in the 2007 book "Artificial General Intelligence", written together with the other authors.

(According to the info on Amazon, it was published in February 2007.)

I had also encountered Ben Goertzel's own account of that official embarking on AGI by himself and his friends in the early 2000s, and of the coining of the term; I started to use it myself under the influence of his circle. I haven't read the book, though.

I had a related terminology story also from the early 2000s, which I may tell in another post.

Other interesting AGI-related articles from Peter's blog:
