Friday, June 29, 2018

Program Synthesis, Self-Programming, DeepCode - CodeGen - Synthesis of Everything - Казбород

Bulgarian research group in Code Synthesis in Switzerland

Do you know about a trendy research group in code synthesis based in Zurich, Switzerland? Its leaders are several Bulgarian researchers from ETH, with a spin-off start-up called "DeepCode". They use "Big Code", such as GitHub, and neural networks, combined with other methods.

A recent article, an interview with Martin Vechev, one of the leaders of DeepCode.

(My German is not that good either, but... Google Translate from German to English is good enough. :) )

Older Notes from 5.2.2018 :

A recent university lecture course on Reliable and Interpretable Artificial Intelligence (Fall 2017), taught at ETH Zurich by several fellow Bulgarian researchers:

Deep Code, Code Synthesis, AI

Thanks to Ivan Dzheferov for the links!


Todor and Code Synthesis

For the record, I've been in the code synthesis domain too, but it "doesn't count yet" publicly, because it was not done in an academic environment and style; it was part of my general AGI notes and research activities, and I have neither published it nor completed it into something really practical.

Note, however, that when it becomes practical as I foresee it, it will "explode", because my imagination aims at "AGI-complete" code synthesis, integrated with the "general AGI". So far the ideas were not based on DNNs and a millions-of-lines-of-code style; they were more "conceptual", aiming at hitting a lot of targets with a few bullets. :) For example - by writing a little code and letting it find and write the rest itself.

However, I've switched focus. A friend and developer, Ivan, notified me about that research group some time ago - because he knew my "radical" claims, from maybe 4-5 years earlier*, that I believed software development had no future as a domain and profession.

Software should be developed (generated) automatically and certainly not the way it was done either then or now.

It was quite a claim to express in front of developers, so I avoided making it in public :D, with a few exceptions, since I expected to be ridiculed by people who had no clue. They believed that "AI"* was "science fiction", that computers "can't do that". "AI" - because even today "AGI" is not a popular term.
In an interview with Martin Vechev for a Bulgarian medium, he mentions AGI at the end as a distant goal, with his Bulgarian translation "Всеобщ изкуствен интелект" ("universal artificial intelligence"); my term was "Универсален изкуствен разум" ("universal artificial mind") and other variants, as well as just "Изкуствен разум" ("artificial mind") - to distinguish it from the tainted "интелект" ("intellect").

I'd better focus back. :))

[Article trimmed ... To be continued...]

Wednesday, June 27, 2018

"Delayed gratification" is an ILL-Defined Concept

The article claims that today's children have shown more self-control.

I question the abstract settings of that kind of experiments, the generality and the direction of the conclusions.


The setting is ill-defined, and so is the concept of "delayed gratification" itself, especially for little children; the conclusions are too general. The experimenters assume that the children believe that two candies later are "a bigger gratification" than one. Why should the child think so? Why shouldn't a child be satisfied with just ONE now and then go do something else, seek another, different, more meaningful and interesting gratification - such as play? If she's satisfied anyway, why should she wait?

"A higher salary is better" - is it, and at what other cost? Is money (or the number of candies) the only unquestionable measure of "gratification"?

What if the children didn't understand the question the way the experimenter defined it? What if they waited more because they were more suggestible and obedient in doing what they were told? (It's true that "schooling" from an earlier age contributes to development in that direction.)

...Or because they have other rewards, such as "video drugs", and care less about that candy. Or because they could get a candy anyway afterwards, so they are not attracted.

Also, are those two candies (eaten at once) a bigger gratification? In practice they would be eaten in about the same time - do the little children count that? (If they saved them for another day, that would be a "delayed gratification".)

Do the children understand "more" in the same way, and also *did they believe that the waiting costs less*, and when they wait, *do they wait because they prefer to please the authority figure who gives them the task*?

One thing that the test measures is some "patience", assuming that it is by definition endured for "gratification", namely for the eating of the candy itself.

The article mentions "not on medication for ADHD" and "attention", but I think that's ill-defined too, because the children who don't want to wait for *a candy* may well wait for something else *that they do care about* more and that has "value" for them; i.e., I don't think the conclusions are transferable by default, *especially* since the children are young and probably do not always generalize by themselves.

"Delaying gratification" in a setting with a subjectively accepted higher reward could be interpreted as more "GREED", and in the case of little, few-year-old children: as a higher susceptibility to pleasing the authority figure, or to answering what she assumes she is expected to say.

Also I suspect that some of the children do not understand all of the conditions of what they were asked to do.


It is true that waiting, as patience and "sustained focus", correlates with "higher IQ" and other test results in *some tests* - studying and "success" require patience.

It is true also that pleasing the authority figures in human societies usually leads to "success" by the measures and values defined by those authority figures.

However, patience and "delayed gratification" also correlate with less ability to contradict the orders of the authority figure, which correlates with less inclination towards critical thinking and creativity under authority pressure.

The rewards for those who "delay gratification" are to a greater extent defined by their authorities; these children may have accepted and adopted the values of their "experimenters" more deeply, and while "delaying gratification" they question the truth of the values they hold less.

Similarly, some of the children who according to their experimenters "lacked self-control" (based on the experimenter's definition) may actually have *REJECTED* or ignored the external control imposed by the experimenter/teacher, and thus did what they wanted instead of what they were supposed to do according to the experimenter's values, "gratification criteria" etc.

(I may have encountered similar thoughts in the past).

Wednesday, June 6, 2018

Кръг Artificial Mind | Circle Artificial Mind - AGI, machine learning, film and game making, events, conferences, research

An interview with me about my new project:

Focus: artificial intelligence, artificial general intelligence (AGI), machine learning, film and game making; sports, partying and more.

A continuation of the experience with the "mini-conference" that I organized in 2012 in Plovdiv, and of the "Дружество Разум" ("Mind Society") from my teenage years in the early 2000s, plus film, social and other things from "my spirit".

The story is dramatic and includes the now-dissolved hackerspace and shared workspace association "Hackafe" (Hackafe Plovdiv). Some of my other recent creative pursuits are mentioned too - read the interview.

"Action heroes" are sought as partners. Perhaps in the autumn, if I find enough partners, we will organize a second mini-conference. So far one member of the old crew from back then has approved the idea.


Q: (...) Could you tell us more about the "conference"?

A: Of course, the word "conference" and its pretentious name were used with a sense of humor. The gathering itself went with laughter, as you can tell from the photos and from my "Hawaiian shirt", but the participants were serious: Svetlin and Daniel are now PhD students in robotics and artificial intelligence in Edinburgh; Orlin became a renowned robotics engineer. We had the moral support of Petar Kormushev, then head of a robotics research group at the Italian Institute of Technology in Genoa and, the following year, winner of the "John Atanasoff" award. Also present were yours truly - the author of the first two university courses on AGI in the world - one of my best students from the second course, and two other guests. (...)

Wednesday, May 23, 2018

"Бягство от прекрасния видео свят" - an article by Tosh in the newspaper "Пловдивски университет"

It came out in issue 3-4 at the end of April. Thanks to Tilyo Tilev!

My article is on page 22.

"When I was little, the grown-ups used to scare us that we would ruin our eyes from watching television. I was startled, but I couldn't stop watching, so I tried at least to reduce the harm by blinking and keeping my eyelids closed for longer. Soon, though, I was staring normally again, because it seemed they were pulling our leg..."

An English title would be: "Escape from the brave video world"

Saturday, February 24, 2018

MIT creates a course in AGI - eight years after Todor Arnaudov at Plovdiv University

Well, good job, MIT - just 8 years after my first course in AGI at Plovdiv University and 7 after the second. I'd like to thank my Alma mater and prof. Manev and prof. Mekerov, too. See the syllabus of the course on Universal Artificial Mind ("Универсален Изкуствен Разум", УИР) in Bulgarian on the university's page, and of the following course. Lectures, materials (some in English):

MIT's course:

It's open, and it seems it started a month ago, in January 2018.

Watch the lectures on Lex Fridman's YouTube channel.

Me with my best students at the end of the first course:

* The shirt with the caption "Science = Art + Fun" is from my participation in the science communication competition "FameLab Bulgaria" ("Лаборатория за слава") the previous year.

Right, I looked as if I were the youngest student... :D


( Edit: I noticed I had first written "seven years since the first", but the first one was in spring 2010 - so it's almost 8 years. )

Friday, February 16, 2018

Toshko 2.070 - controlling the speech synthesizer via Python and PHP | Тошко 2.070

The new version of the synthesizer, for those who want to experiment, listens for a command with an utterance, records the spoken output to an mp3 file and returns it to the client application, which decides whether and how to play it. The sample Python script prints the binary data to the console, while the PHP one writes it to disk and plays the file.

Download Toshko 2.070

Toshko 2 website


1. Toshko 2.070 (EXE) - a new executable file.

2. Python 2.7 scripts

Folder /scripts

You may also need to install a few libraries: Installing python modules


If the automatic path setup is fine and you don't use other versions as well (e.g. Python 3.x), then typing "python" in the console in the scripts folder should invoke the interpreter:

> python

If that doesn't work and you don't feel like dealing with PATH/Environment settings, specify the full path:

('Thu Feb 15 17:57:20 2018', 'Toshko POST Server Starts - localhost:8079') - a POST server - sending a message with the utterance to the synthesizer

Open it and set the path to the folder where the synthesizer outputs the mp3 files.

For example, if you installed the program in C:\\program files\\toshko2\\
then configure it by completing the path:

mp3Path = "C:\\program files\\toshko2\\mp3\\"
(Important - use double backslashes.)
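For reference, here is a minimal Python 3 client sketch along those lines. It is an assumption-laden illustration, not the shipped script: the parameter names @say, @vowels, @consonants, @speed and port 8079 are read off the server log below, and the URL and defaults may differ in your setup.

```python
# Hypothetical minimal client for the Toshko 2.070 POST server.
# Assumptions (from the debug log): parameters @say/@vowels/@consonants/@speed,
# server at localhost:8079, response body = raw mp3 data.
import urllib.parse
import urllib.request

TOSHKO_URL = "http://localhost:8079"  # port from the server start banner

def build_payload(say, vowels="2.0", consonants="0.5", speed="1.0"):
    """URL-encode the utterance and the voice parameters."""
    params = {"@say": say, "@vowels": vowels,
              "@consonants": consonants, "@speed": speed}
    return urllib.parse.urlencode(params).encode("utf-8")

def say(text):
    """POST an utterance; return the synthesized mp3 bytes."""
    req = urllib.request.Request(TOSHKO_URL, data=build_payload(text))
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Usage (with the synthesizer running):
#   open("out.mp3", "wb").write(say("Искам да кажа нещо..."))
```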

3. PHP scripts
Folder /scripts


You need to have a PHP server running, e.g. via WAMP.

Put the files in the appropriate server folder, e.g. C:\WAMP\WWW\WEB

Then the scripts are invoked through a browser:


I did these tests about 2 years ago, but I didn't publish them then, because in this form they require technical fine-tuning, and because it is not the way it should be. For now text is simply sent; the settings are made only from the application.

The desirable setup is for the remote control to govern the speech apparatus in every detail, so that the text processing - stresses, pauses, phrases, analysis and generation of intonation contours, etc. - can also be moved into a form that is easier to modify, for example in Python.

Unfortunately I didn't feel like continuing it back then - "some day".

Sample console output from the POST server:

 called do_POST?
tosh-PC - - [15/Feb/2018 17:24:17] "POST / HTTP/1.1" 200 -
{'@consonants': ['0.5'], '@speed': ['1.0'], '@say': ['\xd0\x98\xd1\x81\xd0\xba\xd0\xb0\xd0\xbc \xd0\xb4\xd0\xb0 \xd0\xba\xd0\xb0\xd0\xb6\xd0\xb0 \xd0\xbd\xd0\xb5\xd1\x89\xd0\xbe...'], '@vowels': ['2.0']}
['\xd0\x98\xd1\x81\xd0\xba\xd0\xb0\xd0\xb6\xd0\xb0 \xd0\xbd\xd0\xb5\xd1\x89\xd0\xbe...']
Before say1251 = ...
BUSY... Communicating with Toshko...

before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [58, 58, 58, 102, 44, 49, 52, 44, 48, 46, 53, 44, 55, 44, 50, 46, 48, 59, 36, 36, 36, 112, 121, 116, 104, 111, 110, 83, 97, 121, 52, 51, 56, 51, 57, 50, 54, 59, 10, 200, 241, 234, 224, 236, 32, 228, 224, 32, 234, 224, 230, 224, 32, 237, 229, 249, 238, 46, 46, 46])
OK! READY for new requests

before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [36, 36, 36, 107, 117, 114, 49, 46, 109, 112, 51])
before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [36, 36, 36, 197, 245, 238, 238, 238, 46, 46, 46, 32, 195, 250, 231, 32, 227, 238, 235, 255, 236, 32, 46, 46, 46, 49, 50, 51, 52, 53, 54, 55, 56, 57
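The Cyrillic byte values in the log (200, 241, 234, 224, 236 = "Искам") are windows-1251 codes. A small sketch of that conversion step, as I read it from the log (not the exact server code):

```python
# Re-encode an utterance to cp1251 and expose it as a byte array,
# the form in which the log above shows it being handed to the synthesizer.
import array

def to_synth_buffer(text):
    """Return the utterance as an array of windows-1251 byte values."""
    return array.array("B", text.encode("cp1251"))

buf = to_synth_buffer("Искам да кажа нещо...")
print(list(buf)[:5])  # → [200, 241, 234, 224, 236], i.e. "Искам"
```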

Monday, February 5, 2018

Sensori-Motor Grounding of Surface Properties - an Exercise Trace of the Thoughts and Tree of Questions by Todor from 2011 in AGI List

After the selected emails from 2012, where I discussed generalization and the real meaning of "invariance" in 3D, I'm sharing another selected letter from the AGI list on general intelligence and sensori-motor grounding and its connection to symbolic/abstract representation. The "glue" binding this to a real system is the specific processing environment, which applies the sensori-motor mapping and gradually traverses the possible actions ("affordances") within a specific input space (a universe, an environment) and maps them to the sensory data hierarchically, with incremental complexity. It should gradually increase, for example, the number and the range of the parameters defining a particular current "highest complexity" conception - e.g. involving more modalities of input and output (action), a wider range in space and time - which in the example below are eventually represented as words ("a house", "a surface", ...).

The system's "motor" should be "ignited" to explore and the exploration should generate the higher level representations out of the simple sensory inputs like the ones explained below.

Note that the learning - the inductive, discovery - process starts from the end of this "trace of the thoughts". The reasoning was to show that it is possible and even easy/obvious to introspectively trace it from the general conceptions down to the specific and how "low complexity" these abstractions actually were.

See also:

Todor's: "Chairs, Buildings, Caricatures, 3D-Reconstruction..." and that semantic analysis exercise back from March 2004 Semantic analysis ...

Kazachenko's "Cognitive Algorithm" which claims to incrementally add "new variables".

from Todor Arnaudov twenkid @ ...
date Sun, Sep 11, 2011 at 3:12 AM
subject Re: [agi] The NLP Researchers cannot understand language. Computers could. Speech recognition plateau, or What's wrong with Natural Language Processing? Part 3
mailed-by (....)

IMHO sensorimotor approach has definitely more *general* input and output - just "raw" numbers in a coordinate system, the minimum overloaded semantics.

[Compared to "purely symbolic". Note that sensori-motor doesn't exclude symbolic - this is where it converges after building a sufficiently high or long hierarchy (inter-modal, many processing stages, building an explicit discrete dictionary of patterns/symbols) and when it aims at "generality", "compression" or partially arbitrary point of view of the evaluator who's deciding whether something is "symbolic". The way sensori-motor data is processed may also be represented "symbolically", "mathematically" (all code in a computer is supposed to be). The "not symbolic" sense is that it's aimed to be capable of mapping the structure of the emerging conceptions, correlations, "patterns" ("symbols"...) to a chain or a network, or a system of discoveries and correlations within a spatio-temporal domain in the "general" "raw sensory input" from the environment, or one that can be mapped to such input. On the other hand the "purely symbolic" combinations have no explicit connection to that kind of "most general" "raw input". Note, 7.1.2018]
That way the system has higher resolution of perception and causality/control (my terms), which is how close the output/input can be recovered to the lowest laws of physics/properties of the environment where the system acts/interacts.

I think the "fluidity"/"smoothness" that Mike talks about is related to the gradual steps in the resolution of generalization and the detail of patterns, which is possible if you start with the highest available sensory resolution and gradually abstract details while keeping relevant details at relevant levels of abstraction, using them on demand when you need them to maximize match/precision/generality. When a system starts from too high an abstraction, most of the details are gone.

[However, that's not that bad by default, because what remains is the most relevant - the spaces of the affordances are minimal and easily searchable in full, even introspectively. See below. Note, 5.2.2018]

BTW, I did this little exercise to trace what really some concepts mean:

[Starting randomly from some perception or ideas, thoughts and then the "Trace of the thoughts" process should converge down to the basic sensori-motor records and interactions from which the linguistic and abstract concepts have emerged and how.]...

What is a house?
- has (door, windows, chairs, ... ) /

What is a door?

has(...)... //I am lazy here, skip to few lines below...

is (wood, metal, ...)

What is wood?

is(material, ...)

What is material?

What is surface?

What are material properties?

-- Visual, tactile; weight (force); size (visual, tactile-temporal, ...)

has(surface, ...)

is(smooth, rough, sharp; polished...)

What are surface properties? //An exercise on the fly

- Tactile sensory input records (not generalized, exact records)

- Visual sensory input -- texture, reflection (that's more generalized, complex transformations from environmental images)

- Visual sensory input in time -- water spilled on the surface is being absorbed (visual changes), or it forms pools

-- How absorption is learned at first?

---- Records of inputs, when water [was] spilled, the appearance of the surface changes, color gets darker (e.g. wool)

-- How not absorbing surface is discovered?

---- Records of inputs, when water spilled, appearance of the surface changes; pool forms
------  [pools are] changes in brightness, new edges in the visual data [which are] marking the end of the pools

-- How is [it] learnt that the edges of the pools are edges of water?
---- [By] Records of tactile inputs -- sliding a finger on the surface until it touches the edge, the finger gets wet

-- What is "wet"?

---- Tactile/Thermal/Proprioception/Temporal records of sensory input:

---- changing coordinates of the finger

---- finger was "dry"

---- when touching the edge:

------ decrease in temperature of the finger [is] detected

-- when [the "wet"] finger touches another finger, ... or other location, thermal sensor indicates decrease of other's temperature as well

-- when [the] finger slides on the surface when wet, it feels "smoother" than when "dry"

[What is "smoother"?]

-- "Smoother" is - Temporal (speed), proprioception + others

-- The same force applied yields to faster spatial speed [that maps to "lower friction"]

[What is "faster [higher] speed"?]

-- "Faster"[higher] speed is:

---- [When] The ratio of spatial differences between two series of two adjacent samples is in favor of the faster.

-- The friction is lower than before touching the edge of the pool.

[What is "friction"?]

-- Friction is:

-- Intensity of pressure of receptor, located in the finger.

Compared to the pressure recorded from other fingers, the finger which is sliding measures higher pressure than the other fingers.
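The trace above can be sketched as a toy network of has/is links that bottoms out in raw sensory records; all names below are illustrative, not an implementation:

```python
# Toy knowledge base: abstract concepts link downward via has/is relations
# until they ground out in sensory-level records (illustrative names only).
KB = {
    "house":   {"has": ["door", "windows", "chairs"]},
    "door":    {"is": ["wood", "metal"]},
    "wood":    {"is": ["material"], "has": ["surface"]},
    "surface": {"is": ["smooth", "rough", "sharp"]},
    "smooth":  {"is": ["tactile-record"]},  # a grounded sensory record
}

def ground(concept, out=None):
    """Collect the chain of concepts from an abstraction down to the leaves."""
    if out is None:
        out = []
    out.append(concept)
    for rel in ("has", "is"):
        for sub in KB.get(concept, {}).get(rel, []):
            ground(sub, out)
    return out

print(ground("house"))  # the chain bottoms out at "tactile-record"
```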


So yes, it seems we can define it with NL [Natural Language], but eventually it all goes back down to parameters of the receptors -- which is how it got up first. Also we do understand NL definitions, because we *got* general intelligence.

An AGI baby doesn't need to have "wet" defined as "lower temperature" etc. -- it just touches, slides a finger etc., keeps the records, and generalizes on them.

Then it associates it with the word "wet" which "adults" w (....)

--- THE END ---

Thursday, January 4, 2018

The lack of operational hierarchical structure in the Deep Learning ANN neural networks

A survey paper on the issues of Deep Learning by Gary Marcus:

The author has a valuable mix of expertise both in the ANN development and in linguistics, developmental psychology, cognitive psychology.

There are good points on the lack of real hierarchical structure in (current/regular) DL/ANNs, stressing that they are actually "flat", even though there are "layers", which gives a confusing impression.

"To a linguist like Noam Chomsky, the troubles Jia and Liang documented would be
unsurprising. Fundamentally, most current deep-learning based language models
represent sentences as mere sequences of words, whereas Chomsky has long argued that
language has a hierarchical structure, in which larger structures are recursively
constructed out of smaller components"

G.Marcus p.9

See also Jeff Hawkins' point since 2004 ("On Intelligence"/Numenta), Dileep George's Vicarious, Boris Kazachenko's "Cognitive Algorithm", the old Hierarchical Markov Models, and probably many other researchers; also myself, since my early-2000s writings, where even as a teenager I realized that the human general intelligence faculty is a hierarchical simulator and predictor of virtual universes.

The ANNs (without being put in another system/organization) lack operational structure.

A good survey and discussion of areas where DL fails, with an emphasis on the lack of transfer learning, i.e. that the networks are not general intelligences and don't "understand" the concepts (the "overattribution" of DeepMind's Atari player's discovery of "tunnels", see p.9).

* However, I don't like the pretentiousness in some parts of the article, discussing trivialities and proposing alternatives with a 15+ year(?) delay, with a pinch of academic glamour.

E.g. unsupervised learning (not that common boring classification of fixed images) and self-organization; incremental complexity/"self-improvement" - "Seed AI"; hierarchical operational structure; "symbol grounding" - the emergence of generalizations/"symbols" and "abstract thought" from sensory processing; different levels of abstraction - including "symbolic"; causality understanding (prediction, simulation of "virtual universes"); general/universal game playing; application of general educational tests/measures (since AGI is about that; the term "human level (general) (artificial) intelligence" was used in the past) etc. (Not just "pattern matching" on synthetic static tests.)

The above is what AI was always supposed to be about - AGI - at least as some talented teenagers and others realized and shouted to the world in the early 2000s, dismissing the poisoned term "AI". Everything was called "AI" back then - somewhat similarly today, AI is ubiquitous, yet not general and lacking a personal wholeness.

These suggestions and conclusions would be informative for the hard-core AI-ers, though (the programmer-mathematician type); it seems the "general"-... part still has a way to go as a concept for the "mainstream" developer community with its "Narrow AI" attitudes**.


** Narrow AI - another forgotten term, which on second thought is still relevant. Current DL is in fact "narrow AI": each network is trained for a specific class of problems ("classification") and, as well explained in the paper, can't generalize concepts and transfer the knowledge to different domains.

*** I "don't like" my own pretentiousness, too, but I consider it funny and ironic, rather than serious like in the paper. :P

**** Thanks to Preslav Nakov for sharing the link!


Compare the educational test proposals with one of the first articles in this blog, a decade ago:

Wednesday, November 14, 2007
Faults in Turing Test and Lovelace Test. Introduction of Educational Test.

I didn't explicitly define the exact kinds of tests, because they were already given in detail in the appropriate textbooks on the set of expected skills and knowledge for the respective age or educational level.


The article reminds me of the series of articles "What's wrong with Natural language processing?", starting from the year 2009:


Vicarious' demo video summarizing ANN reinforcement learning faults, and their Schema Networks:

Sunday, December 31, 2017

CapsNet, capsules, vision as 3D-reconstruction and re-rendering and mainstream approval of ideas and insights of Boris Kazachenko and Todor Arnaudov

First impressions on Hinton et al. "Capsules"/CapsNet update to the convolutional NN/CNN that got popular recently with their latest paper on Dynamic routing.

1. Hinton approves Boris Kazachenko's old claim and criticism to ANN in his Cognitive Algorithm (CogAlg) writings that the coordinates of the input should be preserved and that this is one of the CNN/ANN design faults.

2. The "Dynamic routing" sounds to me like their way to generate "new syntax" in CogAlg terms, as different ways to evaluate the input. Boris disagreed, though; he corrected that it maps to his "skipping" (of levels).

3. The intended focus on particular smaller-region features per "capsule"/"group of neurons" ~ (mini-)columns reminds me of  Numenta/Jeff Hawkins' approach, i.e.: a) cortical algorithm - a structure of functional modules, not just "neurons" b) higher modularity

All of the above seems as steps ahead to finer granularity of the patterns that the systems would model.

4. Besides, if I understand correctly, Hinton agrees with my claim/observation from early 2012 that vision/(object recognition) is ultimately 3D-reconstruction* and comparison of normalized 3D-models at various levels of detail - "inverse graphics".

My view* is that "understanding" is the ability of the system to re-render what it sees with adjusted or with changed parameters, which, in their terms seems to map to keeping the "equivariance" (or "match" in CogAlg terms), or as I see it: to simulate/traverse the pattern in the space of its possible states.

That's according to "Does the brain do Inverse graphics?", published on YouTube on 25.08.2015, a recording of a lecture at a "Graduate summer school", Toronto, 12.7.2012, from:

Slides by Kyuhwan Jung, 9/11/2017: ...p.8: "...We need equivariance, not invariance"

* To me it's supposed to be obvious; I think it's obvious to cognitive psychologists (Hinton mentions the mental rotation tests), to artists, to researchers, to those who study human vision and optical illusions.

Another earlier article of mine from 1.1.2012:

 Colour Optical Illusions are the Effect of the 3D-Reconstruction and Compensation of the Light Source Coordinates and Light Intensity in an Assumed 2D Projection of a 3D Scene


However, it wasn't obvious, for example, in the AGI community below, or if one is doing messy ANNs where there's no reconstruction, only "weights" and "convolutions". Everyone was talking about "invariance".

** Boris' comment on capsules in his site:

"Actually, recently introduced “capsules” also output multivariate vectors, similar to my patterns. But their core input is a probability estimate from unrelated method: CNN, while all variables in my patterns are derived by incrementally complex comparison. In a truly general method, the same principles must apply on all stages of processing. And additional variables in capsules are only positional, while my patterns also add differences between input variables. That can’t be done in capsules because differences are not computed by CNN."


Archive from the AGI List from the year 2012

At that time "invariance" was a buzzword on the AGI email list. See more below in the digest I prepared from 4 threads from that era, back in 2012. I haven't visited that place for a long time; the emails should be there if it's still active.

1. Generalization – Food and Buildings, 1/2012
2. General Algorithms or General Programs, 4/2012
3. Generalization - Chairs and Stools , 10/2012
4. Caricatures, 5/2012

Read in:  Chairs, Caricatures and Object Recognition as 3D-reconstruction (2012)

The 4-th email from the "General algorithms..." thread:

Todor Arnaudov Fri, Apr 27, 2012 at 1:12 AM

I don't know if anyone in this discussion realized that "invariance" in vision is actually just:

- 3D-reconstruction of the scene, including light source and the objects

- Also colours/shades and the textures (local/smaller higher resolution models) are available (for discrimination based on this, may be quicker/needed for objects which are otherwise geometrically matched)

[+ 16-7-2013 - conceptual “scene analysis”, “object recognition” involves some relatively arbitrary, or just flexible, selection criteria for the level of generalization for the usage of words to name the “items” in the scene. To Do: devise experiments with ambiguous objects/scenes, sequences. … see “top-down”, … emails 9, 14, 15]

If the normalized 3D-models (preferably in absolute dimensions), lights and recovered original textures/colors (taking into account light and reflection) are available, everything can be compared perfectly and doesn't require anything special - no "probabilities" or the like. The textures and light most of the time don't even alter the essential information - the 3D-geometric structure.

"2D" is just a crippled 3D

"Invariants" in human fully functional vision are just those 3D-models (or their components, "voxels") built in a normalized space; the easiest approach for quick comparison is voxels, perhaps mixed with triangles, and of course textures and colours also participate.

Every 3D-model has a normalized position per its basis, and also some characteristic division of major planes and position between the major planes, and there are "intuitive" ways to set the basis --> gravity/the ground plane foundations, which is generalized to "bottom", i.e.:

-- The "bottom" of an object, which faces the ground, is the part of the image of the object which projects on the "bottom" of the scanlines of the retina, because that's inferred for the first objects, which always have stable touch with the "ground".

When generalizing or specializing, the resolution of the 3D-models to be compared is changed (see the thread where I gave the example of how the concept of a "building" is produced); at a particular stage any two 3D-models match, eventually converging to a cube or a plane.
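As a toy illustration of that convergence (my sketch, not an actual vision system): voxelize two objects and compare them at decreasing resolution; detailed grids match only partially, while at a coarse enough level both collapse to the same block and match fully.

```python
# Compare voxelized shapes at decreasing resolution: generalizing means
# coarsening the grids until any two models converge (here, to one cell).
import numpy as np

def downsample(vox, factor):
    """Coarsen a cubic boolean grid: a coarse cell is filled if any sub-voxel is."""
    n = vox.shape[0] // factor
    v = vox[:n * factor, :n * factor, :n * factor]
    return v.reshape(n, factor, n, factor, n, factor).any(axis=(1, 3, 5))

def match(a, b):
    """Fraction of voxels on which two grids agree."""
    return (a == b).mean()

chair = np.zeros((8, 8, 8), bool)
chair[0:4, 0:1, 0:4] = True   # seat plane
chair[0:4, 0:8, 0:1] = True   # back plane
stool = np.zeros((8, 8, 8), bool)
stool[0:4, 0:1, 0:4] = True   # seat plane only

print(match(chair, stool))                                 # detailed: partial match
print(match(downsample(chair, 8), downsample(stool, 8)))   # coarse: full match, 1.0
```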

IMO the brain is in fact not very good at further mental rotation of those models - yes, we know those IQ tests, but humans do them very slowly, and the tests consist of very few crossing planes, because it gets too complex.

Con: "How can you say that it's "just" 3D-reconstruction? That's so complex!"

- Well, one may think so only if she is not familiar with triangulation (photogrammetry dates back to the 19th century) and/or the spectacular work of Marc Pollefeys.

"How do you recognize that this is your chair, if it's upside down and you haven't seen it like that before?"

As with the mistakes about generalization: a "chair" is a generalized concept, it's not a pixel-by-pixel image; rough 3D-models are compared to find a match. A match means that the size relations of the boxes and planes, the color (after light correction) and the texture match those of the chair from the previous day to a higher degree than they match those of "chairs" found elsewhere, or of any other "objects".

A "chair" [a stool] generally is just:

-- A plane which is perpendicular to the "ground" direction vector, which is a vector which is parallel to "gravity" - that is the vector where objects go when let without a support;
-"support" is a vector consisting of "solid" connection (of forces, impacts) to the "ground" which when existing prevents objects from getting closer to the "ground" (falling);
- the "ground" is a plane where objects stop their motion (changes of coordinates between subsequent samples) if left without support and not impacted by other moving "things", etc.

Most chairs can be reduced to a few solids and still be recognizable.

AGI is way simpler than it seems.

Saturday, December 30, 2017

Hackafe Logo and Over the Moon+ Shader Аrt on Shadertoy

1. Hackafe Logo -  and its sad-funny story

An animation with the logo of the Plovdiv hackerspace "Hackafe" and a sad-funny tale of its history: ..

2. Over the Moon+  BigWings, extended by Tosh/Twenkid


3. Craters by NickWest, mapped to a sphere:

Thursday, September 28, 2017

XAI - or explainable AI - a new buzz word

XAI, or eXplainable AI - that's DARPA's new way of addressing the problem that intelligence is about understanding, analysis, causality, prediction/planning etc.

That reflects, or goes along with, political tendencies to limit the usage of CNN/RNN/Deep Learning systems when they can't give a "reasonable" explanation of their decisions, for example in an automatic selection process.

That article gives more info about the new laws:

The political part is silly, though - for example its advocacy that lawyers are the ultimate gods of the Universe, or the "good and evil" examples. :)

Humans are also susceptible to the same faults of choosing the prevailing opinion of "supervised learning" (authorities) and "reinforcement learning" (rewards and punishments to direct opinions and decisions).


ОИИ - oh-oh-ee... Explainable artificial intelligence ("Обясним изкуствен интелект") - a new word for what AI should always have been, when built in stages and with awareness and language.

Until now: universal artificial mind - УИР, УИИ, AGI, ...