Friday, June 29, 2018

Program Synthesis, Self-Programming, DeepCode - CodeGen - Synthesis of Everything - Казбород


Bulgarian research group in Code Synthesis in Switzerland


Do you know about a trendy research group in Code Synthesis based in Zurich, Switzerland? Its leaders are several Bulgarian researchers from ETH, with a spin-off start-up called "DeepCode". They use "Big Code", such as GitHub, and neural networks, combined with other methods.

A recent article, an interview with Martin Vechev, one of the leaders of DeepCode:


https://www.republik.ch/2018/06/27/programmier-dich-doch-selbst

(My German is not that good either, but... Google Translate from German to English is good enough. :)  )

Older Notes from 5.2.2018 :


http://www.srl.inf.ethz.ch/riai.php

A recent university lecture course on Reliable and Interpretable Artificial Intelligence (Fall 2017), taught at ETH Zurich by several fellow Bulgarian researchers:

Deep Code, Code Synthesis, AI

Thanks to Ivan Dzheferov for the links!

...

Todor and Code Synthesis



For the record, I have been in the Code Synthesis domain too, but it "doesn't count yet" publicly, because it was not in an academic environment and style; it was part of my general AGI notes and research activities, and I have neither published it yet nor completed it into something really practical.

Note, however, that when it becomes practical as I foresee it, it would "explode", because my imagination aims at "AGI-Complete" code synthesis which is integrated with the "general AGI". So far the ideas were not based on DNNs and the millions-of-lines-of-code style; they were more "conceptual", aiming at hitting a lot of targets with a few bullets. :) For example - by writing a little code and letting it find and write the rest itself.

However, I've switched focus. A friend and developer, Ivan, notified me about that research group some time ago - because he knew my "radical" claims, held for maybe 4-5 years*, that I believed software development had no future as a domain and profession.

Software should be developed (generated) automatically and certainly not the way it was done either then or now.

It was quite a claim to make in front of developers, so I avoided doing it in public :D, with a few exceptions, since I expected to be ridiculed by people who had no clue. They believed that "AI"* was "science fiction", that computers "can't do that". "AI" - because even today "AGI" is not a popular term.
In an interview with Martin Vechev for a Bulgarian media outlet, he mentions AGI at the end as a distant goal, with his Bulgarian translation of the term ("Всеобщ изкуствен интелект"; my term was "Универсален изкуствен разум" [Universal Artificial Mind] and other variants, as well as just "Изкуствен разум" [Artificial Mind] - to set it apart from the corrupted term "интелект" [intellect]).

I'd better focus back. :))

[Article trimmed ... To be continued...]

Wednesday, June 27, 2018

"Delayed gratification" is an ILL-Defined Concept

https://www.technologynetworks.com/neuroscience/news/the-marshmallow-test-todays-kids-show-more-self-control-305353

The article claims that today's children have shown more self control.

I question the abstract settings of that kind of experiments, the generality and the direction of the conclusions.

Comment: 

The setting is ill-defined, and so is the concept of "delayed gratification" itself, especially for little children. The conclusions are too general. The experimenters assume that the children believe that two candies later are "a bigger gratification" than one now. Why should the child believe that? Why shouldn't a child be satisfied with just ONE now and then go do something else, seek another, different, more meaningful and interesting gratification - such as play? If she is satisfied anyway, why should she wait?

"A higher salary is better" - is it, and at what other cost? Is money (or the number of candies) the only unquestionable measure of "gratification"?

What if the children didn't understand the question the way the experimenter has defined it? What if they waited more because they were more suggestible and obedient, doing what they were told? (It's true that "schooling" from an earlier age contributes to developing in that direction.)

...Or because they have other rewards, such as "video drugs" and care less about that candy. Or because they could get a candy anyway afterwards and they are not attracted.

Also, are those two candies (eaten at once) a bigger gratification? In practice they would be eaten in about the same time - do the little children take that into account? (If they saved one for another day, that would be a "delayed gratification".)

Do the children understand "more" in the same way, and also *do they believe that the waiting costs less*, and when they wait, *do they wait because they like more to please the authority figure who gives them the task*?

One thing that the test measures is some "patience", assuming that it is by definition endured for the sake of "gratification", namely the eating of the candy itself.

The article mentions "not on medication for ADHD" and "attention", but I think that's ill-defined too, because the children who don't want to wait for *a candy* may well wait for something else *which they do care about* more and which has a "value" for them; i.e. I don't think the conclusions are transferable by default, *especially* since the children are young and probably do not always generalise by themselves.



"Delaying gratification" in a setting of a subjectively accepted higher reward could be interpreted as more "GREED", and in the case of little, few-year-old children: a higher susceptibility to please the authority figure, or to answer what she assumed she was expected to say.

Also I suspect that some of the children do not understand all of the conditions of what they were asked to do.

...

It is true that waiting as Patience and "sustained focus" is correlated to "higher IQ" and other test results in *some tests* - studying and "success" require patience.

It is true also, that pleasing the authority figures in human societies usually leads to "success" in the measures and values, defined by those authority figures.

However patience and "delayed gratification" are correlated also with less ability to contradict the order of the authority figure, which is correlated to less inclination towards critical thinking and creativity under authority pressure.

The rewards for the "delayed gratification" ones are to a greater extent defined by their authorities; these children may have accepted and adopted more deeply the values of their "experimenters" while "delaying gratification", and they question less the truth of the values they hold.

Similarly, some of the children who according to their experimenters "lacked self-control" (based on the experimenters' definition), may actually have *REJECTED* or ignored the external control imposed by the experimenter/teacher, and thus did what they wanted, instead of what they were supposed to do according to the experimenter's values, "gratification criteria" etc.

(I may have encountered similar thoughts in the past).

Wednesday, June 6, 2018

Кръг Artificial Mind | Circle Artificial Mind - AGI, machine learning, film and game making, events, conferences, research


An interview with me about my new project:

http://mind.twenkid.com 

Focus: artificial intelligence, artificial general intelligence (AGI), machine learning, film and game making; sports, partying etc.


A continuation of the experience with the "mini-conference" which I organized in 2012 in Plovdiv, and of the "Society Razum" ("Дружество Разум") from my teenage years in the early 2000s + film, social and other things from "my spirit".

The story is dramatic and includes the disbanded hackerspace and shared workspace association "Hackafe" (Hackafe Plovdiv). Some of my other recent creative activities are also mentioned - read the interview.

"Action heroes" are sought as partners. Perhaps in the autumn, if I find enough partners, we will organize a second mini-conference. So far one member of the old crew from back then has approved the idea.

(...)

R: (...) Could you tell us more about the "conference"?

T: Of course, the word "conference" and its pretentious name were used with a sense of humor. The gathering itself went with laughter, as can be seen in the photos and by my "Hawaiian shirt", but the participants were serious: Svetlin and Daniel are now PhD students in robotics and artificial intelligence in Edinburgh; Orlin became a renowned robotics engineer. We had the moral support of Petar Kormushev, then head of a robotics research group at the Italian Institute of Technology in Genoa, and the next year a winner of the "John Atanasoff" award. Also present were yours truly - the author of the first two university courses on AGI in the world - one of my best students from the second course, and two other guests.  (...)







Wednesday, May 23, 2018

"Escape from the brave video world" ("Бягство от прекрасния видео свят") - an article in the "Plovdiv University" newspaper by Tosh


It came out in issue 3-4 at the end of April. Thanks to Tilyo Tilev!

My article is on p. 22.

"When I was little, the adults scared us that we would ruin our eyes from watching television. I was startled, but I couldn't stop watching, so I tried at least to reduce the harm by blinking and keeping my eyelids closed for longer. Soon, however, I was staring at the screen normally again, because it seemed they were kidding us..."

https://uni-plovdiv.bg/uploads/site/vestnik/2018/vestnik_br_3-4_2018.pdf


An English title would be: "Escape from the brave video world"

Saturday, February 24, 2018

MIT creates a course in AGI - eight years after Todor Arnaudov at Plovdiv University

Well, good job, MIT - just 8 years after my first course in AGI at Plovdiv University, and 7 after the second. I'd like to thank my Alma mater and prof. Manev and prof. Mekerov, too. See the syllabus of the course on Universal Artificial Mind (Универсален Изкуствен Разум, УИР) in Bulgarian on the university's page, and of the following course. Lectures, materials (some in English): http://research.twenkid.com

MIT's course:

https://agi.mit.edu/

It's open and it seems it has started a month ago in January 2018.

Watch the lectures in Lex Fridman's channel on Youtube.



Me with my best students at the end of the first course:



* The shirt with the caption "Science = Art + Fun" is from my participation at the science communication competition "FameLab Bulgaria" in the previous year ("Лаборатория за слава").

Right, I looked as if I were the youngest student... :D

...

( Edit: I noticed I've first written "seven years since the first", but the first one was in 2010. So it's almost 8 years - spring 2010)

Friday, February 16, 2018

Toshko 2.070 (Тошко 2.070) - controlling the speech synthesizer from Python and PHP | Toshko 2.070 speech synthesizer

The new version of the synthesizer, for those who wish to experiment, listens for a command carrying an utterance, records the utterance to an mp3 file and returns it to the client application, which decides whether and how to play it. The example Python script prints the binary data to the console, while the PHP one writes it to disk and plays the file.

Download Toshko 2.070

Toshko 2 website

What's new:

1. Toshko 2.070 (EXE) - a new executable file.

2. Python 2.7 scripts

Folder /scripts

You may also need to install a few libraries: Installing python modules

win32api
win32gui
win32con

If the automatic path setup is in order and you are not also using other versions (e.g. Python 3.x), then typing "python" in the console in the scripts folder should invoke the interpreter:


> python toshko.py

If that doesn't work and you don't feel like dealing with PATH/Environment settings, specify the full path:

>"C:\Python27\python.exe" toshko.py
('Thu Feb 15 17:57:20 2018', 'Toshko POST Server Starts - localhost:8079')


toshko.py - POST server
wmcopydata.py - sends a message with the utterance to the synthesizer

Open toshko.py and set the path to the folder where the synthesizer outputs the mp3 files.

For example, if you have installed the program in C:\\program files\\toshko2\\
then configure toshko.py by filling in the path:

mp3Path = "C:\\program files\\toshko2\\mp3\\"
(Important - use double backslashes.)
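As a side note, a minimal client sketch for talking to the toshko.py POST server could look as follows. This is my illustration, not part of the release: the parameter names (@say, @speed, @vowels, @consonants) and the port 8079 are taken from the debug log shown further below in this post, but the exact request format the server expects is an assumption. The sketch is in Python 3, while the bundled scripts target Python 2.7.

```python
# Hypothetical client for the Toshko 2.070 POST server (assumptions:
# form-encoded body, parameter names and port as seen in the debug log).
from urllib.parse import urlencode

def build_payload(text, speed=1.0, vowels=2.0, consonants=0.5):
    """Encode an utterance request as an application/x-www-form-urlencoded body."""
    return urlencode({
        "@say": text,            # the text to be spoken
        "@speed": speed,         # overall speech rate
        "@vowels": vowels,       # vowel duration factor
        "@consonants": consonants,
    }, encoding="utf-8")

# Sending it (requires the synthesizer running on localhost:8079):
# import urllib.request
# body = build_payload("Искам да кажа нещо...").encode("ascii")
# with urllib.request.urlopen("http://localhost:8079/", data=body) as resp:
#     mp3_bytes = resp.read()  # the returned mp3 data
```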

3. PHP scripts
Folder /scripts

say0.php
say1.php

You need a running PHP server, e.g. via WAMP.

Put the files in an appropriate folder of the server, e.g. C:\WAMP\WWW\WEB

Then the scripts are invoked through a browser:

http://localhost/web/say0.php



I made these tests about 2 years ago, but did not publish them then, because in this form they require technical fine-tuning, and because it is not the way it should be. For now just the text is sent; the settings are made only from the application.

The desirable state is for the remote control to govern the speech apparatus in all details, so that the text processing - stresses, pauses, phrases, analysis and generation of intonation contours etc. - can be moved into a form that is easier to modify, for example Python.

Unfortunately I did not feel like continuing it back then - "some day".

 called do_POST?
tosh-PC - - [15/Feb/2018 17:24:17] "POST / HTTP/1.1" 200 -
{'@consonants': ['0.5'], '@speed': ['1.0'], '@say': ['\xd0\x98\xd1\x81\xd0\xba\x
d0\xb0\xd0\xbc \xd0\xb4\xd0\xb0 \xd0\xba\xd0\xb0\xd0\xb6\xd0\xb0 \xd0\xbd\xd0\xb
5\xd1\x89\xd0\xbe...'], '@vowels': ['2.0']}
['\xd0\x98\xd1\x81\xd0\xba\xd0\xb0\xd0\xbc \xd0\xb4\xd0\xb0 \xd0\xba\xd0\xb0\xd0
\xb6\xd0\xb0 \xd0\xbd\xd0\xb5\xd1\x89\xd0\xbe...']
(...)
pythonSay4383926
Before say1251 = ...
BUSY... Communicating with Toshko...
:::f,14,0.5,7,2.0;$$$pythonSay4383926;

wmcopydataB(say)
before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [58, 58, 58, 102, 44, 49, 52, 44, 48, 46, 53, 44, 55, 44, 50, 46, 48,
59, 36, 36, 36, 112, 121, 116, 104, 111, 110, 83, 97, 121, 52, 51, 56, 51, 57,
50, 54, 59, 10, 200, 241, 234, 224, 236, 32, 228, 224, 32, 234, 224, 230, 224, 3
2, 237, 229, 249, 238, 46, 46, 46])
60
25105400
pythonSay4383926
(...)\mp3\pythonSay4383926.wav.mp3
OK! READY for new requests

.................

"C\Python27\python.exe" wmcopydata.py

wmcopydataB(say)
before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [36, 36, 36, 107, 117, 114, 49, 46, 109, 112, 51])
11
20800528
wmcopydataB(say)
before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [36, 36, 36, 197, 245, 238, 238, 238, 46, 46, 46, 32, 195, 250, 231,
32, 227, 238, 235, 255, 236, 32, 46, 46, 46, 49, 50, 51, 52, 53, 54, 55, 56, 57
)
34
20800688



Monday, February 5, 2018

Sensori-Motor Grounding of Surface Properties - an Exercise Trace of the Thoughts and Tree of Questions by Todor from 2011 in AGI List

After the selected emails from 2012 where I discussed generalization and the real meaning of "invariance" in 3D, I'm sharing another selected letter from the AGI list on general intelligence and sensori-motor grounding and its connection to symbolic/abstract representation. The "glue" between this and a real system is the specific processing environment, which applies the sensori-motor mapping, gradually traverses the possible actions ("affordances") within a specific input space (a universe, an environment), and maps them to the sensory data hierarchically with incremental complexity. It should gradually increase, for example, the number and the range of the parameters defining a particular current "highest complexity" conception - e.g. involving more modalities of input and output (action), a wider range in space and time - which in the example below are eventually represented as words ("a house", "a surface", ...).

The system's "motor" should be "ignited" to explore and the exploration should generate the higher level representations out of the simple sensory inputs like the ones explained below.

Note that the learning - the inductive, discovery - process starts from the end of this "trace of the thoughts". The reasoning was to show that it is possible and even easy/obvious to introspectively trace it from the general conceptions down to the specific and how "low complexity" these abstractions actually were.


See also:

Todor's: "Chairs, Buildings, Caricatures, 3D-Reconstruction..." and that semantic analysis exercise back from March 2004 Semantic analysis ...

Kazachenko's "Cognitive Algorithm" which claims to incrementally add "new variables".


from Todor Arnaudov twenkid @ ...
to agi@listbox.com
date Sun, Sep 11, 2011 at 3:12 AM
subject Re: [agi] The NLP Researchers cannot understand language. Computers could. Speech recognition plateau, or What's wrong with Natural Language Processing? Part 3
mailed-by gmail.com (....)

IMHO the sensorimotor approach has definitely more *general* input and output - just "raw" numbers in a coordinate system, with minimally overloaded semantics.

[Compared to "purely symbolic". Note that sensori-motor doesn't exclude symbolic - this is where it converges after building a sufficiently high or long hierarchy (inter-modal, many processing stages, building an explicit discrete dictionary of patterns/symbols) and when it aims at "generality", "compression" or partially arbitrary point of view of the evaluator who's deciding whether something is "symbolic". The way sensori-motor data is processed may also be represented "symbolically", "mathematically" (all code in a computer is supposed to be). The "not symbolic" sense is that it's aimed to be capable of mapping the structure of the emerging conceptions, correlations, "patterns" ("symbols"...) to a chain or a network, or a system of discoveries and correlations within a spatio-temporal domain in the "general" "raw sensory input" from the environment, or one that can be mapped to such input. On the other hand the "purely symbolic" combinations have no explicit connection to that kind of "most general" "raw input". Note, 7.1.2018]
That way the system has a higher resolution of perception and causality/control (my terms), which is how closely the output/input can be recovered down to the lowest laws of physics/properties of the environment where the system acts/interacts.

I think the "fluidity"/"smoothness" that Mike talks about is related to the gradual steps in resolution of generalization and detail of patterns, which is possible if you start with the highest available sensory resolution and gradually abstract details, while keeping relevant details at relevant levels of abstraction and using them on demand when you need them to maximize match/precision/generality. When a system starts from too high an abstraction, most of the details are gone.

[However, that's not that bad by default, because what remains is the most relevant - the spaces of the affordances are minimal and easily searchable in full, even introspectively. See below. Note, 5.2.2018]

BTW, I did this little exercise to trace what some concepts really mean:


[Starting randomly from some perception or ideas, thoughts and then the "Trace of the thoughts" process should converge down to the basic sensori-motor records and interactions from which the linguistic and abstract concepts have emerged and how.]...

What is a house?
- has (door, windows, chairs, ... ) /

What is a door?

has(...)... //I am lazy here, skip to a few lines below...

is (wood, metal, ...)

What is wood?

is(material, ...)

What is material?

What is surface?

What are material properties?

-- Visual, tactile; weight (force); size (visual, tactile-temporal, ...)

has(surface, ...)

is(smooth, rough, sharp; polished...)


What are surface properties? //An exercise on the fly

- Tactile sensory input records (not generalized, exact records)

- Visual sensory input -- texture, reflection (that's more generalized, complex transformations from environmental images)

- Visual sensory input in time -- water spilled on the surface is being absorbed (visual changes), or it forms pools

-- How absorption is learned at first?

---- Records of inputs, when water [was] spilled, the appearance of the surface changes, color gets darker (e.g. wool)

-- How not absorbing surface is discovered?

---- Records of inputs, when water spilled, appearance of the surface changes; pool forms
------  [pools are] changes in brightness, new edges in the visual data [which are] marking the end of the pools

-- How is [it] learnt that the edges of the pools are edges of water?
---- [By] Records of tactile inputs -- sliding a finger on the surface until it touches the edge, the finger gets wet

-- What is "wet"?

---- Tactile/Thermal/Proprioception/Temporal records of sensory input:

---- changing coordinates of the finger

---- finger was "dry"

---- when touching the edge:

------ decrease in temperature of the finger [is] detected

-- when [the "wet"] finger touches another finger, ... or other location, thermal sensor indicates decrease of other's temperature as well

-- when [the] finger slides on the surface when wet, it feels "smoother" than when "dry"

[What is "smoother"?]

-- "Smoother" is - Temporal (speed), proprioception + others

-- The same force applied yields to faster spatial speed [that maps to "lower friction"]

[What is "faster [higher] speed"?]

-- "Faster"[higher] speed is:

---- [When] The ratio of spatial differences between two series of two adjacent samples is in favor of the faster.

-- The friction is lower than before touching the edge of the pool.

[What is "friction"?]

-- Friction is:

-- Intensity of pressure of receptor, located in the finger.

-- Compared to the pressure recorded from the other fingers, the finger which is sliding measures higher pressure.

...
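The whole "trace of the thoughts" above can be pictured as a tiny graph of has(...)/is(...) relations, walked from an abstract concept down toward the "raw" sensori-motor records it rests on. The following is only a toy illustration of that idea (not the author's system); all entries are hypothetical placeholders.

```python
# Toy knowledge trace: abstract concepts defined by has/is relations,
# bottoming out in (placeholder) low-level sensory record types.
TRACE = {
    "house":    {"has": ["door", "windows"]},
    "door":     {"is": ["wood", "metal"]},
    "wood":     {"is": ["material"]},
    "material": {"has": ["surface"]},
    "surface":  {"is": ["tactile-records", "visual-texture"]},  # sensory grounding
}

def ground(concept, graph=TRACE):
    """Depth-first walk until only undefined concepts remain: these stand
    for the low-level sensori-motor records the abstraction is built from."""
    if concept not in graph:
        return [concept]
    leaves = []
    for related in graph[concept].values():
        for c in related:
            leaves.extend(ground(c, graph))
    return leaves
```

Here ground("house") unwinds the definitions the same way the exercise does by hand, down to the leaf entries; the learning process described above runs in the opposite direction, from the records up.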

So yes, it seems we can define it with NL [Natural Language], but eventually it all goes back down to the parameters of the receptors -- which is how it got up there in the first place. Also, we do understand NL definitions because we *got* general intelligence.


An AGI baby doesn't need to have "wet" defined as "lower temperature" etc. -- it just touches, slides a finger etc., keeps the record, and generalizes on it.

Then it associates it with the word "wet" which "adults" w (....)

--- THE END ---

Thursday, January 4, 2018

The lack of operational hierarchical structure in Deep Learning neural networks (ANNs)


A survey paper on the issues of Deep Learning by Gary Marcus: https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf

The author has a valuable mix of expertise both in the ANN development and in linguistics, developmental psychology, cognitive psychology.

There are good points on the lack of real hierarchical structure in (current/regular) DL/ANNs, stressing that they are actually "flat", even though there are "layers", which gives a confusing impression.

"To a linguist like Noam Chomsky, the troubles Jia and Liang documented would be
unsurprising. Fundamentally, most current deep-learning based language models
represent sentences as mere sequences of words, whereas Chomsky has long argued that
language has a hierarchical structure, in which larger structures are recursively
constructed out of smaller components"

G.Marcus p.9
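The contrast in the quote can be pictured with a toy example of my own (illustrative only, not from the paper): a sentence as a mere flat sequence of words, versus a recursive constituent structure where larger units are built out of smaller ones.

```python
# Flat representation: just a sequence of tokens, no internal structure.
flat = ["the", "cat", "the", "dog", "chased", "ran"]

# Hierarchical representation: nested constituents (labels are informal:
# S = sentence, NP/VP = noun/verb phrase, RC = embedded relative clause).
tree = ("S",
        ("NP", "the", "cat",
            ("RC", "the", "dog", "chased")),
        ("VP", "ran"))

def depth(node):
    """Nesting depth of a constituent tree; a bare token has depth 0."""
    if not isinstance(node, tuple):
        return 0
    return 1 + max(depth(child) for child in node[1:])
```

The recursive structure makes the embedding ("the dog chased" inside the subject) explicit, while the flat sequence gives a model no handle on it.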

See also Jeff Hawkins' point since 2004 ("On Intelligence"/Numenta), Dileep George's Vicarious, Boris Kazachenko's "Cognitive Algorithm", the old Hierarchical Markov Models; probably many other researchers, and also myself since my early 2000s writings, where even as a teenager I realized that the human general intelligence faculty is a hierarchical simulator and predictor of virtual universes.

The ANNs (without being put in another system/organization) lack operational structure.

A good survey and discussion of areas where DL fails, with emphasis on the lack of transfer of learning, i.e. that the networks are not general intelligence and don't "understand" the concepts (the "overattribution" of DeepMind's Atari player's discovery of "tunnels", see p.9).



* However, I don't like the pretentiousness in some parts of the article, discussing trivialities and proposing alternatives with a 15+ year(?) delay, with a pinch of academic glamour or so.

E.g. unsupervised learning (not that common boring classification of fixed images) and self-organization; incremental complexity/"self-improvement" - "Seed AI"; hierarchical operational structure; "symbol grounding" - the emergence of generalizations/"symbols", "abstract thought" from the sensory processing; different levels of abstraction - including "symbolic"; causality understanding (prediction, simulation of "virtual universes"); general/universal game playing; application of general educational tests/measures... (Since AGI is about that; the term "human level (general) (artificial) intelligence" was used in the past) etc. (Not just "pattern matching" on synthetic static tests.)

The above is what AI was always supposed to be about - AGI - at least as some talented teenagers and others realized and shouted it to the world in the early 2000s, dismissing the poisoned term "AI". Everything was called "AI" back then - somewhat similar today: AI is ubiquitous, yet not general and lacking a personal wholeness.

These suggestions and conclusions would be informative for hard-core AI-ers, though (the programmer-mathematician type); it seems the "general" part still has a way to go as a concept for the "mainstream" developer community with its "Narrow AI" attitudes**.

...

** Narrow AI - another forgotten term, which on second thought is still relevant. Current DL is in fact "narrow AI": each network is trained for a specific class of problems ("classification") and, as well explained in the paper, can't generalize concepts and transfer the knowledge to different domains.

*** I "don't like" my own pretentiousness, too, but I consider it funny and ironic, rather than serious like in the paper. :P

**** Thanks to Preslav Nakov for sharing the link!

...

Compare the educational test proposals with one of the first articles in this blog, a decade ago:

Wednesday, November 14, 2007
Faults in Turing Test and Lovelace Test. Introduction of Educational Test.
https://artificial-mind.blogspot.bg/2007/11/faults-in-turing-test-and-lovelace-test.html

I didn't explicitly define the exact kinds of tests, because they were already given in detail in the appropriate textbooks, regarding the set of expected skills and knowledge for the respective age or educational level.

...

The article reminds me of the series of articles "What's wrong with Natural language processing?", starting from the year 2009:

https://artificial-mind.blogspot.bg/search?q=what%27s+wrong+with

(...)

Vicarious' demo video summarizing ANN reinforcement learning faults, and their Schema Networks: