Sunday, October 28, 2018

ДЗУ Стара Загора - 1985 и 1988 г. - DZU Stara Zagora Documentaries from 1985 and 1988 - Disk Drives and Robots

Bulgarian version below.

Bulgarian documentaries from 1985 showing the DZU Stara Zagora factory, which produced hard disk drives, floppy drives, industrial robots and other high-tech electronic and electro-mechanical equipment.

The video below is in English, the rest are in Bulgarian.

DZU was driven to bankruptcy after Bulgaria was "liberated" from socialism in 1989. Along with socialism, the country was also "liberated" from its highly developed high-tech industry and its huge technological potential: it had been the biggest producer of computer electronics in the Eastern Bloc and had plenty of highly qualified staff.

In one of the 1985 films they mention that 95% of the production was for export; in the 1988 film they enumerate 18 countries on 4 continents, including Eastern European ones plus Finland, Austria, Greece, West Germany, Switzerland, Italy, France, Spain, the Netherlands, Nigeria, China, Brazil, India and Iran.

Note that most of the staff in the clean-rooms and other facilities shown in the videos were women.


Treasure films, showing the production at DZU Stara Zagora in 1985 and in 1988, one of the most powerful enterprises (economic associations) of Bulgaria's destroyed high-tech industry.

Incidentally, I remember a lecture by a sociologist of technology, a lecturer at Plovdiv University, at the "Neshtoto" club in Plovdiv, where he mentioned that the CD production technology at DZU was their own, developed in 1987 (if I'm not mistaken). In a recent broadcast, another sociologist and lecturer at Plovdiv University recounted that, after a visit to South Korea, when he asked whether they knew about Bulgaria, they replied that they knew everything about us and taught Bulgaria's decline as a negative example: what should not be done, and how a developed country can sink back into the swamp.

The 1988 film shows part of the research laboratory, where they measure "distances between two atoms" with electron microscopes, etc.

From the 18th minute on, it mentions that the production is exported to 18 countries on 4 continents, with trade representations and service centers.

Besides the socialist CMEA countries and Cuba, also to:

the FRG (West Germany)

In the mid-to-late 1990s DZU was still operating - I have, somewhere around the house, a brochure from the Plovdiv Fair with portable external hard disks - but it was already heading toward its sunset. I think they were producing compact discs.

Later it was bought by the Hungarian company "Videoton", and, as a commenter under the videos writes, the current production is at a much lower technological level.

Thanks to X for the links and to the "Pod lipite" channel for uploading the clips!

Thursday, October 4, 2018

Numpy "fancy indexing" and scan_P_ debug - discussion on the development and debugging of CogAlg from September 2018

Artificial General Intelligence (AGI) development, debugging, tracking patterns and bugs in Python, trees, tree traversal, nesting, PyCharm, OpenCV, NumPy, a prize for contributions, computer vision.

Numpy "fancy indexing", conditional indices, iterators #8
Twenkid opened this issue on Sep 2 · 55 comments
Scan_P debug #10
Twenkid opened this issue 12 days ago · 56 comments
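Issue #8 concerns NumPy "fancy indexing" and conditional indices; for readers outside the project, here is a minimal self-contained illustration of the mechanisms in question (not CogAlg code):

```python
import numpy as np

a = np.array([3, 1, 4, 1, 5, 9, 2, 6])

# Integer ("fancy") indexing: select elements by an array of positions
idx = np.array([0, 2, 4])
print(a[idx])              # [3 4 5]

# Boolean (conditional) indexing: select elements where a condition holds
mask = a > 3
print(a[mask])             # [4 5 9 6]

# np.where turns the same condition into integer indices
print(np.where(a > 3)[0])  # [2 4 5 7]
```

Unlike slicing, both forms return copies, which matters when debugging in-place updates.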


Saturday, September 8, 2018

Montreal.AI - great source for researchers-oriented news and papers in AI, ANN, Deep Learning

That's the best I've found so far:

The amount of publications and progress is amazing. My predictions and "complaints" about the exhaustion of art, and my claims that art and practically everything would soon be generated automatically and thus become less meaningful and valuable, are already coming true in practice (not just in theory), at a shocking speed, with DNNs and GANs.

(Which in their specific implementation and brute force search may not be my choice, though.)

Overall, "the end is near", and the latecomers will find only bones on the table...

So, guys, including myself, work harder and get the best computers and colleagues you can!

Tuesday, July 31, 2018

Encyclopedia of Human-Computer Interaction - Affordances (?В М д с П)

I'd recommend this valuable resource, compiled by a number of researchers in the field of HCI.

It's good food for thought. The writing may sound too academic; however, even if one gets bored or tired by that style, the chapter titles themselves and their sequence, the topics, the pictures and tables, and the historical exploration of the subjects are suggestive on their own.

Part of my way of digging into and reflecting on AGI includes a sort of HCI and "design" way of thinking, for various reasons: for example, because that's how intelligence is manifested, and also how it can be monitored and analyzed. It's also connected to the code-synthesis line, see a recent post.

One of the concepts that immediately connects my AGI approach with HCI is "affordances", attributed to the American psychologist James Gibson (1977, 1979) and presented in the late chapter 44 of the book.

In my path of thought and study, the concept emerged to me as "what can/could be done" (with a few specifications: 1) what the agent/actor/the will could do itself, and becomes aware of as possibilities for action; 2) what could be done in this environment by any possible agent, at maximum resolution of perception and causation), or in my own notation: (?В М д с П, ?В М д П).

This is kind of "obvious", but when coining terms you can focus on them and make them explicit and distinct.

"?В М д с П" may be criticized for being too long: why not just "affordances" or "възможности" ("possibilities" in Bulgarian)?

Because it explicitly suggests other important concepts as distinct elements that can be expressed in executable way:

1. Search
2. Possibilities as a set of specific options
3. Will, actor, agent
4. An action, acting, change

Other concepts which I can point at a glance are the visualisations of structure and relations, the "bifocal display" (or multi-focal: different levels of abstraction, different range, different resolution ~ different hierarchical levels of representation or "views"), the way attention travels and how it's attracted and guided when operating an interface, the Gestalt principles. (...)

The body and the environment could be perceived as "interfaces" in switching contexts, different "applications" and the way they are approached may be generalized.

Saturday, July 28, 2018

Rising inequality and AI - a comment on something funny from notes from the AAAI 2018 conference

Seen in Montreal AI.

" His Take: When we reach the place where robots do takeover, what do we do? The concern: “those who own the robots rule the world”.

Traditionalist Response: You see AI robots in the headlines, but not in the productivity or job statistics! Same with computers. Productivity growth in the 2010s is lower than in the last five decades. E/Pop is high, unemployment is low.

Rising inequality began before AI as a result of
measured factors: fall of unions, trade immigration. Dave: Wasn’t clear if it was “fall of trade, fall of immigration”, or “trade, immigration, and fall of unions” (my guess is the former)

But: It’s actually really hard to measure productivity. The nature of productivity changes. If workers are now working more hours and taking longer commutes, it’s different from walking into a building getting clocked and walking out.

(Bold: T.A.)
Q: Why should this time be different?
• Past fears that automation destroys jobs fizzled. FDR blamed the Great Depression joblessness
on failure to “employ the surplus of our labor which the efficiency of our industrial
processes has created”
. US Commission on Automation, Rifkin’s End of Work (1995).

"Those who own the robots..." - has some options in the cited paper.

B) What a way to explain the failure of willingness to distribute the share, the consumption or to *create* jobs by changing the rules and re-thinking the way the profit is distributed.

(Back in the Great Depression era there was a famous US politician, a competitor of Roosevelt for the presidential election of 1932(?), who had "wrong" ideas and rising popularity and was murdered by the "forces of nature", as one could guess.)

A) That sounds like an explanation for children to me. The neoliberal dogma of "trade": "trade" blah-blah, "free trade", "the market" deciding everything. "Trade" in the abstract makes no sense, though.

What about the neoconservative-neoliberal political movement of the 70s, the oil crisis, the "crisis of democracy" of the late 60s-70s, Margaret Thatcher in the UK and Reagan in the USA?

The fall (destruction) of the USSR and the Eastern Bloc removed one of the pressures on the USA/Western European systems regarding laborers.

What about the transfer of production lines to East and South East Asia, and, to a much lesser extent, to Central and Eastern Europe, which were "too expensive" for the investors?

As for immigration: it is sustained by opening the borders to more and cheaper workers (for higher profit), workers who are willing to work for lower wages. The countries' governments are supposed to decide and to help or prevent this; it's not a "natural disaster", as it is suggested to children.

In "democratic" countries those governments are also supposed to ask their citizens - I doubt Germans would have agreed with all the immigration they have received from the 40s up to the latest decisions of their long-lasting "democratic dictatress".

The immigration from Eastern Europe to the West and the USA came: 1) because of the opened borders (in favor of the business in those countries), and 2) largely because of the quickly destroyed industries after the "liberation from socialism"; see, for example, what Bulgaria's economy turned into, from the biggest producer of computers and electronics in the Eastern Bloc and a huge producer of agricultural goods.

(Note that socialism is known as "communism" in Western Europe and the USA (the "imperialist-capitalist countries"), although the rule was never officially called "communism" by the "communists" themselves, except in the names of the parties.)

Besides the destroyed industry, some of these countries' national and social structures were smashed by the neoliberal "free" globalized media and by political agents/non-governmental organizations applying "ideological diversion".

That has been erasing the national awareness and sense of belonging of the young people; they feel less attached to their fatherland.


Friday, June 29, 2018

Program Synthesis, Self-Programming, DeepCode - CodeGen - Synthesis of Everything - Казбород

Bulgarian research group in Code Synthesis in Switzerland

Do you know about a trendy research group in Code Synthesis based in Zurich, Switzerland? Its leaders are several Bulgarian researchers from ETH, with a spin-off start-up called "DeepCode". They use "Big Code", such as GitHub, and neural networks, combined, however, with other methods.

A recent article, an interview with Martin Vechev, one of the leaders of DeepCode

(Mein Deutsch ist nicht so gut, ebenfalls, aber... Google Translate from German to English is good enough. :)  )

Older Notes from 5.2.2018 :

A recent university lecture course on Reliable and Interpretable Artificial Intelligence... (Fall 2017) ... taught at ETH Zurich by several fellow Bulgarian researchers:

Deep Code, Code Synthesis, AI

Thanks to Ivan Dzheferov for the links!


Todor and Code Synthesis

For the record, I've been in the Code Synthesis domain too, but it "doesn't count yet" publicly, because it was not done in an academic environment and style; it was part of my general AGI notes and research activities, I haven't published it yet, and it's not yet complete in practical terms.

However, it aims at "AGI-complete" code synthesis, integrated with the "general AGI", language and vision. So far the ideas were not based on DNNs and a millions-of-lines-of-code style; they were more "conceptual", aiming at hitting a lot of targets with a few bullets: for example, by writing a little code and letting it find and write the rest itself.

However, I've switched focus. A friend and developer, Ivan, notified me about that research group some time ago, because he knew my "radical" claims, going back maybe 4-5 years*, that software development had no future as a domain and profession.

Software should be developed (generated) automatically, and certainly not the way it was done either then or now.

It was quite a claim to express in front of developers, so I avoided doing it in public :D, with a few exceptions, since I expected to be ridiculed by people who had no clue. They believed that "AI"* was "science fiction", that computers "can't do that". "AI" - because even today "AGI" is not a popular term.
In an interview with Martin Vechev for a Bulgarian media outlet, he mentions AGI at the end as a distant goal, with his Bulgarian translation ("Всеобщ изкуствен интелект"; my term was "Универсален изкуствен разум", among other variants, as well as simply "Изкуствен разум" ("Artificial Mind"), to distinguish it from the corrupted term "интелект" ("intellect")).

I'd better focus back. :))

[Article trimmed ... To be continued...]

Wednesday, June 27, 2018

"Delayed gratification" is an ILL-Defined Concept

The article claims that today's children have shown more self-control.

I question the abstract settings of that kind of experiments, the generality and the direction of the conclusions.


The setting is ill-defined, and so is the concept of "delayed gratification" itself, especially for little children. The conclusions are too general. The experimenters assume that the children believe that two candies later are "a bigger gratification" than one now. Why should a child believe that? Why shouldn't a child be satisfied with just ONE now and then go do something else, seek another, different, more meaningful and interesting gratification, such as play? If she's satisfied anyway, why should she wait?

"A higher salary is better" - is it, and at what other cost? Is money (or the number of candies) the only unquestionable measure of "gratification"?

What if the children didn't understand the question the way the experimenter defined it? What if they waited longer because they were more suggestible and obedient, doing what they were told? (It's true that "schooling" from an earlier age contributes to development in that direction.)

...Or because they have other rewards, such as "video drugs", and care less about that candy. Or because they could get a candy anyway afterwards, so they are not attracted.

Also, are those two candies (eaten at once) a bigger gratification? In practice they would be eaten in about the same time - do the little children count that? (If they saved one for another day, that would be a "delayed gratification".)

Do the children understand "more" the same way? Do they believe that the waiting costs less? And when they wait, do they wait because they prefer to please the authority figure who gives them the task?

One thing the test measures is some "patience", assuming that it is, by definition, suffered for "gratification", namely for the eating of the candy itself.

The article mentions "not on medication for ADHD" and "attention", but I think that's ill-defined too, because children who don't want to wait for *a candy* may well wait for something else *that they do care about* more and that has "value" for them. I.e., I don't think the conclusions are transferable by default, *especially* since the children are young and probably do not always generalize themselves.

"Delaying gratification" in a setting of a subjectively accepted higher reward could be interpreted as more "GREED", and, in the case of little, few-year-old children, as a higher susceptibility to pleasing the authority figure, or to answering what the child assumed she was expected to say.

Also, I suspect that some of the children did not understand all of the conditions of what they were asked to do.


It is true that waiting, as patience and "sustained focus", is correlated with "higher IQ" and other test results in *some tests* - studying and "success" require patience.

It is true also, that pleasing the authority figures in human societies usually leads to "success" in the measures and values, defined by those authority figures.

However, patience and "delayed gratification" are also correlated with less ability to contradict the orders of authority figures, which is correlated with less inclination toward critical thinking and creativity under authority pressure.

The rewards for the "delayed gratification" children are, to a bigger extent, defined by their authorities; these children may have accepted and adopted more deeply the values of their "experimenters" and are "delaying gratification" while questioning less the truth of the values they hold.

Similarly, some of the children who, according to the experimenters, "lacked self-control" (by the experimenters' definition) may actually have *REJECTED* or ignored the external control imposed by the experimenter/teacher, and thus did what they wanted instead of what they were supposed to do according to the experimenter's values, "gratification criteria", etc.

(I may have encountered similar thoughts in the past).

Wednesday, June 6, 2018

Кръг Artificial Mind | Circle Artificial Mind - AGI, machine learning, film and game making, events, conferences, research

An interview with me about my new project:

Focus: artificial intelligence, Artificial General Intelligence (AGI), machine learning, film and game making; sports, partying and more.

A continuation of the experience with the "mini-conference" I organized in 2012 in Plovdiv, and of the "Дружество Разум" ("Mind Society") from my teenage years in the early 2000s, plus film, social and other things from "my spirit".

The story is dramatic and includes the disbanded hackerspace and coworking association "Hackafe" (Hackafe Plovdiv). Some of my other recent creative activities are also mentioned - read the interview.

"Action heroes" are sought as partners. Perhaps in the autumn, if I find enough partners, we will organize a second mini-conference. So far, one person from the old crowd has approved the idea.


R: (...) Could you tell us more about the "conference"?

T: Of course, the word "conference" and its pretentious name were used with a sense of humor. The gathering itself went with a lot of laughter, as you can tell from the photos and my "Hawaiian shirt", but the participants were serious: Svetlin and Daniel are now PhD students in robotics and artificial intelligence in Edinburgh; Orlin became a well-known robotics engineer. We had the moral support of Petar Kormushev, then head of a robotics research group at the Italian research institute in Genoa and, the following year, winner of the "John Atanasoff" award. Also there were yours truly (the author of the first two university courses on AGI in the world), one of my best students from the second course, and two other guests. (...)

Wednesday, May 23, 2018

Escape from the Brave Video World - an article by Tosh in the "Пловдивски университет" newspaper

It came out in issue 3-4 at the end of April. Thanks to Tilyo Tilev!

My article is on page 22.

"When I was a kid, the grown-ups scared us that we would ruin our eyes from watching TV. I was startled, but I couldn't stop watching, so I tried at least to reduce the harm by blinking and keeping my eyelids closed for longer. Soon, though, I was staring normally again, because it seemed they were pulling our leg..."

An English title would be: "Escape from the brave video world"

Saturday, February 24, 2018

MIT creates a course in AGI - eight years after Todor Arnaudov at Plovdiv University

Well, good job, MIT - just 8 years after my first course in AGI at Plovdiv University, and 7 after the second. I'd like to thank my Alma mater and prof. Manev and prof. Mekerov, too. See the syllabus of the course in Universal Artificial Mind ("Универсален Изкуствен Разум", УИР) in Bulgarian on the university's page, and that of the following course. Lectures, materials (some in English):

MIT's course:

It's open, and it seems it started a month ago, in January 2018.

Watch the lectures in Lex Fridman's channel on Youtube.

Me with my best students at the end of the first course:

* The shirt with the caption "Science = Art + Fun" is from my participation at the science communication competition "FameLab Bulgaria" in the previous year ("Лаборатория за слава").

Right, I looked as if I were the youngest student... :D


(Edit: I noticed I had first written "seven years since the first", but the first one was in spring 2010, so it's almost 8 years.)

Friday, February 16, 2018

Тошко 2.070 - controlling the speech synthesizer via Python and PHP | Toshko 2.070 speech synthesizer

The new version of the synthesizer, for those wishing to experiment, listens for a command with an utterance, records the spoken output to an mp3 file and returns it to the client application, which decides whether and how to play it. The sample Python script prints the binary data to the console, while the PHP one writes it to disk and plays the file.

Download Toshko 2.070

Toshko 2 website


1. Toshko 2.070 (EXE) - a new executable file.

2. Python 2.7 scripts

Folder: /scripts

You may need to install a few libraries as well: Installing python modules


If the automatic path setup is fine and you are not also using other versions (e.g. Python 3.x), then typing "python" in the console, in the scripts folder, should invoke the interpreter:

> python

If that doesn't work and you don't feel like fiddling with the PATH/Environment settings, specify the full path:

('Thu Feb 15 17:57:20 2018', 'Toshko POST Server Starts - localhost:8079') - the POST server - sending a message with the utterance to the synthesizer

Open it and set the path to the folder where the synthesizer outputs the mp3 files.

For example, if you installed the program in C:\\program files\\toshko2\\,
configure it by completing the path:

mp3Path = "C:\\program files\\toshko2\\mp3\\"
(Important: use double backslashes.)
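For experimenters: based on the form fields visible in the debug log further below ('@say', '@speed', '@vowels', '@consonants') and the localhost:8079 address printed by the server, a minimal client request might look like the following Python 3 sketch. Note that the original sample scripts are Python 2.7, and the field names and port here are guessed from the log, not from official documentation:

```python
# Hypothetical client for the Toshko 2.070 POST server.
# The fields @say, @speed, @vowels, @consonants and the address
# localhost:8079 are inferred from the debug log; not an official API.
import urllib.parse
import urllib.request

data = urllib.parse.urlencode({
    "@say": "Искам да кажа нещо...",  # the utterance to synthesize
    "@speed": "1.0",
    "@vowels": "2.0",
    "@consonants": "0.5",
}).encode("utf-8")

req = urllib.request.Request("http://localhost:8079/", data=data)
# Uncomment when the synthesizer is running; the response body
# would then be the synthesized speech (mp3 bytes):
# audio = urllib.request.urlopen(req).read()
```

The PHP variant would analogously write the returned bytes to a file and play it.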

3. PHP scripts
Folder: /scripts


You need a running PHP server, e.g. via WAMP.

Put the files in the appropriate server folder, e.g. C:\WAMP\WWW\WEB

After that, the scripts are invoked through a browser:


I did these tests about 2 years ago, but I didn't publish them then, because in this form they require technical fine-tuning, and because it is not yet the way it should be. For now, only text is sent; the settings are made only from the application.

The desirable state is for the remote control to command the vocal apparatus in every detail, so that the text processing (stress, pauses, phrases, analysis and generation of intonation contours, etc.) can be moved into a form that is easier to modify, for example in Python.

Unfortunately, I didn't feel like continuing it back then: "some day".

 called do_POST?
tosh-PC - - [15/Feb/2018 17:24:17] "POST / HTTP/1.1" 200 -
{'@consonants': ['0.5'], '@speed': ['1.0'], '@say': ['\xd0\x98\xd1\x81\xd0\xba\x
d0\xb0\xd0\xbc \xd0\xb4\xd0\xb0 \xd0\xba\xd0\xb0\xd0\xb6\xd0\xb0 \xd0\xbd\xd0\xb
5\xd1\x89\xd0\xbe...'], '@vowels': ['2.0']}
['\xd0\x98\xd1\x81\xd0\xba\xd0\xb0\xd0\xbc \xd0\xb4\xd0\xb0 \xd0\xba\xd0\xb0\xd0
\xb6\xd0\xb0 \xd0\xbd\xd0\xb5\xd1\x89\xd0\xbe...']
Before say1251 = ...
BUSY... Communicating with Toshko...

before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [58, 58, 58, 102, 44, 49, 52, 44, 48, 46, 53, 44, 55, 44, 50, 46, 48,
59, 36, 36, 36, 112, 121, 116, 104, 111, 110, 83, 97, 121, 52, 51, 56, 51, 57,
50, 54, 59, 10, 200, 241, 234, 224, 236, 32, 228, 224, 32, 234, 224, 230, 224, 3
2, 237, 229, 249, 238, 46, 46, 46])
OK! READY for new requests



before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [36, 36, 36, 107, 117, 114, 49, 46, 109, 112, 51])
before: char_buffer = array.array(, binascii.a2b_qp(say))
before: char_buffer
array('B', [36, 36, 36, 197, 245, 238, 238, 238, 46, 46, 46, 32, 195, 250, 231,
32, 227, 238, 235, 255, 236, 32, 46, 46, 46, 49, 50, 51, 52, 53, 54, 55, 56, 57
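The truncated `array.array(, binascii.a2b_qp(say))` lines in the log above are missing their type code when printed; judging by the `array('B', ...)` output, the underlying call presumably looked like this (a guess at the original code, shown in Python 3 form):

```python
import array
import binascii

# One of the payloads visible in the log above: "$$$kur1.mp3"
say = b"$$$kur1.mp3"

# 'B' = unsigned bytes; a2b_qp decodes quoted-printable input
# (plain ASCII passes through unchanged)
char_buffer = array.array('B', binascii.a2b_qp(say))
print(char_buffer)
# array('B', [36, 36, 36, 107, 117, 114, 49, 46, 109, 112, 51])
```

The printed byte values match the second `char_buffer` dump in the log.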

Monday, February 5, 2018

Sensori-Motor Grounding of Surface Properties - an Exercise Trace of the Thoughts and Tree of Questions by Todor from 2011 in AGI List

After the selected emails from 2012, where I discussed generalization and the real meaning of "invariance" in 3D, I'm sharing another selected letter from the AGI list on general intelligence, sensory-motor grounding and its connection to symbolic/abstract representation. The "glue" between this and a real system is the specific processing environment, which applies the sensori-motor mapping, gradually traverses the possible actions ("affordances") within a specific input space (a universe, an environment) and maps them to the sensory data hierarchically, with incremental complexity. It should gradually increase, for example, the number and the range (e.g. involving more modalities of input and output (action), a wider range in space and time) of the parameters defining a particular current "highest complexity" conception, which in the example below are eventually represented as words ("a house", "a surface", ...).

The system's "motor" should be "ignited" to explore and the exploration should generate the higher level representations out of the simple sensory inputs like the ones explained below.

Note that the learning - the inductive, discovery - process starts from the end of this "trace of the thoughts". The reasoning was to show that it is possible and even easy/obvious to introspectively trace it from the general conceptions down to the specific and how "low complexity" these abstractions actually were.

See also:

Todor's: "Chairs, Buildings, Caricatures, 3D-Reconstruction..." and that semantic analysis exercise back from March 2004 Semantic analysis ...

Kazachenko's "Cognitive Algorithm" which claims to incrementally add "new variables".

from Todor Arnaudov twenkid @ ...
date Sun, Sep 11, 2011 at 3:12 AM
subject Re: [agi] The NLP Researchers cannot understand language. Computers could. Speech recognition plateau, or What's wrong with Natural Language Processing? Part 3
mailed-by (....)

IMHO sensorimotor approach has definitely more *general* input and output - just "raw" numbers in a coordinate system, the minimum overloaded semantics.

[Compared to "purely symbolic". Note that sensori-motor doesn't exclude symbolic - this is where it converges after building a sufficiently high or long hierarchy (inter-modal, many processing stages, building an explicit discrete dictionary of patterns/symbols) and when it aims at "generality", "compression" or partially arbitrary point of view of the evaluator who's deciding whether something is "symbolic". The way sensori-motor data is processed may also be represented "symbolically", "mathematically" (all code in a computer is supposed to be). The "not symbolic" sense is that it's aimed to be capable of mapping the structure of the emerging conceptions, correlations, "patterns" ("symbols"...) to a chain or a network, or a system of discoveries and correlations within a spatio-temporal domain in the "general" "raw sensory input" from the environment, or one that can be mapped to such input. On the other hand the "purely symbolic" combinations have no explicit connection to that kind of "most general" "raw input". Note, 7.1.2018]
That way the system has higher resolution of perception and causality/control (my terms), which is how close the output/input can be recovered to the lowest laws of physics/properties of the environment where the system acts/interacts.

I think "fluidity"/"smoothness" that Mike talks about is related to the gradual steps in resolution of generalization and detail of patterns which is possible if your start with the highest available sensory resolution and gradually abstract details while keeping relevant details at relevant levels of abstraction, and using them on demand when you need them to maximize match/precision/generality. When system starts from too high an abstraction, most of the details are gone.

[However, that's not that bad by default, because what remains is the most relevant - the spaces of the affordances are minimal and easily searchable in full, even introspectively. See below. Note, 5.2.2018]

BTW, I did this little exercise to trace what really some concepts mean:

[Starting randomly from some perception or ideas, thoughts and then the "Trace of the thoughts" process should converge down to the basic sensori-motor records and interactions from which the linguistic and abstract concepts have emerged and how.]...

What is a house?
- has (door, windows, chairs, ... ) /

What is a door?

has(...)... //I am lazy here, skip to few lines below...

is (wood, metal, ...)

What is wood?

is(material, ...)

What is material?

What is surface?

What are material properties?

-- Visual, tactile; weight (force); size (visual, tactile-temporal, ...)

has(surface, ...)

is(smooth, rough, sharp; polished...)

What are surface properties? //An exercise on the fly

- Tactile sensory input records (not generalized, exact records)

- Visual sensory input -- texture, reflection (that's more generalized, complex transformations from environmental images)

- Visual sensory input in time -- water spilled on the surface is being absorbed (visual changes), or it forms pools

-- How absorption is learned at first?

---- Records of inputs, when water [was] spilled, the appearance of the surface changes, color gets darker (e.g. wool)

-- How not absorbing surface is discovered?

---- Records of inputs, when water spilled, appearance of the surface changes; pool forms
------  [pools are] changes in brightness, new edges in the visual data [which are] marking the end of the pools

-- How is [it] learnt that the edges of the pools are edges of water?
---- [By] Records of tactile inputs -- sliding a finger on the surface until it touches the edge, the finger gets wet

-- What is "wet"?

---- Tactile/Thermal/Proprioception/Temporal records of sensory input:

---- changing coordinates of the finger

---- finger was "dry"

---- when touching the edge:

------ decrease in temperature of the finger [is] detected

-- when [the "wet"] finger touches another finger, ... or other location, thermal sensor indicates decrease of other's temperature as well

-- when [the] finger slides on the surface when wet, it feels "smoother" than when "dry"

[What is "smoother"?]

-- "Smoother" is - Temporal (speed), proprioception + others

-- The same force applied yields to faster spatial speed [that maps to "lower friction"]

[What is "faster [higher] speed"?]

-- "Faster"[higher] speed is:

---- [When] The ratio of spatial differences between two series of two adjacent samples is in favor of the faster.

-- The friction is lower than before touching the edge of the pool.

[What is "friction"?]

-- Friction is:

-- Intensity of pressure of receptor, located in the finger.

Compared to the pressure recorded from other fingers, the finger which is being
 sliding measures higher pressure than the other fingers


So yes, it seems we can define it with NL [Natural Language], but eventually it all goes back down to parameters of the receptors -- which is how it got up first. Also we do understand NL definitions, because we *got* general intelligence.

An AGI baby doesn't need to have "wet" defined as "lower temperature" etc. -- it just touches, slides a finger etc., keeps the record, and generalizes on it.

Then it associates it with the word "wet" which "adults" w (....)

--- THE END ---
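The has()/is() question tree above can be sketched as a plain data structure. A toy Python illustration follows; the concept names and relations are taken from the trace, while the code itself is only a sketch, not part of any actual system:

```python
# Toy knowledge fragment mirroring the has()/is() trace above.
# Nested dicts stand in for hierarchical concept definitions that
# eventually bottom out in sensory records (here, bare strings).
concepts = {
    "house":    {"has": ["door", "windows", "chairs"]},
    "door":     {"is":  ["wood", "metal"]},
    "wood":     {"is":  ["material"]},
    "material": {"has": ["surface"]},
    "surface":  {"is":  ["smooth", "rough", "sharp"]},
}

def trace(concept, depth=0):
    """Recursively print the decomposition of a concept,
    mimicking the 'What is X?' questioning above."""
    print("  " * depth + concept)
    for relation, targets in concepts.get(concept, {}).items():
        for target in targets:
            trace(target, depth + 1)

trace("house")
```

A real system would, of course, learn such a structure from sensory records rather than have it hand-written.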

Thursday, January 4, 2018

The lack of operational hierarchical structure in the Deep Learning ANN neural networks

A survey paper on the issues of Deep Learning by Gary Marcus:

The author has a valuable mix of expertise both in the ANN development and in linguistics, developmental psychology, cognitive psychology.

There are good points on the lack of real hierarchical structure in (current/regular) DL/ANNs, emphasizing that they are actually "flat", even though they have "layers", which gives a confusing impression.

"To a linguist like Noam Chomsky, the troubles Jia and Liang documented would be
unsurprising. Fundamentally, most current deep-learning based language models
represent sentences as mere sequences of words, whereas Chomsky has long argued that
language has a hierarchical structure, in which larger structures are recursively
constructed out of smaller components"

G.Marcus p.9

See also Jeff Hawkins' point since 2004 ("On Intelligence"/Numenta), Dileep George's Vicarious, Boris Kazachenko's "Cognitive Algorithm", the old hierarchical Markov models, and probably many other researchers; also myself, since my early-2000s writings, where even as a teenager I realized that the human general-intelligence faculty is a hierarchical simulator and predictor of virtual universes.

The ANNs (without being put in another system/organization) lack operational structure.

Good survey and discussion of areas where DL fails, with emphasis on the lack of transfer learning, i.e. that the networks are not general intelligence and don't "understand" the concepts (the "overattribution" of DeepMind's Atari player discovering "tunnels", see p.9).

* However I don't like the pretentiousness in some parts of the article while discussing trivialities and proposing alternatives with 15+ years(?) delay with a pinch of academic glamour or so.

E.g. unsupervised learning (not that common, boring classification of fixed images) and self-organization; incremental complexity/"self-improvement" ("Seed AI"); hierarchical operational structure; "symbol grounding" - the emergence of generalizations/"symbols" and "abstract thought" from sensory processing; different levels of abstraction, including the "symbolic"; causality understanding (prediction, simulation of "virtual universes"); general/universal game playing; application of general educational tests/measures... (since AGI is about that; the term "human-level (general) (artificial) intelligence" was used in the past), etc. (Not just "pattern matching" on synthetic static tests.)

The above is what AI was always supposed to be about - AGI - at least as some talented teenagers and others realized and shouted to the world in the early 2000s, dismissing the poisoned term "AI". Everything was called "AI" back then; somewhat similarly today, AI is ubiquitous, yet not general and lacking a personal wholeness.

These suggestions and conclusions would be informative for hard-core AI-ers, though (the programmer-mathematician type); it seems the "general" part still has a way to go as a concept for the "mainstream" developer community, with its "Narrow AI" attitudes**.


** Narrow AI - another forgotten term, which on second thought is still relevant. Current DL is in fact "narrow AI": each network is trained for a specific class of problems ("classification") and, as well explained in the paper, can't generalize concepts or transfer knowledge to different domains.

*** I "don't like" my own pretentiousness, too, but I consider it funny and ironic, rather than serious like in the paper. :P

**** Thanks to Preslav Nakov for sharing the link!


Compare the educational test proposals with one of the first articles in this blog, a decade ago:

Wednesday, November 14, 2007
Faults in Turing Test and Lovelace Test. Introduction of Educational Test.

I didn't explicitly define the exact kinds of tests, because they were already given in detail in the appropriate textbooks covering the set of expected skills and knowledge for the respective age or educational level.


The article reminds me of the series of articles "What's wrong with Natural Language Processing?", starting from 2009:


Vicarious' demo video summarizing ANN reinforcement learning faults, and their Schema Networks: