Saturday, September 7, 2019


Tournament "Plovdiv" 24.8.2019 - Long Jump and Triple Jump with Alexandra Nacheva and Momchil Karailiev

A track and field athletics tournament in Plovdiv where I worked as a camera operator and editor. Perhaps the most interesting moments were two long jumps by Alexandra Nacheva, the junior world champion in the triple jump: one of 6.52-6.53 m (a foul) and another measured at 6.32 m, which also had potential for about 6.50 m.
Tags: Alexandra Nacheva, Momchil Karailiev, Georgi Tsonov, Marian Oprea, Kolokytha, Plovdiv, Bulgaria, athletics tournament, track and field, Triple Jump, Long Jump, Athletics, 24.8.2019

Friday, August 2, 2019

A Video Story About the World's First University Course in Artificial General Intelligence - Plovdiv 2010-2011



Screencast Video in English (slightly different, shorter content): https://artificial-mind.blogspot.com/2019/07/agi-course-plovdiv-2010-screencast.html

An engaging video story, in Bulgarian, about the pioneering role of the Plovdiv interdisciplinary university course and training program in Artificial General Intelligence (AGI), which preceded by eight years (!) the first course with a similar philosophy at MIT, the Massachusetts Institute of Technology in Boston, and by as many years Stanford's interdisciplinary institute - two of the leading universities in artificial intelligence. There are amusing coincidences between the author of the Plovdiv course - Todor - and his colleague Lex Fridman, author of the MIT course: pursuits, music, sport, humor. This clip is better produced than the English version, with higher-quality video and sound and with a translated (dubbed) excerpt of the intro of the colleague's course in Boston. See the table of contents in the channel description. Screencast video in English (slightly different content): https://artificial-mind.blogspot.com/2019/07/agi-course-plovdiv-2010-screencast.html

FMI Plovdiv - 2010 + 8 years: MIT, Boston, USA - 2018 (AGI course); Stanford, USA - 2018 (Institute for Human-Centered AI)

Tags: Plovdiv University, Artificial General Intelligence, Video, History, Fun, About Tosh, Academia, AGI, Lectures, Electric Guitar + university



Tuesday, July 23, 2019

Screencast Story About the World's First University Course in AGI in 2010 in Plovdiv - Artificial General Intelligence



See the Bulgarian version - a different, better version of the story.

A short story about the world's first interdisciplinary University course in Artificial General Intelligence (AGI), presented at Plovdiv University in 2010 and 2011.

See also the strange, funny personal coincidences between me, the author of the first course, and the author of MIT's course; they start around the 5th minute.

Saturday, July 13, 2019

BAIHUI.AI - New Bulgarian AI Start-up

In June a few colleagues founded an AI start-up with this daring Bulgarian name with a Chinese ring to it. :) So far they have shown that they managed to get a GPT-2 Transformer running, trained on Bulgarian text with 1.5 billion parameters - presumably via Amazon's AWS, as they are hardly rich enough to own the hardware required for such training.

They have also released a short animation that looks like StyleGAN (see www.thispersondoesntexist.com).

Both systems are open source; the first is from OpenAI, the second from NVidia.

https://www.linkedin.com/company/baihuiai

BAIHUI.AI is a Bulgarian AI start-up, founded in June 2019. So far they have demonstrated that they managed to train and run the big GPT-2 NLP model (1.5 billion parameters) on Bulgarian texts, and probably StyleGAN as well (so it seems): thispersondoesntexist.com.
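As a rough illustration of the workflow for running such a model (a minimal sketch using the Hugging Face transformers library; "gpt2-xl" is the public 1.5-billion-parameter English checkpoint, since BAIHUI's Bulgarian model has not been released):

    # Minimal sketch: load and sample from a GPT-2 checkpoint with the
    # Hugging Face transformers library. "gpt2-xl" is the public English
    # 1.5B-parameter model; this only illustrates the workflow, not BAIHUI's model.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
    model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

    input_ids = tokenizer.encode("Artificial general intelligence is",
                                 return_tensors="pt")
    output = model.generate(input_ids, max_length=40, do_sample=True, top_k=50)
    print(tokenizer.decode(output[0], skip_special_tokens=True))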

Sunday, July 7, 2019

MIT's Interdisciplinary Billion Dollar Computing College - 9-10 years after the interdisciplinary program of Todor at Plovdiv University Etc.

Comments regarding a recent talk from Lex Fridman's AI podcast with Jeff Hawkins from Numenta:

https://youtu.be/-EVqrDlAqYo
Conceptually, Hawkins' approach and ideas still seem to match many of the insights and directions of my "Theory of Universe and Mind" works from the early 2000s (published before "On Intelligence" in "Sacred Computer" - "Свещеният сметач") and afterwards.

Another shared point: not following mainstream research, which makes minimal changes that yield minimal progress on some benchmarks - considered good enough by the mainstream researchers, published in journals and conferences etc. Rather, radical jumps are needed, and recently "even the godfathers of the field agreed"...

The building of deep structures; the play of resolution of perception and control; coordinate spaces as a basis for AGI ("reference frames"); attention traversal across different scales ("time scales" - resolution of perception and causation within the time dimension); introspection as a legitimate method for AGI research (my "behavintrospective" studies); and the view that there should be no separate training and inference stages as in current NNs - it is supposed to be one continuous process. See CogAlg.

One difference, though: he dismisses interdisciplinary research as unhelpful (although I think they actually do such research). "Human-centered AI" is disliked because it suggests the study of emotions and other human traits which are not needed for the AI - "let's just study the brain", etc.

IMO interdisciplinary minds see and understand shortcuts more easily, while others could eventually find the same paths only by laborious digging and wandering through seas of empirical data and brute-force search.

https://youtu.be/-EVqrDlAqYo?t=5141

~ 1:25 h

"The new steps should be orthogonal ..."  (no little changes "1.1% progress" on standard benchmarks - see Todor's "What's wrong with Natural Language Processing" series)

1:25:41

The Billion dollar computing college of MIT, interdisciplinary, from this fall:
http://news.mit.edu/2018/mit-reshapes-itself-stephen-schwarzman-college-of-computing-1015

https://fortune.com/2018/10/16/mit-college-computing-artificial-intelligence-billion-dollars/


Well: 9-10 years after the course/research-direction program that I announced in late 2009 and presented in spring 2010 at Plovdiv University, with practically zero funding, besides the opportunity to present it at my University (thanks to Mancho Manev and Plovdiv University).






Saturday, July 6, 2019

Shape Bias in CNNs for Better Results, Given the Wrong Default Texture Bias

In the introduction of the paper below, the authors explain that it has been a common belief in the CNN community that ImageNet-trained neural networks develop a "shape bias" and store "shape representations". They propose the contrary view - that CNNs are texture-biased - and support it with experiments:

IMAGENET-TRAINED CNNS ARE BIASED TOWARDS TEXTURE; INCREASING SHAPE BIAS IMPROVES ACCURACY AND ROBUSTNESS

To me that texture bias has been obvious, and obviously wrong. CNNs recognize texture features and search for correlations between them; otherwise there wouldn't be adversarial hacks where changing a pixel ruins recognition, they wouldn't need to be trained on so many examples, and they would recognize wireframe drawings/sketches as humans do, etc.
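A hedged sketch of how such texture-vs-shape bias can be probed, in the spirit of the paper's cue-conflict experiments (a minimal sketch assuming torchvision's pretrained ResNet-50 and a hypothetical stylized image file, cat_shape_elephant_texture.png, whose shape says one class and whose texture says another):

    # Minimal cue-conflict probe: classify an image whose shape and texture
    # disagree and see which cue the CNN follows. The file name is hypothetical.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet50(pretrained=True).eval()
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    img = Image.open("cat_shape_elephant_texture.png").convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    print(logits.topk(5).indices[0].tolist())  # a texture-biased net tends to
                                               # rank the texture class higher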

The "right" recognition would be robust if the system could do 3D-structure-and-light reconstruction ("reverse graphics"), at best incrementally; see:


CapsNet, capsules, vision as 3D-reconstruction and re-rendering and mainstream approval of ideas and insights of Boris Kazachenko and Todor Arnaudov, Sunday, December 31, 2017



Colour Optical Illusions are the Effect of the 3D-Reconstruction and Compensation of the Light Source Coordinates and Light Intensity in an Assumed 2D Projection of a 3D Scene, 1.1.2012  

2012, discussions at AGI List:
AGI Digest: Chairs, Caricatures and Object Recognition as 3D Reconstruction






Developmental Approach to Machine Learning, Dec 2018
https://artificial-mind.blogspot.com/2018/12/developmental-approach-to-machine.html



News: Mathematics, Rendering, Art, Drawing, Painting, Visual, Generalizing, Music, Analyzing, Tuesday, September 25, 2012


[Topology, Vector Transformations, Adjacency/Connectedness...]


https://artificial-mind.blogspot.com/2012/09/news-mathematics-rendering-art-drawing.html


"...Vector transformations

In another "unpublished paper" from a few months ago, which may eventually turn into a digest one day (it's a published email discussion), I explained and shared some elegant fundamental AGI operations/generalizations which are based on simple visual 3D transformations.

"Everything" is a bunch of vector transformations, and the core of general intelligence consists of the simplest representations of those "visual" representations, which are really simple/basic/general.

And "visual" in human terms actually means just:

Something that encompasses features and operations in 1D, 2D, 3D and 4D (video) vector (Euclidean) spaces, where the vectors in these dimensions can have dimensionality usually up to 4 or 5, e.g. (Luma, R, G, B):

1D - luminance
2D - luminance + uniform 1D color space
3D/4D - luminance + split/"component" color space

+ Perspective projection, which is a vector transform that can be represented as a multiplication of matrices - that is, the initial sources of visual data are of higher dimensionality than the stored representation: 3D is projected into 2D (a drawback of the way of sensing).

Also, of course, there is topology: humans work fine with blended and deformed images - curved spaces and curves, not simple linear vectors. However, the topology is induced from the basic vector spaces; the simplest topological representation is just the adjacency of coordinates in a matrix.

The above may seem obvious, but the goal is namely to make things as explicit as possible...." 
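As a worked illustration of that point (a sketch of mine, not from the original article): perspective projection of a 3D point onto a 2D image plane, expressed as a matrix multiplication in homogeneous coordinates, with an assumed focal length f.

    # Minimal sketch: perspective projection as a matrix multiplication.
    # A homogeneous 3D point is mapped onto the image plane; dividing by
    # the last coordinate performs the perspective divide (3D -> 2D).
    import numpy as np

    f = 1.0  # assumed focal length
    P = np.array([[f, 0, 0, 0],
                  [0, f, 0, 0],
                  [0, 0, 1, 0]])  # 3x4 projection matrix

    point3d = np.array([2.0, 1.0, 4.0, 1.0])  # (x, y, z, 1)
    p = P @ point3d                           # [f*x, f*y, z]
    x, y = p[0] / p[2], p[1] / p[2]           # perspective divide
    print(x, y)  # (0.5, 0.25): farther points project closer to the center

Note how the projection discards one dimension: the same (x, y) can come from infinitely many 3D points along a ray - exactly the stated drawback of the way of sensing.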

+

Sunday, April 1, 2012


https://artificial-mind.blogspot.com/2012/04/jurgen-schmidhuber-talk-on-ted-creative.html
"Todor:  And it takes many months to get to 3D-vision and to increase resolution and develop 3D-reconstruction in the human brain. That adds ~86400 fold per day and 31,536,000 "cycles" per year.
What computing power is needed?

I don't think you need millions of the most powerful GPUs and CPUs at the moment to beat human vision, we'll beat it pretty soon, a lot of the higher level intelligence in my estimation is very low at its complexity (behavior, decision making, language at the grammar/vocabulary levels) and would need a tiny amount of MIPS, FLOPS and memory. It's the lowest levels which require vast computing power - 3D-reconstruction from 2D one or many static or motion camera sources, transformations, rotations, trajectories computations etc., and those problems are practically being solved and implemented...." 
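(For reference, the figures quoted above correspond to one "cycle" per second: 24 × 3600 = 86,400 seconds per day, and 365 × 86,400 = 31,536,000 seconds per year.)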

Wednesday, July 3, 2019

Cognitive Science's Failure to Become an Interdisciplinary Field - the Multi-Interdisciplinary Blindness

My discussion regarding the paper:

"Perspective | What happened to cognitive science?", 10 June 2019


https://www.nature.com/articles/s41562-019-0626-2?fbclid=IwAR2mMzO4qzIKINXT2BUcMZAS1ZI4PONd04SHQeF7FLgvH5VTc1pwR4DcLow

Available on Github: https://github.com/rdgao/WH2CogSci/blob/master/nunezetal_final.pdf?fbclid=IwAR17HIosUS-7EKdFT4a--SesVuwb3aPp-1a0yE4Rbk_q8io8w5S6nFVrWvY

https://www.facebook.com/groups/RealAGI/permalink/1230556097152988/

IMO a big share of the problem lies in the overall intellectual direction of researchers in academic circles (and in power-and-profit-driven societies - modern slavery). Knowledge and world views are narrow, specialization is promoted, and the ones who obey and execute the instructions of their superiors climb the ladder and become leaders, then do the same. The creative, wide-minded and really original ones are not the leaders of the research.*

That is related to multi-interdisciplinary blindness: insufficient working-memory capacity and faculties for understanding and representing inputs generally enough to encompass concepts from different domains and contexts and think about them together.

BK calls it, too simply, "depth of structure".



* In the past there were exceptions, such as Alan Kay.
** That survey paper reminds me of my series "What's wrong with Natural Language Processing" from some 10 years ago, because to me NLP/Computational Linguistics should have been part of AGI, not what they were.

*** Sorry for the poor formatting; I had to write this in an external editor etc. - this one is annoying - but never mind for now.

See elaborate related discussions:

#1

Circa 2009-2010 - series of 3 "perspective" articles



What's wrong with NLP, part I:

http://artificial-mind.blogspot.com/2009/02/whats-wrong-with-natural-language.html

Monday, March 23, 2009


http://artificial-mind.blogspot.com/2009/03/whats-wrong-with-natural-language.html



Note: nowadays NLP has impressive results in NLG (generation), BERT etc. with such "mindless" vector representations, using current machine-learning methods, convolutions, "transformers" etc. However, it probably more or less emulates virtual sensory-motor interactions - by traversing and comparing huge corpora: how different texts/segments (mappings of sensory records) map and relate to each other, and what is reasonable in what context. It is more advanced than the earlier simple frequency-based representations and inverse frequency - the frequency/probability of a token in the current document compared to its average across the other documents, etc.
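For contrast, here is a minimal sketch of that earlier kind of frequency-based representation (plain TF-IDF as commonly defined; the tiny corpus is made up for illustration):

    # Minimal TF-IDF sketch: a token's frequency in one document, weighted
    # down by how common the token is across all documents.
    import math

    docs = [["the", "cat", "sat"],
            ["the", "dog", "barked"],
            ["the", "cat", "meowed"]]

    def tf_idf(token, doc, docs):
        tf = doc.count(token) / len(doc)         # in-document frequency
        df = sum(1 for d in docs if token in d)  # document frequency
        idf = math.log(len(docs) / df)           # rarity across the corpus
        return tf * idf

    print(tf_idf("cat", docs[0], docs))  # ~0.135: informative token
    print(tf_idf("the", docs[0], docs))  # 0.0: appears in every document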

#2 Friday, January 1, 2010

I will Create a Thinking Machine that will Self-Improve 

An Interview with Todor, "Obekty" magazine, issue November-December 2009

http://artificial-mind.blogspot.com/2010/01/i-will-create-thinking-machine-that.html   


- Where should researchers' efforts be focused in order to achieve Artificial General Intelligence (AGI)?

First of all, research should be led by interdisciplinary scientists who see the big picture. You need to have a grasp of Cognitive Science, Neuroscience, Mathematics, Computer Science, Philosophy etc. Also, the creation of an AGI is not just a scientific task; it is an enormous engineering enterprise - from the beginning you should think of the global architecture and of universal low-level methods which would lead to accumulation of intelligence during the operation of the system. Neuroscience gives us some clues; the neocortex is "the star" in this field. For example, it's known that the neurons are arranged in a sort of unified modules - cortical columns. They are built of 6 layers of neurons, and different layers have some specific types of neurons. All the neurons in one column are tightly connected vertically, between layers, and process a piece of sensory information together, as a whole. All types of sensory information - visual, auditory, touch etc. - are processed by the interaction between unified modules, which are often called "the building blocks of intelligence".

- If you believe that it is possible for us to build an AGI [since you do believe], why haven't we managed to do it yet? What are the obstacles?

I believe that the biggest obstacle today is time. There are different forecasts - 10, 20, 50 years - to enhance and specify current theoretical models before they actually run, or before computers get fast and powerful enough. I am an optimist that we can get there in less than 10 years, at least to basic models, and I'm sure that once we understand how to make it, the available computing power will be enough. One of the big obstacles in the past was maybe the research direction - top-down instead of bottom-up - but this was inevitable due to the limited computing
(...)


#3

Tuesday, August 27, 2013

Issues on the AGIRI AGI email list and the AGI community in general - an Analysis

https://artificial-mind.blogspot.com/2013/08/issues-on-agiri-agi-email-list-and-agi.html
"- Multi-intra-inter-domain blindness/insufficiency [see other posts from Todor on the list] - people claim they are working on understanding "general" intelligence, but they clearly do not display traits of general/multi-inter-disciplinary interests and skills.


[ Note: Cognitive science, psychology, AI, NLP/Computational Linguistics, Mathematics, Robotics … – sorry, that's not general! General is being adept, fluent and talented in music, dance, visual arts (all), acting, story-telling, and all kinds of arts; in sociology, philosophy; sports … (…) ... + all of the typical ones + as many as possible other hard sciences and soft sciences and languages, and that is supposed to come from fluency in learning and mastering anything. That's something typical researchers definitely lack, which impedes their thinking about general intelligence. ](...) "

Discussion:
http://artificial-mind.blogspot.com/2014_08_05_archive.html



Tuesday, August 5, 2014


#4

The Super Science of Philosophy and Some Confusions About it - continuation of the discussion on the "Strong Artificial Intelligence" thread at G+


Tuesday, July 2, 2019

SuperCogAlg & CogAlg Frame Blobs Visualisations | SuperCogAlg - an Algorithm for Artificial General Intelligence

More recent visualisations from June: a completed primary bottom-up segmentation in the C++ version. Now working on deeper structures. I'm looking for partners and co-founders.

More recent pictures from the work of the prototype of an algorithm for artificial general intelligence - incremental unsupervised machine learning. For now it looks like computer vision: delineating and separating parts ("clustering" and segmentation). "SuperCogAlg"* is in C++, unlike the system it branched from - CogAlg, which is in Python.

*Super... There is another name too, but I'll announce it later - for now this one occurred to me because of "Super Contra", from which the frame below is taken.

Illustration of the scanning, segmenting and merging process - so-called blob formation, the first level of the 2D version of CogAlg.
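For readers new to the project, a rough sketch of the idea behind this first level (my simplification for illustration - CogAlg's actual frame_blobs is more involved): compare adjacent pixels, derive a per-pixel gradient, and flood-fill connected pixels whose gradient deviation has the same sign into "blobs".

    # Simplified sketch of frame-level blob formation: per-pixel gradient from
    # right/down pixel comparisons, then connected-component clustering of
    # same-sign gradient deviation. CogAlg's real frame_blobs is more involved.
    import numpy as np
    from collections import deque

    def form_blobs(image, ave=30):
        img = image.astype(int)
        dy = np.abs(np.diff(img, axis=0))[:, :-1]   # vertical differences
        dx = np.abs(np.diff(img, axis=1))[:-1, :]   # horizontal differences
        sign = (dy + dx - ave) > 0                  # gradient deviation sign
        blobs = np.full(sign.shape, -1)             # -1 = not yet assigned
        blob_id = 0
        for y in range(sign.shape[0]):
            for x in range(sign.shape[1]):
                if blobs[y, x] != -1:
                    continue
                queue = deque([(y, x)])             # flood-fill one blob
                blobs[y, x] = blob_id
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if (0 <= ny < sign.shape[0] and 0 <= nx < sign.shape[1]
                                and blobs[ny, nx] == -1
                                and sign[ny, nx] == sign[cy, cx]):
                            blobs[ny, nx] = blob_id
                            queue.append((ny, nx))
                blob_id += 1
        return blobs

Each integer label then marks one connected region of above- or below-threshold gradient - roughly what the colored regions in the visualisations correspond to.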





Monday, June 10, 2019

SuperCogAlg Blob Formation Interactive Inspection Videos


Clustering (segmentation) with my current implementation, up to "blobs". Still a work in progress, but the interactivity is growing.

It's not yet optimized; moreover, the rendering of the items is deliberately slow so that the relations between them are visible during drawing. The text part resembles PDF rendering in the past, on a slow PC with a poor graphics card.

In the video-game vein, the first example resembles Conway's "Game of Life" - even cells with nuclei appear. :)





One of Plovdiv's hills (photo: mine):




An image from "Super Contra" NES video game (by Konami)



Web text, big fonts (from the Vanilla CogAlg issues)


The next step in the project, besides debugging, is the so-called intra-blob formation: incremental segmentation within the range of each blob using different comparison criteria. I'll also be adding more interactivity etc.




More examples





Segmented frame from Konami's "Super Contra"

The beach in Samos, the island of Pythagoras. Original photo: MK.



Super CogAlg
...
Raw video links:

https://youtu.be/KPHYL_m8eLw
https://youtu.be/vgtfiUssTmk
https://youtu.be/D-BI65oFyYs

Saturday, June 1, 2019

Super CogAlg - Segments-Blobs

Call for partners and co-founders

A processed frame from the video game "Super Contra" and some segment-blob-like structures*, looking like Tetris, rendered by my work-in-progress implementation and environment for general incremental pattern discovery. Currently it follows some of the basic comparisons of the "Vanilla" version of CogAlg, but not exactly, and it will probably diverge and extend them, since it's not supposed to be just a port.

It is also supposed to be interactive and "playable".

"The spirit of the video games" - now in color. :)

The rocket picture by Vanilla CogAlg displays more advanced structures, though.


* segments and blobs - see CogAlg


Edit: 7-6-2019

Still a work in progress:




....



Friday, May 17, 2019

Call for Co-founders of an R&D startup in AGI & Code Synthesis

Last update: 21.6.2019

I am looking for partners for creating a Code Synthesis and Artificial General Intelligence R&D company. Short-term goals and domains include Computer Vision, Generative-Analytical Meta-Systems Research and Development, Code Synthesis, Automatic Software Development, and Grounded Interactive Sensori-Motor Machine Learning and Programming.

The AGI methods, as currently planned, would include realization of the general directions of the theory I proposed in my early works and in the world's first University course in AGI, which is conceptually related to CogAlg and, at a high level, also to the so-called "Memory-Prediction Framework" and "Hierarchical Temporal Memory", as well as to the so-called "Active Inference".

In short, a hierarchical sensori-motor predictive simulation of virtual universes, created and adjusted by interactive exploration of the world (input space).

I also plan to specify and apply in practice my as-yet-undisclosed observations and generalizations in the domains of Developmental Psychology and the theory of intelligence/knowledge/learning, assisted by interdisciplinary studies and multi-domain creative activities. This body of insights keeps growing on the go as well.

Another short-term target is the analysis, modeling, and accelerated, high-performance development of "CogAlg", a project to which I've been a contributor, because its core ideas of incremental predictive modeling resemble the ideas of my own theory. Currently the official version of this algorithm is progressing slowly, and it is developed in Python.

Updates:

Check newer blog publications

  • 24.5.2019: Lately I've been working on a high performance implementation and testing ideas from CogAlg.
  • 9.6.2019: Implementation of basic blob formation and display. Work in progress, but with interactivity.
  • 21.6.2019: Frame-Blob stage debugged, C++.

Contact me: http://research.twenkid.com


Sunday, May 12, 2019

Ghosts of Sprites of Classic Video Games in CogAlg Output

The algorithm is not practical yet, but in an output last month it started to capture the rough shape of a rocket in the sky and produced funny artifacts on the uniform background. Thanks to Khan for debugging the then-current version to the point of running.

The spirit of the sprites of early video game systems has reincarnated in the debug pictures, which display the coverage of the basic 2D patterns found by the algorithm, the so-called "blobs".

First image - the spirit of Pac-Man: the ghosts from "Pac-Man" and other monsters.

Below, the bushes and the trees of the classic early-'80s games "Bug Attack"/"Centipede" (similar), pirated for the Pravetz-82/M as "Нашествие" ("Invasion").

A picture of a cat's face, resembling Space Invaders or an "astronaut" (or an alien), versus a snail-eyed alien.





The algorithm turned out to be a pre-generator of levels for classic video games...

The first milestone is achieved!... ;)



Thursday, April 18, 2019

Collection of Best Papers in Computer Science Conferences since 1996

An interesting historical resource, and one for getting general insight into the trends - good job by the authors of the list. I notice that it's missing the AGI conference, though, if that counts as CS.

https://jeffhuang.com/best_paper_awards.html?fbclid=IwAR0Yc_0K689o8OhmfB0oUI5VKcWg6bg3MuG95yHgcyIc8TU5ItkMw4IzKj4

Thanks to the source: https://www.facebook.com/groups/MontrealAI/permalink/631419193986583/

Sunday, April 14, 2019

Neural Networks are Also Symbolic - Conceptual Confusions in ANN-Symbolic Terminology

Todor's remark on:
G.H. - On the Nature of Intelligence, at the Creative Destruction Lab
Published on 1.11.2018:
This talk is from the Creative Destruction Lab's fourth annual conference, "Machine Learning and the Market for Intelligence", hosted at the University of Toronto's Rotman School of Management on October 23, 2018.
https://youtu.be/MhvfhKnEIqM

Hinton's message that there should be processing at different time scales is correct, but already known and obvious. In my "school" it is about working at different resolutions of perception and causation-control; it is also present in Hawkins's work, in Friston's, and in Kazachenko's CogAlg.

I'd challenge the introduction in a funny way, though, while realizing that the context is in fact a silly pitching session, perhaps with a clear purpose - note the title "Market for Intelligence" and "Management". The message is "don't fund symbolic methods, fund mine".

The division between symbolic and "vector" seems sharp, but semantically it is defined on confused grounds and confused concepts.


By "symbolic" they apparently mean some kind of "simple" ("too obvious" or "fragile") logic programming like Prolog, or something similar with "short code" or "obvious" and "simple" relations - lacking hidden layers, shallow etc. - that "doesn't work".

By "non-symbolic" they address something with "a lot" of numerical parameters and calculations which are not obvious and also, likely, not understood - by the designers as well. An ad-hoc mess, for example, where the job of "understanding" is done by the data themselves and the algorithm.

That doesn't make such systems "non-symbolic", though; even in that messy state they are symbolic.

Let's investigate a supervised TensorFlow convolutional NN, or whichever high-performance library you like.

The user's TensorFlow code is developed in Python (a symbolic formal language).

The core is developed in C/C++ and CUDA for the GPU (also C/C++) - pretty symbolic, abstract and considered "hard".

Data is represented as numbers (sorry, they are also symbols; reality is turned into numbers by machines and by our mind).

The final classification layer consists of a set of artificial labels - symbols - which lack internal conceptual structure, except for the users - humans - who at that level operate with "symbols": abstract classes.

The mathematical representations of the NN are of course also "symbolic". Automatic differentiation, gradient descent, dot product - these are "symbolic"; they rely on a "symbolic" abstract language, namely mathematical symbols, to express them (at the developers' level of representation).

Any developed intelligence known to us eventually reaches symbolic hierarchical representations, not just a mess of numbers. Some kind of classification, ranking and digitized/discrete sampling is required in order to produce distinct "patterns" and to take definite decisions.

NNs are actually also not just a mess of numbers - there is a strict structure determining which filter is computed at which layer, what is multiplied by what, etc.
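As a simple demonstration (a minimal sketch using the Keras API, with arbitrary layer sizes), even the definition of a small convolutional network is itself explicitly symbolic code - named layers in a strict order with explicit shapes:

    # Minimal sketch: a CNN definition is itself a symbolic, structured program -
    # named layers, explicit order, explicit shapes. Layer sizes are arbitrary.
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),  # 10 abstract class labels
    ])
    model.summary()  # prints the strict layer-by-layer structure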

"Vectors" are mentioned here. Reality and the brain are not vectors. We can represent images and models of reality using our abstractions of vectors, matrices etc., and if we put them into appropriate machinery, it can then reproduce some image/appearance/behavior of reality.

However, the brain and neurons are not vectors.

Also, when talking about "symbols", let's first define them precisely.

Not only the simplest classical/Boolean logic of "IF A THEN B" is "symbolic"...

What is not "symbolic" in neural networks is the raw input, such as images; whereas the input to "classical" symbolic AI algorithms - such as the ones for logical inference in Prolog, or simple NLP using "manual" rules - is regarded as "symbolic": text*, not full images with dense spatio-temporal correlations etc.

This, however, doesn't imply that the input can't produce "symbolic" incremental intermediate patterns by clustering etc. (Where a "symbol" is, say, an address of an element among a class of possible higher-level patterns within a given sensory space - as in classification, e.g. recognition of simple geometric figures such as a small blob, line, angle, triangle, square, rectangle etc.)

[ * NOTE, 26.4.2023: Also, the above doesn't prevent such "symbolic" input from representing dense vectors and images - just by describing the format and the content of each pixel etc., with a representation of the structure and a proper interpretation, i.e. "serialization" and "deserialization", compression-decompression (re-representation). See an example in "Chairs, buildings, caricatures, ... /AGI Digest 2012" about different levels of generalization and detail in natural language and in other, more specific representations: https://artificial-mind.blogspot.com/2017/12/capsnet-capsules-and-CogAlg-3D-reconstruction.html ]


* Other, more suggestive distinctions:

- Sensori-motor grounded vs ungrounded cognition/processing/generalization
- Embodied cognition vs purely abstract, ungrounded cognition etc.
- Distributed representation vs fragile, highly localized dictionary representation

"Connectionism" is popular, but a "symbolic" system (a more interpretable one) can also be based on "connections", graph traversal, calculations over "layers" etc., and is supposed to be like that - different types of "deep learning".

The introduction of Boris Kazachenko's AGI Cognitive Algorithm emphasizes that the algorithm is "sub-statistical", a non-neuromorphic, comparison-first deep learning, and that it should start from raw sensory data - symbolic data should come next. However, this is again about the input data.

Its code forms hierarchical patterns having definite traversable and meaningful structures - patterns - with definite variables, which refer to concepts such as match, gradient, angle, difference, overlap, redundancy, predictive value, deviation from a template etc., applied to real input or to lower- or higher-level patterns. To me these are "symbols" as well; thus the algorithm is symbolic (as is any coded algorithm), while its input is sub-symbolic, as required for a sensori-motor grounded AGI algorithm.

See also XAI - explainable, interpretable AI, which aims at making NNs "more symbolic" and at bridging the two. The Swiss startup DeepCode explains its success by the combination of "non-symbolic" deep learning with programming-language technologies for analysis, such as parsing - i.e. clearly "symbolic" structures.



Saturday, April 13, 2019

DeepCode's Martin Vechev's recent interview

An interview from March 2019 on code synthesis, automatic code reviews, suggestions for improvement etc., and the product they already offer:

https://sifted.eu/articles/ai-is-coming-for-your-coding-job/

Demo of TensorFlow suggestions

As for Vechev's claims at the end about what is hard to automate (maybe in order not to offend the developers too much), and the explanations - in another related article, in French: "Computer programmers are approaching their end", https://www.lesechos.fr/tech-medias/intelligence-artificielle/informatique-les-codeurs-programment-ils-leur-fin-239772 - that sophisticated software such as word processors, 25 or 45 million lines of code etc., is not supposed to be coded automatically:

I challenge some of the claims about the difficulty of automation: it could be done with focus and clever meta-design and mapping to sensori-motor spaces, and it would work incrementally even without neural nets and brute-force-like search over "everything ever written".

Friday, March 15, 2019

Montreal.AI Academy Cheatsheet


Monday, February 25, 2019

Reinforcement Learning Study Materials

Selected resources (to be extended):

https://sites.google.com/view/deep-rl-bootcamp/lectures

The Promise of Hierarchical Reinforcement Learning

+ 15.3.2019

Saturday, February 23, 2019

"Учене с подкрепление/подсилване/утвърждение" - Language Notes | Terminology Remarks on a Reinforcement Learning Tutorial in Bulgarian

Wonderful practical tutorials for beginners by S. Penkov, D. Angelov and Y. Hristov at Dev.bg:

Part I

Part II

Some terminology notes from me, DZBE/DRUBE:

"Транзиционна функция" (a calque of "transition function") - better: "функция на преходите" (function of the transitions), "таблица на преходите" (transition table).

In the lectures of the "Universal Artificial Intelligence" course I used "подсилване" ("подкрепление" also works) and emphasized the origin of the term in behavioral psychology ("behaviorism").

There is also a connection with neuropsychology (the action of dopamine): rewarding behaviors are reinforced/strengthened (and "consolidated"), so they are subsequently performed "with higher probability". Those which bring no benefit are not reinforced.

Other literature, however, also uses "утвърждение".

In the linguistically close Russian it is also "подкрепление".

http://research.twenkid.com/agi/2010/Reinforcement_Learning_Anatomy_of_human_behaviour_22_4_2010.pdf

https://en.wikipedia.org/wiki/Reinforcement  (see other languages)

http://research.twenkid.com/agi/2010/Intelligence_by_Marcus_Hutter_Agent_14_5_2010.pdf

Part 2:

(...)

Имплицитен (implicit) - неявен;
Произволни (arbitrary) - случайни (random)

(For numbers, "random" is conventionally rendered as "случаен" in Bulgarian statistics; "произволен достъп" is used for memory access, but there it implies a "conscious will", a choice.)

Семплиране, семпъл (sampling, sample) - отчитане, отчет;

Терминално (terminal) - крайно, заключително;

Еквивалентно (equivalent) - равнозначно, равносилно.

I think there is an established Bulgarian term for a "biased" die, but I can't recall it.

A stylistic note: "Ако е," does not sound good (I presume it is a translation of "If it is such/so"); the condition can be repeated for maximum clarity, or "Ако е така" used, or the preceding sentence reworked.

Typos I noticed: "фунцкия" and "теоритично" (should be "теорЕтично").

And finally: it would be useful to have a table or a separate page with the terms, with the different translation variants where there is disagreement.

Illustrations of Monte Carlo and the gradual attainment of more accurate results - less and less noise and a clearer image (see also the sketch below):

https://www.shadertoy.com/results?query=+monte+carlo

https://www.shadertoy.com/results?query=global+illumination
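In the same spirit, a minimal sketch of mine of Monte Carlo estimation, where the noise shrinks as samples accumulate - here estimating π from random points in the unit square:

    # Minimal Monte Carlo sketch: estimate pi from the fraction of random
    # points falling inside the quarter unit circle; more samples, less noise.
    import random

    def estimate_pi(n):
        inside = sum(1 for _ in range(n)
                     if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * inside / n

    for n in (100, 10_000, 1_000_000):
        print(n, estimate_pi(n))  # estimates converge toward 3.14159...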



Wednesday, February 20, 2019

Memories of the Bulgarian High-Tech Industry from the Photo Archives

Photos of the production and tuning of computers, electronics and other machines and equipment, from the Bulgarian State Archives, from the 1950s to the 1980s.

http://www.archives.government.bg/bgphoto/004.08..pdf?fbclid=IwAR0WWNWCCaa41ZXVUSu8himrnqAC_fmIZ25REJ8OHP4DAlD_THpkNzTDWpw

Thanks to D. from the Compu computer museum.


Tuesday, February 12, 2019

On the Paper "Hierarchical Active Inference: A Theory of Motivated Control" and Conceptual Matches to Todor's Theory of Universe and Mind

Comment on:

An opinion article from "Trends in Cognitive Sciences", vol. 22, April 2018:
Hierarchical Active Inference: A Theory of Motivated Control

Giovanni Pezzulo, Francesco Rigoli, Karl J. Friston
https://doi.org/10.1016/j.tics.2018.01.009

It's an excellent paper; it would be insightful and accessible for beginners in AGI, for psychologists, for seeing the big picture (like "On Intelligence" etc.), and for readers who like divergent thinking and seeing mappings to real agent behavior and "macro" phenomena. Good questions, a huge set of references, mappings to brain areas and neuroscience research.

However, as far as architectural and philosophical ideas go, it sounds very similar to my own "Theory of Universe and Mind", published in articles mainly in the Bulgarian avant-garde e-zine "Sacred Computer" (Свещеният сметач) between late 2001 and early 2004. Its ideas were also presented/suggested in the world's first University course in AGI in 2010 and 2011.

Thanks to Eray Ozkural, who was familiar with Friston's work; we had an interesting discussion on Montreal.AI's FB page - see a recent post regarding his work on "Ultimate AI" and "AI Unification".

The term "active inference" sounds pretentious; it means using a model of the world in order to act, I assume as opposed to being simply reactive, as in simpler RL models. However, IMO that is supposed to be obvious - see below.

Theory of Universe and Mind

The terminology and content of that 2018 "opinion" paper strongly reminded me of my own teenage writings from the early 2000s: the term "control" (the cybernetics influence); the need for prediction/reward computation at different time scales/time steps; cascading increment of precision (resolution of control and resolution of perception); specific examples of "behavintrospective" analysis and specific selection of actions, etc.

"Theory of Universe and Mind", or "my theory", started with the hierarchy and "active inference" as obvious requirements (not only to me, I believe).

Mind is a hierarchical simulator of virtual universes; it makes predictions and exercises control ("causes" is a better term, though) at the lowest level. The hierarchical simulations are built from the patterns in the experience. Higher levels are built of sequences of selected lower-level patterns ("instructions", correlations) which are predictive.

At the lowest level all combinations of instructions are legal - the system shouldn't hang.

However, at the higher levels only selected ones work; not all combinations of low-level instructions are correct, which makes the search cheaper. That implies a reduction of the possible legal states, which, as far as I understand Friston's terms, is called "reduction of the free energy".

So the mind, the hierarchy of virtual universes, makes predictions about the future in the Universe - as perceived at the lowest-level virtual universe - and causes desired changes in the input, by aiming at maximizing the desired match.

Through time it aims at increasing the resolution of perception and causality-control, while also increasing the range of prediction and causation. That's what a human does in her own personal development, and also what humanity's "super mind" - the edge of science and technology - does.

My old writings were also explicit about the predictions at different time scales, precisions and domains: for a sophisticated mind there is no single "best" objective behavioral trajectory - there are many, because there are contradictory reward-domains (like not eating chocolate, because it may lead to dental cavities, or eating it, because it's a pleasure now). There is also a prediction horizon, uncertainty.

In the domain of Reinforcement Learning, there are two types of reward, called "cognitive" and "physical". Cognitive reward is about making correct predictions - that is "curiosity", exploration etc. - while physical reward is about having the desired input in the senses, implying a desired state, or "pleasure".

There must be accord between these domains and a sophisticated enough hierarchy and various time-space-precision ranges; otherwise the system falls into a vicious cycle and has an "addiction".

In the paper, my cognitive reward/drive is called the "cold domain" (choice probability, plans, action sequences, policies) and my "physical" one the "hot domain" (homeostasis, incentive values, rewards).
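A minimal sketch of that two-domain idea in RL terms (my illustration, not from the paper; the weighting is arbitrary): the total drive combines a "physical" external reward with a "cognitive" curiosity bonus proportional to prediction error.

    # Minimal sketch: combine "physical" (external) reward with a "cognitive"
    # (curiosity) bonus based on the agent's prediction error. Beta is arbitrary.
    def total_reward(physical_reward, predicted_obs, actual_obs, beta=0.1):
        prediction_error = abs(predicted_obs - actual_obs)
        cognitive_reward = beta * prediction_error  # curiosity: seek surprise
        return physical_reward + cognitive_reward

    # A surprising but low-payoff observation can still attract the agent:
    print(total_reward(physical_reward=0.0, predicted_obs=0.2, actual_obs=0.9))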

Etc.

...


The "Theory of Universe and Mind" works and the 2010 slides can be found in this blog, in the online archives of "Sacred Computer" (Свещеният сметач - the original texts in Bulgarian), and on twenkid.com: http://research.twenkid.com/agi/2010/


US Government Officially Declares AI as a Priority


Monday, February 11, 2019

DARPA's Common Sense Reasoning Program

Bottom-up and top-down reasoning should meet in the middle.

DARPA and others seem to have realized that, as well as the importance of Developmental Psychology.

https://www.darpa.mil/news-events/2018-10-11


Redirected from: https://hbr.org/2019/01/the-future-of-ai-will-be-about-less-data-not-more?utm_campaign=hbr&utm_medium=social&utm_source=twitter


See a table with AGI/AI startups, key people and directions:

https://artificial-mind.blogspot.com/2016/06/agi-start-ups-and-research-institutes.html


...


Interesting AI news digest: http://artificial-intelligence.startupdigest.com/issues/future-of-ai-china-s-facial-emotional-recognition-weaponization-of-ai-more-154170

Saturday, February 9, 2019

Origin of the Term AGI - Artificial General Intelligence

The fellow AGI researcher and developer Peter Voss told the story in his blog in Feb 2017:

What is AGI
https://medium.com/intuitionmachine/what-is-agi-99cdb671c88e?fbclid=IwAR1rm684qRx-oMZeyKWvioJDbhZT7YvVIp7HSZYpJAjMDsxN5lyow9T5XYM

"Just after the year 2000", Voss and a few other researchers realized that the hardware and scientific prerequisites existed to return to the original goal of AI. At the time they found about "a dozen" other researchers involved in that direction of research. Peter, Ben Goertzel and Shane Legg selected the term "A-General-I", which was made official in a 2007 book written together with the other authors: "Artificial General Intelligence".

https://www.springer.com/gb/book/9783540237334

(According to the info on Amazon, published Feb 2007)

I had also encountered Ben Goertzel's own account of that official embarking on AGI as an idea by himself and his friends in the early 2000s, and of the coining of the term around that time; I started to use the term under the influence of his circle. I haven't read the book, though.

I have a related terminology story, also from the early 2000s, which I may tell in another post.

Other interesting AGI-related articles from Peter's blog:

https://medium.com/intuitionmachine/agi-checklist-30297a4f5c1f

https://medium.com/intuitionmachine/cognitive-architectures-ea18127a4d1d


Friday, January 4, 2019

CogAlg News - Boris Appoints a Lead Developer

The "Cognitive Algorithm" (CogAlg) AGI project of Boris Kazachenko has found a new talent, who after a month of work is listed as "lead developer", according to the contributors list on GitHub.

Good luck, and looking forward to results!

I'm attached to this project in many ways, although I don't like some aspects of it.

AFAIK I was the first researcher-developer with credentials to acknowledge Boris' ideas back in time (since I found that they matched "my theory").

Then I tried to promote him; his theory was suggested and presented in the world's first University courses in AGI in 2010 and 2011*.

His prizes were first announced in the comments section of a publication of the AGI-2010 course program.

I've been the first and, so far, the only recurring prize winner, with the biggest total prize, for more than 8 years.


* In comparison, artificial neural networks were given 3 slides, in the lecture "Narrow AI and Why It Failed". :) DNNs have now achieved a lot, but they are still "narrow AI": without understanding and structure (not "XAI" - explainable/interpretable), with poor transfer learning, shallow etc. Other authors have already described ANNs' faults extensively, such as Gary Marcus - and, in a more concise and theory-related form, Boris himself.

My slides:

The 2010 and 2011 courses had only three slides specifically about ANNs, in one of the introductory lectures, about "Narrow AI" and why it failed.

See slides 34-35:

http://research.twenkid.com/agi/2010/Narrow_AI_Review_Why_Failed_MTR.pdf

Translated from Bulgarian, the slides say that ANNs:

* Pretend to be universal, but this hasn't worked so far
* Represent TLUs - threshold logic units (see the sketch after this list)
* Are graphs with no cycles, having vertices with different weights
* Have input, output and hidden layers
* Are trained with samples (e.g. photos), which are classified by altering the weights
* Then, when a new photo is fed in, it is attributed to a particular class
* Are computationally heavy, holistic, chaotic
* Can't work with events and time relations
* Have static input
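For reference, a minimal sketch of a TLU (my illustration; the weights and threshold are chosen to implement a logical AND):

    # Minimal threshold logic unit (TLU): a weighted sum of inputs compared
    # against a threshold. With weights [1, 1] and threshold 2 it computes AND.
    def tlu(inputs, weights, threshold):
        s = sum(x * w for x, w in zip(inputs, weights))
        return 1 if s >= threshold else 0

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, tlu([a, b], weights=[1, 1], threshold=2))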

Slide 36, Recurrent NNs, a more positive review (translated from Bulgarian):

Recurrent neural networks

● Hopfield networks. Associative memory.
● There are cycles in the graph - biologically more faithful.
● They begin to work with a notion of time and events.
● Long Short-Term Memory (LSTM) - Jurgen Schmidhuber - a representative of the Strong [AI] direction, attempts at universal AI.
● Applications: handwriting recognition; a robot learning by reinforcement in a partially observable environment; music composition and improvisation, and others.

