Monday, June 10, 2019

SuperCogAlg Blob Formation Interactive Inspection Videos


Clustering (segmentation) with my current implementation up to "blobs". Still a work in progress, but interactivity is growing.

It's not yet optimized, and the rendering of the items is deliberately slow so that the relations between them are visible during drawing. The text part resembles PDF rendering in the past on a slow PC with a poor graphics card.
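For the curious, the kind of clustering described here can be sketched minimally in Python. This is my simplified illustration, with an assumed threshold and 4-connectivity, not the actual SuperCogAlg code: compare adjacent pixels, threshold a crude gradient, and flood-fill connected regions of the same gradient sign into "blobs".

```python
import numpy as np

def form_blobs(image, threshold=0):
    """Cluster pixels into 'blobs': 4-connected regions whose crude
    gradient magnitude falls on the same side of the threshold."""
    img = image.astype(float)
    # Compare adjacent pixels: absolute vertical and horizontal differences.
    dy = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    dx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    g = dy + dx                              # crude gradient magnitude
    sign = g > threshold                     # per-pixel sign of gradient deviation
    blobs = np.zeros(img.shape, dtype=int)   # 0 = not yet labeled
    next_id = 0
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if blobs[y, x]:
                continue
            next_id += 1
            blobs[y, x] = next_id
            stack = [(y, x)]                 # iterative flood fill over same-sign pixels
            while stack:
                cy, cx = stack.pop()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                            and not blobs[ny, nx]
                            and sign[ny, nx] == sign[cy, cx]):
                        blobs[ny, nx] = next_id
                        stack.append((ny, nx))
    return blobs
```

Rendering each blob id in a distinct color, step by step, produces pictures in the spirit of the videos above.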

In the video-game line, the first example resembles Conway's "Game of Life"; even cells with nuclei appear. :)





One of Plovdiv's hills (photo: mine)




An image from the "Super Contra" NES video game (by Konami)



Web text, big fonts (from the Vanilla CogAlg issues)


The next step in the project, besides debugging, is the so-called intra-blob formation: incremental segmentation within the range of each blob, using different comparison criteria. I'll also be adding more interactivity, etc.




More examples





Segmented frame from Konami's "Super Contra"

The beach on Samos, the island of Pythagoras. Original photo: MK.



Super CogAlg
...
Raw video links:

https://youtu.be/KPHYL_m8eLw
https://youtu.be/vgtfiUssTmk
https://youtu.be/D-BI65oFyYs

Saturday, June 1, 2019

Super CogAlg - Segments-Blobs

Call for partners and co-founders

A processed frame from the video game "Super Contra" and some segment-blob-like structures*, resembling Tetris pieces, rendered by my work-in-progress implementation and environment for general incremental pattern discovery. It currently follows some of the basic comparisons of the "Vanilla" version of CogAlg, but not exactly, and it will probably diverge and extend, since it is not meant to be just a port.

It is also meant to be interactive and "playable".

"The spirit of the video games" - now in color. :)

The rocket picture by Vanilla CogAlg displays more advanced structures, though.


* Segments and blobs: see CogAlg.


Edit: 7-6-2019

Still a work in progress:




....


Friday, May 17, 2019

Call for Co-founders of an R&D startup in AGI & Code Synthesis

Last update: 21.6.2019

I am looking for partners to create a Code Synthesis and Artificial General Intelligence R&D company. Short-term goals and domains include Computer Vision, Generative-Analytical Meta-Systems Research and Development, Code Synthesis, Automatic Software Development, and Grounded Interactive Sensori-Motor Machine Learning and Programming.

The AGI methods, as currently planned, would include a realization of the general directions of the theory I proposed in my early works and in the world's first university course in AGI. Conceptually it is related to CogAlg, and at a high level also to the so-called "Memory-Prediction Framework" and "Hierarchical Temporal Memory", as well as to "Active Inference".

In short: a hierarchical sensori-motor predictive simulation of virtual universes, created and adjusted through interactive exploration of the world (the input space).

I also plan to specify and apply in practice my as-yet-undisclosed observations and generalizations in the domains of Developmental Psychology and the theory of intelligence/knowledge/learning, assisted by interdisciplinary studies and multi-domain creative activities. This body of insights keeps growing as well.

Another short-term target is the analysis, modeling, and accelerated, high-performance development of CogAlg, a project to which I have been a contributor, because its core ideas of incremental predictive modeling resemble the ideas of my own theory. The official version of the algorithm is currently progressing slowly and is developed in Python.

Updates:

Check newer blog publications

  • 24.5.2019: Lately I've been working on a high-performance implementation and testing ideas from CogAlg.
  • 9.6.2019: Implementation of basic blob formation and display. Work in progress, but interactive.
  • 21.6.2019: Frame-blob stage debugged, in C++.

    Contact me: http://research.twenkid.com

    Sunday, May 12, 2019

    Ghosts of Sprites of Classic Video Games in CogAlg Output

    The algorithm is not practical yet, but in an output last month it started to capture the rough shape of a rocket in the sky and produced funny artifacts on the even background. Thanks to Khan for debugging the then-current version to the point of running.

    The spirit of the sprites of early video game systems has been reincarnated in the debug pictures, which display the coverage of the basic 2D patterns found by the algorithm, the so-called "blobs".

    First image: the spirit of Pac-Man, the ghosts from "Pac-Man", and other monsters.





    Below, the bushes and the trees of the classic early-80s games "Bug Attack"/"Centipede" (similar), pirated for the Pravetz-82/M as "Нашествие" ("Invasion").





    A picture of a cat's face, resembling Space Invaders, an "astronaut" (or an alien), versus an alien with snail's eyes.





    The algorithm turned out to be a pre-generator of levels for classic video games...

    The first milestone is achieved!... ;)


    Thursday, April 18, 2019

    Collection of Best Papers in Computer Science Conferences since 1996

    An interesting historical resource, and one for getting general insights into the trends; good job by the authors of the list. I notice that it's missing the AGI conference, though, if that counts as CS.

    https://jeffhuang.com/best_paper_awards.html

    Thanks to the source: https://www.facebook.com/groups/MontrealAI/permalink/631419193986583/

    Sunday, April 14, 2019

    Neural Networks are Also Symbolic - Conceptual Confusions in ANN-Symbolic Terminology

    Todor's remark on:

    G.H. - On the Nature of Intelligence, at the Creative Destruction Lab
    Published on 1.11.2018
    This talk is from the Creative Destruction Lab's fourth annual conference, "Machine Learning and the Market for Intelligence", hosted at the University of Toronto's Rotman School of Management on October 23, 2018.
    https://youtu.be/MhvfhKnEIqM

    Hinton's message that there should be processing at different time scales is correct, but already known and obvious. In my "school" it is about working at different resolutions of perception and causation-control; it is also present in Hawkins's work, in Friston's, and in Kazachenko's CogAlg.

    I'd challenge the introduction in a funny way, though, while realizing that the context is in fact a silly pitching session, perhaps with a clear purpose: note the title "Market for Intelligence" and "Management". The message is "don't fund symbolic methods, fund mine".

    The division between "symbolic" and "vector" seems sharp, but semantically it is defined on confused grounds and confused concepts.


    By "symbolic" they apparently mean some kind of "simple" ("too obvious" or "fragile") logic programming, like Prolog, or something with "short code" and "obvious", "simple" relations, lacking hidden layers, shallow, etc., that "doesn't work".

    By "non-symbolic" they address something with "a lot" of numerical parameters and calculations that are not obvious and, likely, not understood, even by the designers: an ad-hoc mess, for example, where the job of "understanding" is done by the data themselves and the algorithm.

    That doesn't make them "not symbolic", though; even in that messy state, they are symbolic.

    Let's investigate a supervised TensorFlow convolutional NN, or any other high-performance library.

    The user's TensorFlow code is developed in Python (a symbolic, formal language).

    The core is developed in C/C++ and CUDA (C/C++ for the GPU): pretty symbolic, abstract, and considered "hard".

    Data is represented as numbers (sorry, they are also symbols; reality is turned into numbers by machines and by our minds).

    The final classification layer consists of a set of artificial labels, i.e. symbols, which lack internal conceptual structure, except for the users, the humans, who at that level operate with "symbols": abstract classes.

    The mathematical representations of the NN are of course also "symbolic". Automatic differentiation, gradient descent, the dot product: these rely on an abstract "symbolic" language, namely mathematical symbols, to express them (at the level of representation of the developers).

    Any developed intelligence known to us eventually reaches symbolic hierarchical representations, not just a mess of numbers. Some kind of classification, ranking, and digitized/discrete sampling is required to produce distinct "patterns" and to take definite decisions.

    The NNs are actually also not just a mess of numbers: there is a strict structure to which filter is computed at which layer, what is multiplied by what, etc.

    "Vectors" are mentioned here. Reality and the brain are not vectors. We can represent images and models of reality using our abstractions of vectors, matrices, etc., and if we put them into appropriate machinery, it can reproduce some image/appearance/behavior of reality.

    However, the brain and neurons are not vectors.

    Also, when talking about "symbols", let's first define them precisely.

    It is not only the simplest classical/Boolean logic of "IF A THEN B" that is "symbolic"...

    What is not "symbolic" in neural networks is the raw input, such as images, while the input to "classical" symbolic AI algorithms, such as logical inference in Prolog or simple NLP with "manual" rules, is "symbolic": text, not full images with dense spatio-temporal correlations.

    This, however, doesn't imply that the input can't produce "symbolic" incremental intermediate patterns by clustering etc., where a "symbol" is, say, an address of an element among a class of possible higher-level patterns within a given sensory space, as in classification: e.g., recognition of simple geometric figures such as a small blob, a line, an angle, a triangle, a square, a rectangle, etc.
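As a toy illustration of such "symbolic" addresses (entirely my own sketch; the categories and thresholds are invented), a cluster of pixel coordinates can be mapped to a label within a small class of geometric patterns:

```python
def classify_pattern(cells):
    """Map a cluster of (y, x) pixel coordinates to a crude symbolic label,
    an 'address' in a small class of geometric patterns (illustrative only)."""
    ys = [y for y, _ in cells]
    xs = [x for _, x in cells]
    h = max(ys) - min(ys) + 1
    w = max(xs) - min(xs) + 1
    fill = len(cells) / (h * w)      # how much of the bounding box is covered
    if h == 1 or w == 1:
        return "line"
    if fill > 0.95:
        return "rectangle" if h != w else "square"
    return "blob"                    # anything irregular
```

The labels are then discrete symbols that higher levels can reason over, even though they were produced by clustering raw pixels.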


    * Other, more suggestive distinctions:

    - Sensori-motor grounded vs. ungrounded cognition/processing/generalization.
    - Embodied cognition vs. purely abstract, ungrounded cognition.
    - Distributed representation vs. fragile, highly localized dictionary representation.

    "Connectionism" is popular, but a "symbolic" system (a more interpretable one) can also be based on "connections", traversing graphs, calculations over "layers", etc., and is supposed to be like that: different types of "deep learning".

    The introduction to Boris Kazachenko's AGI Cognitive Algorithm emphasizes that the algorithm is "sub-statistical", a non-neuromorphic deep learning, comparison-first, and that it should start from raw sensory data; the symbolic data should come next. However, this is again about the input data.

    Its code forms hierarchical patterns with definite, traversable, and meaningful structures, with definite variables referring to concepts such as match, gradient, angle, difference, overlap, redundancy, predictive value, deviation from template, etc., applied to raw input or to lower- or higher-level patterns. To me these are "symbols" as well; thus the algorithm is symbolic (as any coded algorithm is), while its input is sub-symbolic, as required for a sensori-motor grounded AGI algorithm.
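A toy paraphrase of what "comparison-first" processing over such variables might look like (my own sketch; the min-as-match convention and the variable names are assumptions, not CogAlg's actual code):

```python
def compare_line(pixels):
    """Compare each pixel with the previous one, accumulating
    match (the smaller of the pair, i.e. the fully shared magnitude)
    and signed difference into summary pattern variables."""
    derivatives = []
    total_match, total_diff = 0, 0
    for prev, cur in zip(pixels, pixels[1:]):
        m = min(prev, cur)   # match: the shared magnitude of the pair
        d = cur - prev       # signed difference
        total_match += m
        total_diff += d
        derivatives.append({"match": m, "diff": d})
    return {"M": total_match, "D": total_diff, "derivatives": derivatives}

summary = compare_line([10, 12, 11, 15])
# M = 10 + 11 + 11 = 32, D = 2 - 1 + 4 = 5
```

The point is that the output is a traversable structure with named variables, i.e. "symbols", even though the input is raw numbers.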

    See also XAI: explainable, interpretable AI, which aims to make NNs "more symbolic" and to bridge the two. The Swiss startup DeepCode explains its success by the combination of "non-symbolic" deep learning with programming-language technologies for analysis, such as parsing, i.e. clearly "symbolic" structures.


    Saturday, April 13, 2019

    DeepCode's Martin Vechev's recent interview

    An interview from March 2019 on code synthesis, automatic code reviews, suggestions for improvement, etc., and the product they already offer:

    https://sifted.eu/articles/ai-is-coming-for-your-coding-job/

    Demo of the TensorFlow suggestions

    As for Vechev's claims at the end about what is hard to automate (maybe in order not to offend the developers too much), and the explanations that sophisticated software such as word processors, with 25 or 45 million lines of code, is not supposed to be coded automatically, see another related article in French, "Computer programmers are approaching their end": https://www.lesechos.fr/tech-medias/intelligence-artificielle/informatique-les-codeurs-programment-ils-leur-fin-239772

    I challenge some of the claims about the difficulty of automation: it could be done with focus and clever meta-design and mapping to sensori-motor spaces, and it would work incrementally even without neural nets and brute-force-like search over "everything ever written".

    Monday, February 25, 2019

    Reinforcement Learning Study Materials

    Selected resources (to be extended)

    https://sites.google.com/view/deep-rl-bootcamp/lectures

    The Promise of Hierarchical Reinforcement Learning

    + 15.3.2019

    Saturday, February 23, 2019

    Учене с подкрепление/подсилване/утвърждение - езикови бележки | Terminology Remarks on Reinforcement Learning Tutorial in Bulgarian

    Wonderful practical tutorials for beginners by S. Penkov, D. Angelov, and Y. Hristov at Dev.bg:

    Част I

    Част II

    Some terminology notes from me, ДЗБЕ/ДРУБЕ:

    "Транзиционна функция" ("transition function"): better "функция на преходите" (function of the transitions) or "таблица на преходите" (transition table).

    In the lectures of the course "Универсален изкуствен разум" (Universal Artificial Mind) I used "подсилване" ("подкрепление" is also fine) and emphasized the term's origin in behavioral psychology ("behaviorism").

    There is also a connection to neuropsychology (the action of dopamine): rewarding behaviors are reinforced (and "consolidated") and are therefore subsequently executed "with higher probability". Those that bring no benefit are not reinforced.

    Other literature, however, also uses "утвърждение" (affirmation).

    In the linguistically close Russian it is also "подкрепление".

    http://research.twenkid.com/agi/2010/Reinforcement_Learning_Anatomy_of_human_behaviour_22_4_2010.pdf

    https://en.wikipedia.org/wiki/Reinforcement  (виж други езици)

    http://research.twenkid.com/agi/2010/Intelligence_by_Marcus_Hutter_Agent_14_5_2010.pdf

    Part 2:

    (...)

    "Имплицитен" (implicit): "неявен";
    "Произволни": "случайни" (random).

    (For numbers, "случайни" is the established Bulgarian statistical term for "random"; "произволен достъп" is used for random-access memory, but there it connotes a "conscious will", a choice.)

    "Семплиране, семпъл" (sampling, sample): "отчитане, отчет";

    "Терминално" (terminal): "крайно, заключително";

    "Еквивалентно" (equivalent): "равнозначно, равносилно".

    I think there is a term for a "biased" die, but I can't recall it.

    A stylistic note: "Ако е," ("If it is,") doesn't sound good (presumably a translation of "If it is such/so"); the condition can be repeated for maximum clarity, or use "Ако е така" ("If so"), or rework the preceding sentence.

    Corrections I noticed: "фунцкия" and "теоритично" (should be "функция" and "теорЕтично").

    Finally: it would be useful to have a table or a separate page with the terms, with the different translation variants where there are disagreements.

    Illustrations of Monte Carlo and the gradual convergence to more accurate results: less and less noise and a clearer image:

    https://www.shadertoy.com/results?query=+monte+carlo

    https://www.shadertoy.com/results?query=global+illumination
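The same convergence can be shown numerically; a minimal sketch (my own example, not tied to the linked shaders) estimating π by random sampling, where the noise shrinks roughly as 1/sqrt(n):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform random points
    in the unit square that land inside the quarter circle, times 4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

# The estimate gets less "noisy" as samples accumulate:
for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))
```

This is the same effect as in the shaders: each frame adds samples, so the rendered image gradually loses its noise.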


    Wednesday, February 20, 2019

    Спомени за българската високотехнологична индустрия от фото архивите | Memories from the Bulgarian High-Tech Industry

    Photos of the production and tuning of computers, electronics, and other machines and equipment, from the Bulgarian State Archives, from the 1950s to the 1980s.

    http://www.archives.government.bg/bgphoto/004.08..pdf

    Thanks to D. from the Compu computer museum.