Friday, January 4, 2019

CogAlg News - Boris Declares a Lead Developer

The "Cognitive Algorithm" (CogAlg) AGI project of Boris Kazachenko has found a new talent, who after a month of work is listed as "lead developer" in the Contributors list on GitHub.

Good luck, and looking forward to results!

I'm attached to this project in many ways, although I don't like some aspects of it.

AFAIK I was the first researcher-developer with credentials to acknowledge Boris' ideas back then, since I found they matched "my theory".

Then I tried to promote him; his theory was suggested and presented in the world's first university courses in AGI, in 2010 and 2011*.

His prizes were first announced in the comments section of a publication of the AGI-2010 course program.

I've been the first and, so far, the only recurring prize winner, with the biggest total prize over more than 8 years.

* In comparison, artificial neural networks were given 3 slides, in the lecture "Narrow AI and Why It Failed". :) DNNs have since achieved a lot, but they are still "narrow AI": without understanding and structure (not "XAI" – explainable/interpretable), with poor transfer learning, shallow, etc. Other authors have already described ANNs' faults extensively – such as Gary Marcus, and, in a more concise and theory-related form, Boris himself.

My slides:

The 2010 and 2011 courses had only three slides specifically about ANNs, only in one of the introductory lectures, about "Narrow AI" and why it failed.

See slides 34–35:

Translated from Bulgarian it says that ANNs:

* They pretend to be universal, but so far it hasn't worked
* They represent TLUs — threshold logic units
* They are graphs with no cycles, having vertices with different weights
* Input, output and hidden layers
* They are trained with samples (e.g. photos), which are classified by altering the weights
* Then, when a new photo is fed in, it's attributed to a particular class
* Computationally heavy, holistic, chaotic
* They can't work with events and time relations
* Static input

Slide 36, on Recurrent NNs, gave a more positive review (translated from Bulgarian):

Recurrent neural networks

● Hopfield networks. Associative memory.
● There are cycles in the graph – biologically more faithful.
● They begin to work with a notion of time and events.
● Long Short-Term Memory (LSTM) – Jürgen Schmidhuber – a representative of the Strong AI direction; attempts at universal AI.
● Applications: handwriting recognition; a robot learning by reinforcement in a partially observable environment; music composition and improvisation, etc.

Tuesday, December 25, 2018

Developmental Approach to Machine Learning? - article by L.Smith and L.Slone - Agreed

Yes, agreed. A good read, suggesting developmental machine learning, spatio-temporally continuous input data etc.:

See the concept of "shape bias" from developmental psychology. That's related to the discussions in the "AGI Digest" on recognition of "buildings, chairs, caricatures"..., to other articles from this research blog regarding 3D reconstruction at varying resolution/detail as one of the crucial operations in vision, and to the general developmental direction, driven from one of the very first articles here, about the "Educational test".


Front. Psychol., 05 December 2017

A Developmental Approach to Machine Learning?

  • Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, United States


Tuesday, December 18, 2018

Human-centered AI by Stanford University - 8 years after Todor's Interdisciplinary Course in AGI in Plovdiv 2010


Introducing the initiative - Oct 19, 2018:

"But guiding the future of AI requires expertise far beyond engineering. In fact, the development of Human-Centered AI will draw on nearly every intellectual domain"

The world's first interdisciplinary course in AGI, at Plovdiv University, started in April 2010 and was proposed as an idea to my Alma Mater in December 2009.

Among the core messages of the course were the importance of interdisciplinarity/multidisciplinarity and the suggestion that research be led by such persons. I've been a proponent of that approach in my writings and discussions since my teenage years, being a "Renaissance person" myself.

See also the interview with me, published in December 2009 in the popular science magazine "Obekty"*, after I had given a lecture on the Principles of AGI to the general public at the Technical University, Sofia for the European "Researchers' Night" festival.

- Where should researchers' efforts be focused in order to achieve Artificial General Intelligence (AGI)?
First of all, research should be led by interdisciplinary scientists who see the big picture. You need to have a grasp of Cognitive Science, Neuroscience, Mathematics, Computer Science, Philosophy etc. Also, the creation of an AGI is not just a scientific task; it is an enormous engineering enterprise – from the beginning you should think of the global architecture and of universal low-level methods which would lead to an accumulation of intelligence during the operation of the system. Neuroscience gives us some clues; the neocortex is "the star" in this field. For example, it's known that the neurons are arranged in a sort of unified modules – cortical columns. They are built of 6 layers of neurons, and different layers have some specific types of neurons. All the neurons in one column are tightly connected vertically, between layers, and process a piece of sensory information together, as a whole. All types of sensory information – visual, auditory, touch etc. – are processed by the interaction between unified modules, which are often called "the building blocks of intelligence".
- If you believe that it's possible for us to build an AGI, why haven't we managed to do it yet? What are the obstacles?
I believe that the biggest obstacle today is time. There are different forecasts: 10, 20, 50 years to enhance and specify current theoretical models before they actually run, or before computers get fast and powerful enough. I am optimistic that we can get there in less than 10 years, at least to basic models, and I'm sure that once we understand how to make it, the available computing power will be enough. One of the big obstacles in the past was perhaps the research direction – top-down instead of bottom-up – but this was inevitable due to the limited computing power. For example, Natural Language Processing is about language modeling; language is a reduced end result of many different and complex cognitive processes. NLP starts from the reduced end result and aims to get back to the cognitive processes. However, the text, the output of language, does not contain all the information that the thought which created the text contains.
On the other hand, many Strong AI researchers now share the position that a "Seed AI" should be designed – a system that processes the most basic sensory inputs: vision, audition etc. A Seed AI is supposed to build and rebuild ever more complex internal representations, models of the world (actually, models of its perceptions, feelings and its own desires and needs). Eventually, these models should evolve into models of its own language, or models of humans' natural language. Another shared principle is that intelligence is the ability to predict future perceptions based on experience (you have probably heard of Bayesian Inference and Hidden Markov Models), and that the development of intelligence is an improvement in the scope and precision of its predictions.
Also, in order for the effect of evolution and self-improvement to be created, and to avoid an intractable combinatorial explosion, the predictions should be hierarchical. The predictions at an upper level are based on sequences of predictions (models) from the lower level. A similar structure is seen in living organisms: atoms, molecules, cellular organelles, cells, tissues, organs, systems, organism. Evolution and intelligence test which elements work (predict) correctly. Elements that turn out to work/predict are fixed: they are kept in the genotype/memory and are then used as building blocks of more complex models at a higher level of the hierarchy.

* The original interview was in Bulgarian

As the colleagues at Stanford enumerate: their university was the place where the term AI was coined by McCarthy, where computer vision was pioneered (the Cart mobile robot; Hans Moravec), where self-driving cars won the DARPA Grand Challenge in 2005, ImageNet, [Coursera], ... They are located in the heart of Silicon Valley and employ a zillion of the best students and researchers in CS, NLP, EE, AI, Neuroscience, whatever.

The Plovdiv course was created practically for free, with no specific funding – just the regular symbolic honorarium for the presentations.

Note also that the course was written and presented in Bulgarian.

See also:
Saturday, February 24, 2018
MIT creates a course in AGI - eight years after Todor Arnaudov at Plovdiv University


The paradox is not so surprising, though, since most people and the culture are made for narrow specialists, both in academia and everywhere else: the "division of labor" and other British and US wisdoms for higher profits in the rat race.

Thanks to prof. D.Mekerov, H.Krushkov and M.Manev, who respected my versatility, and especially to M.Manev, who was in charge of accepting the proposal of the course.

PS. There are other proponents of interdisciplinary and multidisciplinary research as well. I recall Gary Marcus among the popular AI journalists; and of course, as early as Norbert Wiener – if I'm not mistaken, he explicitly suggested it. (The German philosophers such as Kant and Schopenhauer as well...)

See my comment on a comment of Gary Marcus regarding Kurzweil's book:

Wednesday, January 23, 2013

Friday, December 7, 2018

Ultimate AI, Free Energy Principle and Predictive Coding vs Todor and CogAlg - Discussion in Montreal.AI forum and Artificial Mind


1. The interview - the key to true AI by the genius neuroscientist
2. CogAlg and Free Energy Principle
3. Discussion at Montreal.AI and the Ultimate AI
3.1. References to Bialek and Tishby early papers on prediction in RL
4. Ultimate Intelligence Part III ... - an informal review and a clash of schools of thought
4.1. Intro and acknowledgments; Sigma-Product-Log-Probability mathematical formula fetishism
4.2. Too general
4.3. Where's the hierarchy?
4.4. The sum of rewards and bounded rewards are obvious
4.5. The hierarchy as a deadlock breaker
5. Notes on specific citations
6. Conclusion

1. The interview - the key to true AI

A WIRED interview with Karl Friston has been getting popular recently on social media, claiming that the "genius neuroscientist might hold the key to true AI".

Initially it seemed interesting; maybe it was something new and revolutionary, since I've been quite ignorant, not knowing of him – or maybe I had forgotten, a long time ago?

Well, I took a look at how the topics of the "free energy principle" and "predictive coding" are defined in generic sources such as Wikipedia.

The conclusion: yes, I agree, it's the right direction, another related school of thought, but I don't agree that these ideas are as grandiose or original as presented in the press*; they were quite obvious to "my school of thought" since it started around 2001-2004, when I was a 17-19-year-old kid, a rebellious teenager who hadn't read or cited the contemporary literature.

Edit: the proper recent technical/neuroscientific papers seem to be at a different level, though – better than the general directions, less general and not lacking hierarchy. Such as this one, suggested by Eray after he read this post. I haven't studied it yet and will probably comment later on it and other related materials:

Deep temporal models and active inference

* Sure, everything in the consumer-world, celebrity-driven media is exaggerated – glamorous, "the genius", extraordinary, outstanding etc. – and that's not an exception.

Connecting general-intelligence principles with physics/Universe trends and biology is not that unheard of, either. I assume it was perhaps a surprise in the circles of overly specialized software developers, or of too-practical RL-ists/mathematicians/ML developers who didn't care about philosophy, biology, cybernetics etc.

2. CogAlg and Free Energy Principle

I asked the owner of the CogAlg project, Boris, about his opinion; he said that he had been hearing about that theory "for at least a decade", and in short he didn't seem impressed, because it was "nothing novel".

As for myself, I think the explicit emphasis on the idea of reducing the space of states for living organisms and intelligence is suggestive for people who face these ideas for the first time; however, it's somewhat obvious for hierarchical systems and even simple "machines", as the gears, the pistons etc. serve as "sub-spaces" which limit and guide the space of possible states.

As defined in the most ancient basics of "my theory", the higher-level patterns are constructed from selected sequences/sets of elements from the lower level which serve as "instructions" (discrete); therefore not all possible combinations are covered. Only a limited space is legal, which reduces the search/combinatorial space of possibilities at the higher level – it has "a reduced space". That's seen in the hierarchical structures in nature: atoms, molecules, cells, tissues etc.
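That reduction can be put in toy numbers (my own illustration with made-up figures, not taken from the theory's texts): if only a handful of lower-level sequences are "legal" patterns, the higher level's combination space collapses accordingly.

```python
from itertools import product

# Toy illustration with hypothetical numbers: 4 raw elements,
# lower-level "patterns" are sequences of length 3, and higher-level
# patterns are pairs of lower-level patterns.
elements = "abcd"
all_lower = [''.join(s) for s in product(elements, repeat=3)]  # 4^3 = 64 sequences
legal_lower = ["abc", "bcd", "aba", "dca", "cab"]              # only 5 survive selection

all_higher = len(all_lower) ** 2      # 4096 pairs if any sequence were allowed
legal_higher = len(legal_lower) ** 2  # 25 pairs built from legal patterns only

print(all_higher, legal_higher)  # 4096 25
```

Each additional level multiplies the restriction, which is the "reduced space" the free-energy phrasing points at.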

That "free energy principle" is yet more proof that the general directions towards AGI are getting established in different domains by different researchers.

3. Discussion at Montreal.AI and the Ultimate AI

My criticism that this was not novel, in a thread on the Montreal.AI Facebook page, ended up in a discussion with Eray Ozkural – a researcher from Friston's school of thought, a fellow AGI researcher, an author of publications at the AGI conference, and more knowledgeable of the Reinforcement Learning literature than me.

His term for AGI: "Ultimate AI".

It adds one more to the list of: A-General-I (AGI), Universal-AI (UAI), Strong-AI, General-AI, Human-level AI, Goedel Machine, ... "Versatile Limitless Explorer and Self-Improver" – VLESI (one of mine, if I remember correctly... :) ) etc.

See the original discussion by Todor and Eray:
A discussion about Free energy principle vs other theories about intelligence as prediction

He directed me to the III part of a series of his papers:

Ultimate Intelligence Part III: Measures of Intelligence, Perception and Intelligent Agents 

A nice title.

He also mentioned two pioneers of the prediction paradigm in RL of whom I wasn't aware, prior to the "early 2000s", the period I suggested: Bialek and Tishby.

Papers with promising titles that pop up: "Predictability, Complexity, and Learning" and "The information bottleneck method".

Submitted in 2000-2001, probably coming from the late 90s.


4. Ultimate Intelligence Part III ... - an informal review

4.1. Intro and acknowledgments

I reviewed Eray's paper from my perspective and share my comments – as a clash of my "school of thought" with his/theirs. Mine is perhaps more philosophical.

Overall, the paper is fine and I recommend it for studying if you like those "probability-log-maths" proofs, as in the papers of Hutter & Legg, Solomonoff's algorithmic probability and such. It also has good references, both to researchers and papers, which may give you a kickstart into the subject matter. That goes also for the list of other papers by this author; they have interesting titles, though I checked only a few myself. Good work!

However, I have general criticism of that "school", not personally of the author.

My first impression and general criticism is the mathematical formula fetishism, which is present in all papers of that kind. Maybe it's partly a LaTeX fetish, too, with those small fonts...

Summation, Product, Log, Probability, wave functions? (the psi at the end), thus "phases"; or just Greek and Latin letters for verbal/simple things: a – action, r – reward, ... A combination of them and...

There we are: everything seems solved or proved, it passes as academic, and it goes to conferences.

IMO it's tautological in general. The sense denoted by these letters is defined with natural-language words, and it proves itself by its definition. It claims that "this is intelligence", computes/minimizes something etc., thus "it's solved".

IMO, simple formulas, while required to represent the ideas "formally", are not much more insightful than defining them verbally – which is usually done anyway, above and below the formulas, since these are such general matters.

On the other hand, it's not practical, or is much more confusing, to define more specific or complex algorithms verbally. They are not obvious either, and require real computation with data to see where they go. In these cases it is required to write them in code.

The math formulas in these "classical" algorithmic-probability papers do not grow much in complexity and are kind of obvious in their expected outcomes, because they are stuck at one line or a few lines, and I can't see concepts growing on top of that.

"Where calculation begins, comprehension ceases" - Schopenhauer.

I understand that this is probably desired by their authors, but it's not very incrementally insightful to me.

4.2. Too General

That goes for the school of Algorithmic Probability, Hutter's model etc.

AGI should be general, but not too general, because otherwise it turns into generalities, or sinks into the deep sea of practical or theoretical uncomputability.

I'm an advocate of a human-like seed AGI which develops like a child, with milestones that it's expected to achieve developmentally.

4.3. Where is the hierarchy?

I didn't find any mention of the words "hierarchy" or "levels" in the paper, while that's crucial for building a real, scaling, generally intelligent system and RL agent, as also explained below. It is also at the heart of many prediction-based or cybernetic schools, such as:

* Ray Kurzweil (I haven't read his "How to build a brain" book, but Eray mentioned the Hierarchical HMM as his approach)
* Jeff Hawkins (hierarchical temporal memory)
* Boris Kazachenko
* The deep learning community
* Preceded by earlier cyberneticians, notably Valentin Turchin and his book "The Phenomenon of Science".
* Edit+: Neuroscience itself, of course; the early Russian and Soviet research – Pavlov etc.; Anokhin discusses feedback in 1935 ("sanctioning afferentation", later "reverse afferentation") – prior to Wiener and Cybernetics.

Is the hierarchy implied in the paper, or in the author's other ones, as the process of search/adjustment of the highest sum of expected rewards etc.?

However, how and when exactly are the levels spawned, separated and interfaced? How is the "reward" quantified for new levels and between levels? How is the feedback defined?

In fact that is one of the main questions of the real AGI, which would move it out of the "generalities" territory. Boris Kazachenko is trying to do it in his Cognitive Algorithm.

4.4. The sum of rewards and bounded rewards are obvious

  • I think that the sum of expected rewards over a selected period ahead as a measure of "intelligent" ("rational") behavior, and the need for bounded rewards, are not that special a thing to say.

Yes, they have to be declared, but actually that was obvious back in the early 2000s. It seems to have been known since ancient times, even from the economy and from human greed and the tendency towards more pleasure and less displeasure.

In the academic part of the behavioral/psychological domain, the need to take into account that each single reward is, or should be, bounded for generally intelligent human agents is known empirically from Simon's satisficing, and from the experiment with the rat that presses a lever to stimulate its "pleasure center".

It's also known in everyday life, by anyone, from observing behavior in cases of addiction – either mild cases, when one gets preoccupied with an activity, or the severe cases of drug addiction.

The scientific part in RL is that it writes explicit formulas and uses mathematical terms like "local minimum/maximum" or "endless cycle": when the reward of a particular action is too big, the agent gets locked in an endless cycle or a local maximum/minimum.

However, the phenomenon itself is obvious from everyday experience. I miss "grounding" and justification outside the abstract formulas. Just formulas and optimization of some magnitude is tautology. I have similar criticism of CogAlg as well, even though it claims to have its justifications.

The need for a bounded reward is obvious even theoretically, because a local-maximum reward, or a "cycle" of actions with maximum local reward, could catastrophically limit the range of input in which the agent searches, and thus the space of environments if it starts from scratch; therefore it would be "less general" and would slip into too much "exploitation over exploration".
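A minimal toy sketch of that lock-in (my own construction, with made-up rewards, not from the reviewed paper): a greedy two-action agent with one huge-reward action never leaves it, while bounding ("clipping") the reward, combined with a crude satiation term, restores exploration.

```python
# Toy illustration (hypothetical rewards): action 0 pays a huge fixed
# reward, action 1 a small one.
REWARDS = {0: 1000.0, 1: 1.0}

def run(steps, bound=None):
    visits = {0: 0, 1: 0}
    for _ in range(steps):
        def value(a):
            r = REWARDS[a]
            if bound is not None:
                r = min(r, bound)        # bounded ("clipped") reward
            # crude satiation/boredom: repetition devalues an action
            return r / (1 + visits[a])
        a = max(REWARDS, key=value)      # purely greedy choice
        visits[a] += 1
    return visits

print(run(100))             # unbounded: the agent locks into action 0 forever
print(run(100, bound=2.0))  # bounded: both actions keep being visited
```

With the unbounded reward, the satiation term never catches up with the 1000:1 ratio within 100 steps, which is the "endless cycle"/local maximum in miniature.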

The bounded reward can be justified empirically both by the cases of addiction, as mentioned above, where an out-of-control magnitude of a "reward" (behavior drive) makes the victim a slave to a too-narrow range of repetitive goals; and also by the relations in human society. In general, locally, extreme reward for one agent at the expense of the pain of many others is suppressed – except for the "elite", down to the masses.

Top-down relations and properties are different from bottom-up and same-level ones; they are not symmetrical – but that's a different topic.

Also, no one can be "endlessly satisfied"; there's a limit. For one person, one "element", the mouth needs only a little to stretch into a smile :) – it couldn't stretch 10 times more.

I know that the "maths guys" would laugh at these justifications, but presenting something obvious in simple formulas doesn't make it more meaningful, while inducing "formulas" from experience ("operator induction", or pattern discovery, predicting, modelling the input; conversion of representations between domains etc.) is what intelligence does.

"Money" or other resources could grow with less of a limit, or seemingly "endlessly", but they are abstract: the money is not mapped directly to the agent, but is part of more complex systems in which the specific human agents are constituent parts. Such systems could be called "The Corporation", "The Capitalism", "The Economy" etc., but beyond a limit of "happiness", more money does not increase the general reward for the individual agents. For healthy and functioning human beings, "happiness" is "computed" from many more parameters, not just one – especially not just "the amount of money owned".

Indeed, IMO just the sum of (any) rewards is not "intelligent" (abstract, universal) per se, unless it's simply self-defined that way – that this reward is intelligence – for some abstract reinforcement-learning agent.

Prediction serves as a general definition and I agree with it; however, it is also not enough if given alone, because, as with addiction, it can be cheated if defined too simply, or if the agent goes into a space with locally specific features allowing it to predict too easily.

That's why the measure needs to include a *widening* of the range and horizon of prediction, and the formation of a generalization hierarchy.

    It needs to be more complex.

    4.5. The Hierarchy as a deadlock breaker

In order to avoid the deadlocks of falling into a maximum/minimum hole, the hierarchical system should constantly project and act over varying time slices and with varying reward models. A unified model would be an aggregation of those switching sub-models. See articles and slides from my works.

That implies that for a complex, hierarchical agent, *there is no one absolute best reward path*, and a measurement of intelligence based just on the reward at the moment is right only within that window of comparison and the selected measures. It's not "objective"; it's "best" for that specific selected model of the world and of the rewards, with specific limitations, and compared to specific other trajectories. But in general, complex environments, with multiple possible goals, there is a multitude of actions that have similar "rewards", or ones which keep the agent "alive" at a macro level. They are all "correct" and "intelligent". Thus intelligence needs to be defined more specifically, with more parameters than just one "reward".

I don't like the quoted definition of Hutter's: "the wide range of environments". If I'm not mistaken, Ben Goertzel had something similar in the 2000s. IMO this is mundane, especially together with simple formulas.

Mapping it just to simple formulas of probabilities (various kinds ~ various pdfs...) as a solution doesn't make it clearer. All papers of that kind look like Bayes, or almost the same: + - logP, P(a,b) ... They are reminiscent of the basics of Shannon's Information Theory, which maybe was one of my own inspirations for realizing that prediction and compression of information are the "keys to true intelligence".

A general flaw of that school is that these formulas are too general, indiscriminate, too universal – or, as coined in this paper, "ultimate". That also implies they are inefficient to calculate.

    5. Notes on specific citations

    "An adaptive system that tends to minimize average surprise (entropy) will tend to survive longer."

That seems probably true, but only for a non-evolving system. Life as a whole "survives longer" by gradually adapting, trying new things and testing them for fitness – "evolving". At the moment of spawning new organisms in sexual reproduction, the exact combination of genes is unknown to the mother and father systems; it is a big "surprise".

    6. Conclusion

    This article is underdeveloped, but that's it for now.

    See also:

    * The course program of the world first University course in AGI (see the links in the blog)
    * Todor's Theory of Mind and Universe - his philosophy and principles, expressed in works from his teenager years
    * Materials from the University course in Bulgarian and English:
    * Анализ на смисъла на изречение и ...  March 2004, @ bgit
    * Translated in English:

    Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence

  • Part 1: Semantic analysis of a sentence. Reflections about the meaning of the meaning and the Artificial Intelligence

  • Part 2: Causes and reasons for human actions. Searching for causes. Whether higher or lower levels control. Control Units. Reinforcement learning.

  • Part 3: Motivation is dependent on local and specific stimuli, not general ones. Pleasure and displeasure as goal-state indicators. Reinforcement learning.

  • Part 4 : Intelligence: search for the biggest cumulative reward for a given period ahead, based on given model of the rewards. Reinforcement learning.

  • Many other articles from this research blog – search for them if you care: the AGI digest, the AGI email list, discussions on the Cognitive Algorithm site etc.
Sunday, November 25, 2018

    Star Symphony in Chepelare - Poetic CGI Music Video | Звездна симфония в Чепеларе

    An Unreal Star Storm watched from the forests of the Rhodope Mountains, Bulgaria

The premiere of my new music video – a poetic and artistic production with beautiful 2D visual effects, produced using computer vision for automatic compositing, mask generation, object removal etc. Edited and rendered using my in-house software "Twenkid FX Studio".

    Watch in darkness and on a big screen in 1920x1080!

    The Eagle from "Star Symphony in Chepelare". Camera operator: Todor Arnaudov

    Star Storm in Bulgaria - Action Version (2:45 min)

    Forest Dream - Perseids in Chepelare, Bulgaria

    Mini Version: 3:44 min with Voyage

    Short version (9:39 min, 4 musical pieces)

    See more info and the Long version from the Twenkid Studio's blog

Thanks for watching, and please share the videos if you like them!

    Since this is a "Research" blog, let me tell something technical.

    Some of the technologies used:

    Custom GUI NLE video-editor: C++, Win32 (yes), a custom Win32 wrapper, VFW (yea-a-h), Direct3D9 (ahm), HLSL

"Twenkid FX Studio" is an endless "prototype" in which I've invested too little time and which I should have redesigned a long time ago. Using Win32 sounds a bit insane, but when I started, my choice was driven by issues with the other "default" simple windows library, one with a bad reputation (MFC) – I used Visual Studio Express.

Sure, there were free GUI class libraries, but I preferred a smaller code base, not dependent on additional huge third-party libraries* such as wxWidgets (which I considered, and maybe I was wrong not to develop with it).

Qt had some issues with the license – I didn't want my system to be GPL, and their other license fee was unreasonable. Maybe I could have tried GTK, but it's also bloated with a lot of dependencies and verbose method calls, similarly to wxWidgets; so, apart from being multiplatform, I don't know whether it would be "simpler" to work with than Win32 or my own Win32 wrapper.

Furthermore, at the time when I started, FFmpeg and other Linux video libraries seemed undocumented/inaccessible, while I found a Windows one, although a bit outdated – VFW (Video for Windows). DirectShow was the more appropriate choice, but it seemed to me that it had a more complex interface and harder access to the raw bitmap, so I decided to use VFW and not delve too deep. Maybe I was wrong here again; I should have spent some more time on DirectShow.

(*Regarding huge code bases with too many fragmented modules: correspondingly, I don't like Boost with its 999999 tiny little files, most of which are not used.)

So I started with plain Win32 and also developed simple wrapper classes for some controls. I didn't care that it didn't look "beautiful" or "modern"; the buttons' look-and-feel was not important.

One reasonable design choice would have been to develop the GUI in C#, with an interface to the core processing through pipes, sockets or memory mapping (file mapping in Windows) – it's still an option. It would go with a "standardized" interface to the core editor so that it could be controlled from all kinds of external GUIs. I did something like that with my speech synthesizer "Toshko 2.070", but only for simple input, not a full API to its internals.
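A minimal sketch of that split (hypothetical protocol and command names – not Toshko's or Twenkid FX's actual API): the core listens on a local socket and answers line-based text commands, so any external GUI, C# or otherwise, could drive it.

```python
import socket
import threading
import time

# Hypothetical line-based command set; a real editor core would map
# these to internal operations (load clip, seek, render etc.).
def handle_command(cmd):
    parts = cmd.strip().split()
    if not parts:
        return "ERR empty"
    if parts[0] == "PING":
        return "PONG"
    if parts[0] == "LOAD" and len(parts) == 2:
        return "OK loaded " + parts[1]
    return "ERR unknown"

def serve_once(host="127.0.0.1", port=5055):
    # Core side: accept one GUI connection and answer its commands.
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()
    with conn:
        for line in conn.makefile():
            conn.sendall((handle_command(line) + "\n").encode())
    srv.close()

def send(cmd, host="127.0.0.1", port=5055):
    # GUI side (would be C# in the design above; Python here for brevity).
    with socket.create_connection((host, port)) as c:
        c.sendall((cmd + "\n").encode())
        return c.makefile().readline().strip()
```

The same handler could sit behind a named pipe or a memory-mapped file instead of a socket; only the transport changes.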

    Another possibility is Lua and/or automatic generation of the GUI from the specifications.


Historically, there were years with zero or only a few lines of code added to the project, and unfortunately the editor's GUI is still underdeveloped and ugly for casual users, which prevents it from being released for external usage beyond my "in-house" needs.

It's pretty fast for some tasks, though. For example, Twenkid FX loads the long version of the "Star Symphony", including alternative disabled video segments and overlays – 200 full-HD video files in total – for the first time in a fresh session in about 6-7 seconds (1.5-2 seconds in another run) from a laptop's mechanical HDD and an external HDD. Maybe that's the total seek time for so many files.
If the project is then closed and re-opened, it loads and is ready in just 2 seconds.

(It seems the slower test run was with highly loaded RAM and a page file slowing it down.)

The GUI has to be improved, though, and possibly rewritten in a multiplatform way to escape the Windows dependency. I've been thinking about it from time to time, but it requires enough focus to start.

Perhaps it would be based on FFmpeg, OpenCV and OpenGL, maybe using multiple programming languages (Python and C++, maybe others) with a custom GUI written on top of OpenGL and OpenCV, or some light GUI or gaming library – unless I change my mind and continue with Windows and DirectX 11/12.

    Also it's supposed to start utilizing some form of AI already, of course. Finally...


    Custom VFX system and effects for the movie:
    * Python, OpenCV with Python, Numpy; a little C++ and OpenCV in C++ for some retouching of already rendered video segments during the final stage of the editing.

    I started with Python because I had a prototype for simple reviewing and cutting, besides my main GUI NLE editor. Of course I was assuming that it would be easier to experiment with OpenCV, even though I knew it'd be slower, and initially I didn't know how far I'd go with the visual effects.

    I could have used C++ without a big hurdle, since I had experience and experiments with OpenCV in C++ as well, such as applying computer vision processing over pictures and video frames, traversing pixels and changing them during playback, etc. The heavier editing system, Twenkid FX (C++/Direct3D9), was also an option, especially since HLSL shaders were the only simple way to add new effects to it. However, it needs a general and sophisticated plug-in subsystem, which is still lacking.

    So I took the Python road this time.

    It got too slow for some operations; then some tricks with Numpy fancy indexing sped up one of the early effects about 60 times, from roughly 30 seconds per frame to about 0.5 seconds per frame. However, it still remained slow for complex effects, sometimes taking several seconds per frame.
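The kind of speed-up involved can be sketched with a toy effect (not the actual one from the film): brightening only the dark pixels of a frame. The explicit per-pixel Python loop is the slow path; the boolean-mask "fancy" indexing does the same work in one vectorized step:

```python
import numpy as np

# A tiny stand-in for a video frame (the real frames were full HD)
frame = np.array([[10, 200, 30],
                  [250, 5, 120]], dtype=np.uint8)

# Slow path: an explicit per-pixel Python loop
slow = frame.copy()
for y in range(slow.shape[0]):
    for x in range(slow.shape[1]):
        if slow[y, x] < 100:
            slow[y, x] += 50

# Fast path: Numpy boolean-mask ("fancy") indexing, one vectorized step
fast = frame.copy()
fast[fast < 100] += 50

print(np.array_equal(slow, fast))  # True
```

On a full HD frame the loop executes about two million Python-level iterations, while the mask version stays inside Numpy's compiled code, which is where speed-ups of that magnitude come from.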

    Ironically, the slow speed was sometimes "right", allowing real-time adjustments during rendering, virtual camera operation for Pan & Scan sequences, etc., without slowing down the playback or stepping manually frame-by-frame.

    Of course, I should have worked with C++ and GLSL and/or HLSL shaders from the start.

    CogAlg Prize

    Nevertheless, that performance-wise wrong design decision and the involvement with Python and Numpy directed me to check out the CogAlg* project, then eventually to contribute to the debugging of the stuck frame_dblobs function and to win a prize.

    Python is a bad choice for a "non-neuromorphic deep learning" for computer vision at the low level of the system, though, which is expected to require a zillion operations before starting to produce meaningful output. Besides, CogAlg's code is becoming progressively less readable.

    This is another story, though.

    * B.K. is the creator of the "Cognitive Algorithm" project, but I recall that I first used the shorthand "CogAlg" in an e-mail a few years ago, and he adopted it.

    Keywords: Computer Graphics, Computer Vision, Film, Filmmaking, Twenkid FX, Twenkid Studio, Analysis, Art, Programming, CogAlg, Cognitive Algorithm, Sport, Acting, About Tosh, AGI, Animation, Видеообработка, Изкуство, Компютърна графика, Кино, Визуални ефекти, Компютърно зрение, Познавателен алгоритм, КогАлг, УИР, Универсален изкуствен разум, Анимация, ...

    Sunday, October 28, 2018

    ДЗУ Стара Загора - 1985 и 1988 г. - DZU Stara Zagora Documentaries from 1985 and 1988 - Disk Drives and Robots

    Bulgarian below. Also two clips about the printed circuit board factory in Ruse from 1975 and 1989.

    Bulgarian documentaries from 1985 displaying the DZU Stara Zagora factory, producing hard disk drives, floppy drives, industrial robots and other high-tech electronics and electro-mechanical equipment.

    The video below is in English, the rest are in Bulgarian. 

    DZU was driven to bankruptcy after Bulgaria was "liberated" from socialism in 1989. Together with socialism, the country was also "liberated" from its highly developed, high-tech industry and its huge technological potential: it was the biggest producer of computer electronics in the Eastern Bloc and had plenty of highly qualified staff.

    In one of the 1985 films they mention that 95% of the production was for export; in the 1988 movie they enumerate 18 countries on 4 continents, including Eastern European ones and Finland, Austria, Greece, West Germany, Switzerland, Italy, France, Spain, the Netherlands, Nigeria, China, Brazil, India, Iran.

    Note that most of the staff in the clean-rooms and other facilities shown in the videos were women.


    Printed circuit boards in Ruse, 1989:

    Treasure films showing the production at DZU Stara Zagora in 1985 and 1988, one of the most powerful enterprises (economic associations) of Bulgaria's destroyed high-tech industry.

    By the way, I remember that in a lecture by a sociologist of technology, a lecturer at Plovdiv University, at the "Neshtoto" club in Plovdiv, he mentioned that the technology for producing CDs at DZU was their own, developed in 1987 (if I'm not mistaken). In a recent broadcast, another sociologist and lecturer at Plovdiv University recounted that after a visit to South Korea, when he asked whether they knew about Bulgaria, they replied that they knew everything about us and taught Bulgaria's decline as a negative example: what should not be done, and how a developed country can slide back into the swamp.

    The 1988 film shows part of the research laboratory, in which "distances between two atoms" are measured with electron microscopes, etc.

    In it, from the 18th minute, it is mentioned that the production is exported to 18 countries on 4 continents. There are trade relations, with representative offices and service centers.

    Besides the socialist countries of Comecon and Cuba, also in:

    FRG (West Germany)

    In the mid-to-late 1990s DZU was still operating; I have somewhere at home a brochure from the Plovdiv Fair with portable external hard drives, but it was already heading toward its sunset. I think they were producing compact discs.

    Then it was bought by the Hungarian company "Videoton", and, as a commenter under the videos writes, the production is now at a much lower technological level.

    Thanks to X for the links and to the "Pod lipite" channel for uploading the clips!

    Production of printed circuit boards in Ruse, 1975. Thanks to D.

    Thursday, October 4, 2018

    Numpy "fancy indexing" and scan_P_ debug - discussion on the development and debugging of CogAlg from September 2018

    Artificial General Intelligence (AGI) development, debugging, tracking patterns and bugs in Python, trees, tree-traversal, nesting, Pycharm, OpenCV, numpy,  prize for contributions, computer vision.

    Numpy "fancy indexing", conditional indices, iterators #8
    Twenkid opened this issue on Sep 2 · 55 comments
    Scan_P debug #10
    Twenkid opened this issue 12 days ago · 56 comments


    Saturday, September 8, 2018

    Montreal.AI - a great source of researcher-oriented news and papers in AI, ANN, Deep Learning

    That's the best I've found so far:

    The amount of publications and progress is amazing. My predictions and "complaints" about the exhaustion of art, and my claims that art and practically everything would soon be generated automatically, thus becoming less meaningful and valuable, are already coming true practically (not just theoretically), with a shocking speed of progress in DNNs and GANs.

    (Which in their specific implementation and brute force search may not be my choice, though.)

    Overall, "the end is near" and the belated ones would find only bones on the table...

    So, guys, including myself, work harder and get the best computers and colleagues you can!

    Tuesday, July 31, 2018

    Encyclopedia of Human-Computer Interaction - Affordances (?В М д с П)

    I'd recommend this valuable resource, compiled by a number of researchers in the field of HCI.

    It's good food for thought. The writing may sound too academic; however, even if one gets bored or tired by this style, the titles of the chapters themselves and their sequence, the topics, the pictures and tables, and the historical exploration of the subjects are suggestive on their own.

    Part of my way of digging into and reflecting on AGI includes a sort of HCI and "design" way of thinking, for various reasons: for example, because that's how intelligence is manifested and how it can be monitored and analyzed; it's also connected with the code-synthesis line, see a recent post.

    One of the concepts which immediately connects my AGI approach with HCI is "affordances", attributed to the American psychologist James Gibson (1977, 1979) and presented in the late chapter 44 of the book.

    In my path of thought and study, the concept emerged to me as "What can/could be done" (with a few specifications: 1) what the agent/actor/the will could do itself (and becomes aware of as possibilities for action); 2) what could be done anyway in this environment by any possible agent (at maximum resolution of perception and causation)), or in my own notation: (?В М д с П, ?В М д П).

    This is kind of "obvious", but when coining terms you can focus on them and make them explicit and distinct.

    ?В М д с П may be criticized for being too long: why not just "affordances" or "възможности" ("possibilities")?

    Because it explicitly suggests other important concepts as distinct elements that can be expressed in an executable way:

    1. Search
    2. Possibilities as a set of specific options
    3. Will, actor, agent
    4. An action, acting, change

    Other concepts which I can point to at a glance are the visualisations of structure and relations, the "bifocal display" (or multi-focal: different levels of abstraction, different range, different resolution ~ different hierarchical levels of representation or "views"), the way attention travels and how it's attracted and guided when operating an interface, and the Gestalt principles. (...)

    The body and the environment could be perceived as "interfaces" in switching contexts, different "applications" and the way they are approached may be generalized.

    Saturday, July 28, 2018

    Rising inequality and AI - a comment on something funny from notes from the AAAI 2018 conference

    Seen in Montreal AI.

    " His Take: When we reach the place where robots do takeover, what do we do? The concern: “those who own the robots rule the world”.

    Traditionalist Response: You see AI robots in the headlines, but not in the productivity or job statistics! Same with computers. Productivity growth in the 2010s is lower than in the last five decades. E/Pop is high, unemployment is low.

    Rising inequality began before AI as a result of measured factors: fall of unions, trade immigration. Dave: Wasn't clear if it was "fall of trade, fall of immigration", or "trade, immigration, and fall of unions" (my guess is the former)

    But: It’s actually really hard to measure productivity. The nature of productivity changes. If workers are now working more hours and taking longer commutes, it’s different from walking into a building getting clocked and walking out.

    (Bold: T.A.)
    Q: Why should this time be different?
    • Past fears that automation destroys jobs fizzled. FDR blamed the Great Depression joblessness on failure to "employ the surplus of our labor which the efficiency of our industrial processes has created". US Commission on Automation, Rifkin's End of Work (1995).

    "Those who own the robots..." - has some options in the cited paper.

    B) What a way to explain away the unwillingness to distribute the share and the consumption, or to *create* jobs by changing the rules and rethinking the way profit is distributed.

    (Back in the Great Depression era there was a famous US politician, a competitor of Roosevelt for the presidential election (in 1932?), who had "wrong" ideas and rising popularity and was murdered by the "forces of nature", as one could guess.)

    A) That sounds like an explanation for children to me. The neoliberal dogma of "trade" - "trade" blah-blah, "free trade", "the market" deciding everything. "Trade" in the abstract makes no sense, though.

    What about the neoconservative-neoliberal political movement of the 70s, the oil crisis, the "crisis of democracy" of the late 60s-70s, Margaret Thatcher in the UK and Reagan in the USA?

    The fall (destruction) of the USSR and the Eastern Bloc removed one of the pressures on the USA/Western European systems with regard to laborers.

    What about the transfer of production lines to East and Southeast Asia, and to a much lesser extent to Central and Eastern Europe, which was "too expensive" for the investors?

    As for immigration: it's supported by the opening of the borders to more and cheaper workers (for higher profit), workers who are willing to work for lower wages. The countries' governments are supposed to decide and to help or prevent this; it's not a "natural disaster", as it's suggested to children.

    In "democratic" countries those governments are also supposed to ask their citizens. I doubt Germans would have agreed with all the immigration they have received from the 40s up to the latest decisions of their long-lasting "democratic dictatress".

    The immigration from Eastern Europe to Western Europe and the USA came: 1) because of the opened borders (in favor of the business in those countries), and 2) largely because of the quickly destroyed industries after the "liberation from socialism" - see, for example, what Bulgaria's economy turned into, from the biggest producer of computers and electronics in the Eastern Bloc and a huge producer of agricultural goods.

    (Note that socialism is known as "communism" in Western Europe and the USA (the "imperialist-capitalist countries"), although the rule was never officially called "communism" by the "communists" themselves, except in the names of the parties.)

    Besides the destroyed industry, some of these countries' national and social structures were smashed by the neoliberal "free" globalized media and by political agents/non-governmental organizations applying "ideological diversion".

    That has been erasing the national awareness and sense of belonging of young people; they feel less attached to their fatherland.