Thursday, March 31, 2011


IM-CLeVER and iCub - EU Funded Projects - Hierarchical Reinforcement Learning, Abstraction of Sensory Inputs ...

Have you seen this cute little baby robot?

I had heard of it, but checking the aims and basic assumptions of the project at the source won me over - they're heading in the right direction and are aware of the issues, with Juergen Schmidhuber being one of the leaders.

Another signal that I have to discipline my ass, as well... :)

IM-CLeVeR is supported by the European Commission
under the ‘FP7 Cognitive Systems, Interaction,
and Robotics Initiative’, grant no. 231722.

Start: 01/01/2009 (start of scientific work: 01/05/2009)
End: 30/04/2013
Total duration: 52 months

Total EU Funding: 5.899.884 euros
Total Budget (EU Funding + Cofunding): 7.726.783 euros

Another use of iCub:


Tuesday, March 29, 2011


Mathematical Theory of Intelligence - Second Course in AGI/UAI at Plovdiv University by Todor Arnaudov (Course Program in English)

Mathematical Theory of Intelligence (AGI/Universal AI)
by Todor Arnaudov

The course was taught to undergraduate students between 1/2011 and 3/2011 at Plovdiv University “Paisii Hilendarski”, Bulgaria, in the Faculty of Mathematics and Informatics. (Originally in Bulgarian, with a lot of additional suggested materials in English.) This was the second AGI/UAI course after last year's "Artificial General Intelligence/Universal Artificial Intelligence" (originally: "Универсален изкуствен разум"); this one put a stronger emphasis on the most advanced lectures - theories of intelligence and their common principles, meta-evolution, and Boris Kazachenko's works, now reviewed more thoroughly in class (as far as my understanding and the students' interest went).

You may find a lot of materials in English and links in this blog. There's a somewhat sorted list by topic, made for the students, but it might be partially incomplete, because it's not updated immediately with the blog posts. Students are advised to check the blog for topics from the first course which were omitted from the formal program of the second one, such as other AGI researchers' work and directions - there was too little time available in class...
I guess the next, more updated course is supposed to go even deeper into formal models, maybe starting with some really basic AGI agents.

I'm preparing to publish slides in English (you can find the Bulgarian ones at the top of the course homepage) - especially slides and translations of the old works from my "[Teenage] Theory of Mind/Intelligence and Universe", written between 2001 and 2004, which years later was how I recognized the "school of thought" I belonged to (see the annotation below).

This course could be taught in English as well, if there's an appropriate demand/place/invitation.


Mathematical Theory of Intelligence
This course is addressed to students who wish to work in the novel interdisciplinary field of Artificial General Intelligence (AGI & UAI), which is building the theoretical foundations and research methodology for the future implementation of self-improving human-level intelligent machines - “thinking machines“ (AI was one of the predecessors of this field, but went off into solving overly specific problems). The course introduces students to the appropriate foundations in futurology and transhumanism, mathematics, algorithms, developmental psychology and neuroscience, in order to finally review some of the current theories and principles of general/universal intelligence from the “school of thought” of researchers such as Jeff Hawkins, Marcus Hutter, Juergen Schmidhuber, Todor Arnaudov and Boris Kazachenko.

Course Program: (as of 11/2010) (Syllabus)

1. What is Universal Artificial Intelligence (UAI, AGI, „Strong AI“, Seed AI). Technological Singularity and the Singularity Institute. Transhumanism. Expected computing power of the human brain. Attempts at literal simulation of the mammalian brain. The "universality paradox" of the brain. Ethical issues related to AGI.

2. Methodological faults in narrow AI and NLP (Natural Language Processing), reasons for their limited success and limited potential. Review of the history of approaches in (narrow) AI and its failures and achievements up to the present day. Concepts from AI that are prospective and still alive in AGI, such as probabilistic algorithms, cognitive architectures, multi-agent systems.

3. Mathematics for UAI/AGI: Complexity and information theory. Probability theory - statistical (empirical) probability. Turing Machine. Chaos theory. Systems theory. Emergent functions and behavior. The Universe as a computer - digital physics. Algorithmic Probability. Kolmogorov Complexity and Minimum Message Length. Occam's Razor.

4. Introduction to Machine Learning. Markov Chains. Hidden Markov Models (HMM). Bayesian Networks. Hierarchical Bayesian Networks and Hierarchical HMM. Principles of the Viterbi and Baum-Welch (Expectation-Maximization) algorithms. Prediction as one of the bases of intelligence.

5. Drives of human behavior - behaviorism. Classical conditioning. Operant conditioning and reinforcement learning as a universal learning method for humans and machines. Why imitation and supervised learning are also required for AGI.

6. Introduction to Developmental Psychology (Child Psychology). Stages in cognitive development according to Piaget, and opposing views. First language acquisition. Nature or Nurture issues and how specific cognitive properties, behavior and functions could emerge from a general system.

7. What is intelligence? Thorough review of Marcus Hutter's and Shane Legg's paper “Universal Intelligence: A Definition of Machine Intelligence”. The universal intelligence of an agent as its capability to choose sequences of actions that yield maximum expected cumulative reward. Types of agents in environments of different complexity.

8. Beauty and Creativity as compression-ratio progress in the work of Juergen Schmidhuber.

9. Brain architecture - functional anatomy of the mammalian and human brain. Triune theory - evolution of the vertebrate brain. Neurotransmitters and hormones and their relations to emotions and behavior. The mini-column hypothesis and functional mapping of the neocortex. Attempts at biologically accurate simulations of the neocortex, such as the Blue Brain project.

10. Evolution in the biological, cybernetic and abstract sense - genetic, epigenetic, memetic - and its application in the design of complex self-organizing systems. Review of Boris Kazachenko's work on meta-evolution as abstraction of a conserved core from its environment, via mediation of impacts & responses by an increasingly differentiated adaptive interface hierarchy. Entropy as equation and increase of order, not increase of chaos.

11. Introduction to the theory of Intelligence by Jeff Hawkins. Modeling the function of human neocortex – the Memory-Prediction Framework and the principles of operation of the Hierarchical Temporal Memory.

12. Introduction to the theory of intelligence by Todor Arnaudov - mind as a hierarchical system of simulators of virtual universes that predict expected sensory inputs at different levels of abstraction. Hierarchical prediction/causation of maximum expected reward, where correctness of prediction/causation is itself rewarding. The Universe as a computer and the trend in the evolution of the Universe (cybernetic evolution). Proposal for a guided functional simulation of the evolution of the vertebrate brain, starting from a general cognitive module that is simpler than a mini-column.

13. Theoretical Methodology of Boris Kazachenko. Generalists and specialists, generality vs novelty seeking. ADHD and ASD. Attention, concentration, distractions and avoiding them. Induction vs deduction.

14. Introduction to the theory of intelligence by Boris Kazachenko. Cognition: hierarchically selective pattern recognition & projection. Scalable learning as hierarchical pattern discovery by comparison-projection operations over ever greater generality, resolution and scope of inputs. Importance of the universal criterion for incremental self-improvement. Comparisons of greater power and resulting derivatives, and iterative meta-syntax expansion as means to increase resolution and generality. Boris Kazachenko's Prize for ideas.

15. Summary of the principles of general intelligence in the works of Jeff Hawkins, Marcus Hutter, Juergen Schmidhuber, Todor Arnaudov and Boris Kazachenko: incremental [hierarchical] accumulation of complexity, compression, prediction, abstraction/generalization from sensory inputs. Evidence and real-life examples for the reliability of these principles.

16. Practice in introspection and generalization. Expanding the scope of cases where a cognitive algorithm is applicable.

17. Exam.
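Prediction recurs throughout the program (topics 4, 12 and 15). As a toy illustration - my own sketch, not part of the course materials - a first-order Markov chain already "predicts" by counting observed transitions:

```python
from collections import Counter, defaultdict

def train_markov(seq):
    """Count first-order transitions in a symbol sequence."""
    model = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        model[a][b] += 1
    return model

def predict(model, symbol):
    """Return the most frequently observed successor, or None if unseen."""
    if symbol not in model:
        return None
    return model[symbol].most_common(1)[0][0]

chain = train_markov("abcabcabd")
print(predict(chain, "a"))  # 'b' - it followed 'a' every time
print(predict(chain, "b"))  # 'c' - observed twice, vs. 'd' once
```

The same counting scheme, extended to longer contexts and full probability distributions instead of an argmax, is the core of the HMM and Bayesian-network topics above.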

Update from 29/11/2011: Comments on the AGI email list of AGIRI:
John G. Rose (Nov 24): Great course programs covering AGI summary/introduction, I like the selection of topics discussed. You might consider opening these up online via streaming/collaboration in the future… John

Ben Goertzel (Nov 28): Looks like a great course you're offering! FYI, on your page you note that our 2009 AGI summer school didn't cover Jeff Hawkins' work... I can't remember if any speaker mentioned Hawkins, but Allan Combs gave some great lectures on neuroscience, which covered hierarchical processing in the visual cortex among other topics ;) That AGI summer school presented a variety of perspectives, it wasn't just about OpenCog and my own views... But it wasn't heavy on perception-centered AGI... Ben

Todor Arnaudov's answers: Thanks, John.

There are materials from the course online (on the blog and on the site); most of the lecture slides and details are only in Bulgarian yet, though. As for collaboration - maybe, as long as I manage to create a team; for the moment I prefer keeping the authorship for myself. ...

Thanks Ben! And thanks for the notes. :)

Ben> That AGI summer school presented a variety of perspectives, it wasn't just about OpenCog and
Ben> my own views ... But it wasn't heavy on perception-centered AGI...

All I knew about the summer school was from the brief web page on your site: Hawkins wasn't mentioned in the program, and it seemed reasonable that he wasn't, as he appeared to be from a "school of thought" distant from the lecturers' - as far as I knew or assumed theirs to be.

Ben> I can't remember if any speaker mentioned Hawkins, but Allan Combs gave some great lectures on
Ben> neuroscience, which covered hierarchical processing in the visual cortex among other topics ;)

That's nice (I had noticed neuroscience in the program), but I still think HTM and the other sensorimotor topics are more general - the memory-prediction framework and similar models are supposed to/aim to explain virtually all kinds of cognitive processes within one integral paradigm, and vision is just an example/special case. From the point of view of schools of thought, the distinction is whether vision is suggested to be an instance of a general framework, or one of the sub-architectures/sub-frameworks of an AGI.

Thursday, March 24, 2011


Universal Artificial Intelligence by M. Hutter, Neocortical Mini-Columns in Reinforcement Learning Context, Creativity & Virtual Worlds - at AGI 2010

A selection of recommended talks from the AGI 2010 conference:

Marcus Hutter - Universal Artificial Intelligence, AGI 2010

Tutorial on Mini-Column Hypothesis in the Context of Neural Mechanisms of Reinforcement Learning - by Randal A. Koene, AGI 2010

Related to M. Hutter's talk - Juergen Schmidhuber notes that reinforcement learning needs many steps and many decision points, and frames compression progress as an abstract form of reward for cognitive processes:

Jurgen Schmidhuber - Artificial Scientists & Artists Based on the Formal Theory of Creativity, AGI 2010

The following one is mostly to get familiar with the state of the art of virtual-world simulations for AGI - it seems pretty primitive... Taking into account also the demos with reinforcement-learning agents in other talks, where the agents play simple games such as Pacman, "Pocman" (a partially observable environment), tic-tac-toe or tank games, or the dogs in Ben Goertzel's demo which learn to carry an object back to their master (the coordinates of a bounding rectangle)...

I've been speculating about different virtual-world systems for seed-AI and other intelligent-agent experiments myself, but that was some 6 years ago; I wished to start formalizing my so-called "Teenage Theory of Mind and Universe" and testing it in such worlds, but it lasted only a short time and remained just ideas because of all the other stuff I had to deal with. However, I am back, and there's certainly a lot of work to be done in this field.

Ben Goertzel - Using Virtual Agents and Physical Robots for AGI Research

Some funny closing remarks by Marcus and Ben:

Keywords: neural, mini-column, universal, artificial, intelligence, AGI, UAI, УИР, agents, physics, simulations, reinforcement, learning, AIXI, virtual, worlds, simulators


Wednesday, March 23, 2011


News: Official AGI Journal and Independent AGI E-zine

I've been considering founding a sort of independent journal for AGI, to give a more "formal" shape to the works of Boris Kazachenko, myself and other independent researchers; it might be called just an "e-zine". [It was "formally declared" a month later here.]

It seems an official Journal has already been created, with a solid Editorial Board.

However, there is a problem - it's not allowed to submit already-published material. Some AGI ideas and suggestions were published or shared online by independent researchers many years before the AGI world conferences started to be organized (2008) - in the late '90s and the 2000s - such as Boris' ones. They are original, yet probably widely unknown to the official rulers of the field (I don't know for sure; I've noticed just a short online dialog between Boris and Ben Goertzel, which happened some 8 years ago).

Edit: Further, there are a lot of AI-niks who have put on new shoes and hold high positions, watching over high "scientific" standards in new publications. Unfortunately, in this field being "scientific" often actually means being a very pedantic quoter, and those who don't have the patience to fill half of the paper with citations are not allowed to enter the "high-life" club of the "scientific" - I do emphasize this is about the field of AI (check out What's Wrong With NLP [and AI]). AI is not really a science in the sense of Physics, Chemistry and Biology.

Monday, March 14, 2011


HyperNEAT in Neural Networks and Ontologies in NLP - Why Do They Seem Promising?

1. HyperNEAT

Excerpts from the site (bold mine):

"In short, HyperNEAT is based on a theory of representation that hypothesizes that a good representation for an artificial neural network should be able to describe its pattern of connectivity compactly.

This kind of description is called an encoding. The encoding in HyperNEAT, called compositional pattern producing networks, is designed to represent patterns with regularities such as symmetry, repetition, and repetition with variation.


The other unique and important facet of HyperNEAT is that it actually sees the geometry of the problem domain. (...) To put it more technically, HyperNEAT computes the connectivity of its neural networks as a function of their geometry.


NEAT stands for NeuroEvolution of Augmenting Topologies. It is a method for evolving artificial neural networks with an evolutionary algorithm. NEAT implements the idea that it is most effective to start evolution with small, simple networks and allow them to become increasingly complex over generations. That way, just as organisms in nature increased in complexity since the first cell, so do neural networks in NEAT. This process of continual elaboration allows finding highly sophisticated and complex neural networks."


That is:

- Compression/minimum message length
- Repetition as a clue for patterns (symmetry is repetition as well)
- Incrementing (small scale to big scale)
- Coordinates (topology in connectivity)
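A minimal sketch of the CPPN idea - my own toy illustration, not actual HyperNEAT code: each connection weight is computed as a function of the coordinates of its two endpoints, so building that function from symmetric primitives yields symmetric connectivity across the substrate:

```python
import math

def cppn_weight(x1, y1, x2, y2):
    """Toy CPPN: weight of the connection (x1,y1)->(x2,y2), composed
    from symmetric primitives (abs, a Gaussian-like bump)."""
    dx, dy = x2 - x1, y2 - y1
    return math.sin(abs(dx)) * math.exp(-(dx * dx + dy * dy))

# Mirrored connections on the substrate get identical weights:
print(cppn_weight(-1.0, 0.0, -2.0, 0.0) == cppn_weight(1.0, 0.0, 2.0, 0.0))  # True
```

Because one small network (the CPPN) generates every weight, the whole connectivity pattern is described compactly - the "encoding" the excerpt refers to.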

2. Ontologies in NLP/Computational Linguistics

Basically, an ontology is a semantic network, i.e. relations between concepts. WordNet is a sort of ontology. The issue is that they are often designed by hand. There are statistical methods as well, but they're missing something I've mentioned many times in the series What's Wrong With NLP.

Why does this happen to be useful?

- Because it resembles a real cognitive hierarchy - it's a "skeleton" hierarchy

Accordingly, such ontologies are prone to be too rigid and unable to self-extend.
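As a toy illustration (mine, not taken from WordNet or any NLP toolkit) of what such a hand-built ontology boils down to - and where the rigidity shows up:

```python
# A hand-designed "is-a" skeleton hierarchy, WordNet-hypernym style.
IS_A = {
    "sparrow": "bird",
    "bird": "animal",
    "dog": "animal",
    "animal": "organism",
}

def hypernyms(concept):
    """Walk the is-a chain upward to ever more general concepts."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

print(hypernyms("sparrow"))  # ['bird', 'animal', 'organism']
print(hypernyms("penguin"))  # [] - unknown until a human edits the table by hand
```

The second query is the rigidity in miniature: nothing in the structure lets a new concept attach itself.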


Wednesday, March 9, 2011


Proposal for Directed/Guided Evolution of a Cognitive Module/Algorithm by Step-by-Step Modification of a Basic One - from Archicortex to Neocortex

This is a direction I realized last year during a discussion on Boris' knols and mentioned there, but I later shortened the comment, because it wasn't the appropriate place for the details.

The idea is to design a cognitive algorithm achieving the properties Boris proposes, while grounding it in, and deriving it from, a supposedly simpler and easier-to-understand cognitive algorithm that existed before in lower species and was slightly modified by evolution.

Keywords: embryology, comparative neurobiology, embryogenesis, vertebrates brain evolution, cognitive module, cognitive algorithm, evolution, archicortex, neocortex, forebrain, hippocampus, mini-column, columnar organization, generalization, recording, prediction, scaling, differentiation, specialization


- Embryogenesis is selective segmentation and differentiation

In general, organisms develop by selective segmentation (separation) and differentiation of cells - a sequence of activations of appropriate genes.

- A small quantity of germ cells divides to form bulky tissues/regions - the initial complexity is much lower than the final one, and there are interdependencies. A simple mathematical example is fractals.

One reason the neocortex may have relatively similar columns all over might be that they are the building blocks of the cognitive algorithm. However, another reason, from another point of view, is that DNA simply doesn't have enough capacity to encode complex explicit circuitry that would make them all specialized by directed growth. Even if it had the capacity in theory, it's questionable whether biological "technology" would be capable of connecting it with the required precision, because organism parts "grow like branches of a tree" ("The Man and The Thinking Machine", T.A., 2001).

Bottom line: there are "leaves" on the tree, and the complexity of the leaves is limited.
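The low-initial/high-final complexity point can be illustrated with a toy rewrite rule (my own sketch, in the spirit of L-systems): a "genome" of one rule unfolds into an exponentially bulky "tissue", as the fractal example suggests:

```python
def grow(axiom, rules, generations):
    """Repeatedly rewrite every symbol by its rule - a tiny 'genome'
    (the rule table) determines a bulky final structure."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

# One rule, tree-like branching: F -> F[F]F
tissue = grow("F", {"F": "F[F]F"}, 4)
print(len(tissue))  # exponential growth from a 1-symbol start and a 5-symbol rule
```

A small change to the rule (one extra bracket, one more generation) changes the final structure drastically - the chaotic sensitivity discussed below.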

- Evolutionary steps in phylogeny are supposed to be very small, and genome development is chaotic in the mathematical sense - a small difference in the initial state (DNA) may lead to an (apparently) vast difference in the final state, the fully developed body.

Apparently big differences in structure may be caused by very small, elegant, functionally purposeful changes inside.

- Besides the formation of a new protein, some of the operations that a mutation may cause or result in could be something like the following:

- Copying a segment (a block) once more, i.e. initiating one more division cycle

- Connecting to another segmentation module (especially in brains)

- The amphibian and reptilian forebrain, their most evolved part - the archicortex - has 3 layers. In comparison, the most evolved (external) part of the general mammalian and human brain - the neocortex - has 6 layers*

- Evolution, especially in the brain, mostly builds "add-ons" and "patches" - slight modification and then multiplication of components(?)

Per the triune brain theory, the new is a layer above and the old is preserved. The new modules are connected back to the old ones and have to coordinate their operation, and the new modules receive projections from the previous ones. I think this also implies that the higher layer should be "smarter" (more complex, with higher memory capacity/processing power) than the lower, allowing more complex behavior/adaptation - otherwise it would just copy the lower layer's results.

The amphibian and reptilian cortex lacks the 6-layer columnar structure of mammals; it's 3-layered (I don't know a lot about its cytoarchitecture yet). But I can't accept that the archicortex lacks some sort of modular design, somewhat similar to the columns; it makes no sense for the archicortex to have been a random jelly of neurons, because even basic behaviors such as finding a lair and running for cover require integration of multimodal information and memory. Nor do I believe that mini-columns appeared from scratch in the higher mammals.

Recently a little support for this speculation appeared - regarding birds, though, a parallel line of evolution:

From "Our brains are more like birds' than we thought":

"...A new study, however, by researchers at the University of California, San Diego School of Medicine finds that a comparable region in the brains of chickens concerned with analyzing auditory inputs is constructed similarly to that of mammals.


But this kind of thinking presented a serious problem for neurobiologists trying to figure out the evolutionary origins of the mammalian cortex, he said. Namely, where did all of that complex circuitry come from and when did it first evolve?

Karten's research supplies the beginnings of an answer: From an ancestor common to both mammals and birds that dates back at least 300 million years.

The new research has contemporary, practical import as well, said Karten. The similarity between mammalian and avian cortices adds support to the utility of birds as suitable animal models in diverse brain studies.

"Studies indicate that the computational microcircuits underlying complex behaviors are common to many vertebrates," Karten said. "This work supports the growing recognition of the stability of circuits during evolution and the role of the genome in producing stable patterns. The question may now shift from the origins of the mammalian cortex to asking about the changes that occur in the final patterning of the cortex during development.

- The function of the Archicortex (hippocampus) in mammals is declarative memory and navigation.

See some of my speculations on: April 24, 2010 - Learned or Innate? Nature or Nurture? Speculations of how a mind can grasp on its own: animate/inanimate objects, face recognition, language...

- formation of long term memory
- navigation
- head direction cells
- spatial view cells
- place cells

At least several, or even all, of these can be generalized. Places and navigation go together. Places are long-term memories of static, immovable, inanimate objects (the agent has no experience of these entities moving).

Navigation, head-direction, spatial-view and place cells are all a set of correlations found between motor and sensory information and long-term memories, which are invoked by the ongoing motor and sensory patterns.

The static immovable inanimate objects (places) change - they translate/rotate etc. - most rapidly in correlation with head direction (position) and head movements.

Navigation and spatial view are derived from all of these.
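A toy sketch of that view (my own illustration, not a neuroscience model): a "place cell" as a stored sensorimotor pattern that fires on a sufficiently close - fuzzy - match with the ongoing input:

```python
def place_cell(stored, current, threshold=0.8):
    """Fire if the ongoing sensorimotor pattern matches the stored
    long-term memory closely enough (fuzzy, not exact, comparison)."""
    matches = sum(1 for s, c in zip(stored, current) if s == c)
    return matches / len(stored) >= threshold

lair = [1, 0, 1, 1, 0, 1, 0, 1]   # remembered pattern for one place
here = [1, 0, 1, 1, 0, 1, 1, 1]   # noisy current input, one bit off
print(place_cell(lair, here))      # True - the place is still recognized
```

The threshold and the bit-vector encoding are, of course, arbitrary choices for the sketch; the point is only that recognition of a place is a correlation match against long-term memory.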

Boris Kazachenko's comment:

(...) Regarding hippocampus, it controls formation of all declarative (not long-term) memories, not just about places. Declarative means the ones that got transferred high enough into association cortices to be consciously accessible.
My personal guess is that hippocampus facilitates such transfer by associating memories with important locations [mapping]. You'll pay a lot more attention to something that happened in your bedroom than to the same thing that happened on the dark side of the moon. I call it "conditioning by spatial association". (...)

// There's a whole topic about hippocampus functions and its competition with the neocortex; for now I plan to treat it separately and link to this.

Reptiles don't have association cortices, though, yet pretty impressive lizard behavior can be seen - such as this curious iguana looking behind the mirror to see where the other one is, and eventually hitting the mirror - see at 5:33.

My guess about the archicortex's contribution:

- The archicortex maybe records exact memories and correlations between memories / compares sequences of sensory patterns for matches

There should be limitations on the length of the sequences; part of this might be caused by size constraints - animals with archicortex only, lacking the higher layers*, just have very small brains. (*The cingulate cortex and neocortex in mammals.)

I'm not an expert in vertebrate embryology yet, but I guess a simple reason why fish and reptiles with big bodies keep very small brains - e.g. a 3.6 m white shark with 35 g of brain - should be that:

- The germ cells that give rise to brain tissue in fish and reptiles divide less, and/or these species lack some hormonal growth mechanisms that species with bigger brains have

Both are a sort of "scaling issue".

Too small a brain has insufficient cognitive resources. On the other hand, maybe these brains also don't scale because they wouldn't have worked better had they been bigger.

- Assuming general intelligence is a capability for ever higher generalization, expressed in a cognitive hierarchy (see J. Hawkins, B. Kazachenko, T. Arnaudov), and the mini-column is assumed to be the building block of this process in the neocortex, there should be a plausible explanation of why and how this module was formed and why this function is successful

My functional explanation is the following:

- There already existed templates of circuits for exact recording, but they didn't scale
- The simplest form of generalization is recording at a lower resolution than the input, plus fuzzy comparison. It's partially inherited from the imprecise biology.
- An updated form of these circuits maybe added more divisions and cascade connections (and this may have started in the cingulate cortex or in higher reptiles as well), which allowed for hierarchical scaling. The neocortex is assumed to have 6 layers, the archicortex has 3. I'm not an expert in cytoarchitecture and should check out the cingulate cortex, but if there are no intermediate stages between 3 and 6, this looks suspiciously like a simple doubling somewhere during division and specialization. Or it could be several doubling operations.
- These new cascade connections allow for a deeper hierarchy, scaling and multi-stage generalization. (Exact recording alone is a lossy "generalization", but without a hierarchy that is deep enough this cannot go far - it only copes with basic noise.)
- There are mice with less than 1 g of brain which of course are much smarter than sharks (not to mention smart birds); however, the advantage in micro-structure (the mini-column) doesn't deny that the mammalian brain scales in size and that there is a correlation between brain size (cognitive resources) and intelligence, even though it's not a straight line. Spindle neurons, directly connecting distant regions of the neocortex, are one of my guesses about why pure size might not be enough; another is the area of the primary cortices, especially the somatosensory one (elephants, dolphins and whales have bigger brains than humans). See Boris' article about spindle neurons and generalization:

- The neocortex does scale, but it's not surprising that it has constructive limitations, just as the archicortex did.
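The "simplest form of generalization" above - recording at lower resolution than the input - can be sketched in a few lines (my own illustration):

```python
def downsample(signal, factor):
    """Record at lower resolution: one averaged value per block of samples."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

a = [1, 2, 1, 2, 9, 8, 9, 8]
b = [2, 1, 2, 1, 8, 9, 8, 9]   # differs in detail, same coarse shape
print(downsample(a, 4) == downsample(b, 4))  # True - both stored as [1.5, 8.5]
```

Two inputs that differ in detail are stored as the same coarse memory - generalization for free, at the cost of lost resolution (hence lossy).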

- Classical and Operant conditioning, dopamine and temporal difference learning

It's maybe quite a global feature of the entire brain, but it has to be considered - classical conditioning evolving into operant conditioning requires predictive processing.
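The connection can be sketched with the textbook TD(0) value update (a standard form, not taken from this post; the alpha and gamma values are arbitrary choices), where the prediction error delta plays the role attributed to dopamine signals:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: delta = r + gamma*V(s') - V(s) is the
    reward-prediction error - the signal dopamine neurons are
    thought to carry."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

V = {"cue": 0.0, "outcome": 0.0, "end": 0.0}
# Repeated pairings: cue -> outcome (no reward yet), then reward delivered.
for _ in range(500):
    td0_update(V, "cue", 0.0, "outcome")
    td0_update(V, "outcome", 1.0, "end")

# The cue's value climbs toward gamma * 1.0: the animal now "expects"
# the reward at the cue - classical conditioning in miniature.
print(round(V["outcome"], 2), round(V["cue"], 2))
```

The prediction error shrinking toward zero as the cue comes to predict the reward is exactly the predictive processing the paragraph above calls for.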


1. Design a basic cognitive algorithm/module, scalable by biologically plausible mechanisms, which allows reaching, say, reptilian behavior.
2. Tune this basic module, then multiply it and connect the copies intentionally to form a mechanism that "stacks" into a hierarchy, generalizes, and scales the global cognitive capacity.