THE SACRED COMPUTER - СВЕЩЕНИЯТ СМЕТАЧ, a.k.a. ARTIFICIAL MIND - A Research Institute for Artificial General Intelligence, Cosmism and Transhumanism, AI, Software, Research, Creativity, Versatility, (...), Being a Universal Man etc. Created by Todor Arnaudov in 2000 as "The Sacred Computer" e-zine. Author of the visionary AGI research strategy in 2003, the world's first university course in Artificial General Intelligence (Plovdiv 2010, 2011) etc.
That's my latest artistically shot funny documentary, edited and composited with my own software.
It's still an in-house prototype and I've not been working on it much, though - the project was completely frozen for 4+ months since the previous movie in the summer; I added a few new functions lately for this one.
Audio: Bulgarian
Original post and info (Bulgarian)
"The debate" - should an AGI be embodied or not. I participated in a discussion on sensorimotor hierarchies a few months ago in the AGI list, but I'm lazy to digest it here right now (yet). I'll reword one POV of the essence which I thought of recently, though. "Body" and "embodiment" is actually about:
- Known fixed basic continuous coordinate spaces - this simplifies search and basic operations and sets one of the basic components of the inputs and their initial "labels" - coordinates. The simplest use of this is that the mind can recognize where the activity is and relate the coordinates of inputs to each other.
- Interactivity - input and output, includes changes of coordinates as smooth as possible in time-space - allows for as general as possible inputs and outputs (actions).
- Modalities - dealing with different types of lowest-level (for the system's sensors) input from the environment, with the "physics" - for constructive reasons and for more general input. Each system should eventually deal with some kind of lowest-level input and output; the more general it is, the more general the intelligence will be.
- Inter-modalities - to relate and find models between them.
- Value-free initial inputs - initially sensory inputs and motor outputs don't have meanings, except when related to:
- Basic reward-system - that's in order to keep the system's integrity in the "physical" world where it exists and to avoid self-destruction.
- Orientation/position of muscles/bones.
- Coordinates in sensory matrices:
  - Visual coordinate on the retina.
  - Coordinate of a tactile input on the body.
  - Pitch of a sound.
  - Particular tastes or smells.
A bit higher level, but still low: spatial coordinates of the body within the environment (probably related to the hippocampus - "place cells", head-orientation cells). These are direct inputs to the brain.
Selection, combination and generalization of them in space and time form higher-level coordinates/addresses. The patterns/concepts at different levels of abstraction in the cognitive hierarchy have higher-level addresses, and higher-level addresses and patterns are harder and slower to access and operate on, and require more context- and attention-switching.
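To make this concrete, here is a minimal toy sketch in Python (my own illustration, not any particular published algorithm): inputs arrive tagged with fixed low-level coordinates, and a higher level forms patterns whose "addresses" are spans derived from those coordinates.

```python
# A toy "level 1": group coordinate-tagged inputs into contiguous spans.
# The span boundaries act as derived, higher-level addresses.

def group_by_proximity(inputs, max_gap=1):
    """inputs: list of (coordinate, value) pairs, sorted by coordinate.
    Returns level-1 'patterns': (start, end, values) spans of adjacent activity."""
    patterns = []
    start = end = None
    values = []
    for coord, val in inputs:
        if start is None:
            start, end, values = coord, coord, [val]
        elif coord - end <= max_gap:      # still the same contiguous span
            end = coord
            values.append(val)
        else:                             # gap: close the span, open a new one
            patterns.append((start, end, values))
            start, end, values = coord, coord, [val]
    if start is not None:
        patterns.append((start, end, values))
    return patterns

# Retina-like 1D input: activity at known, fixed coordinates.
sensors = [(2, 0.9), (3, 0.8), (4, 0.7), (9, 0.5), (10, 0.6)]
print(group_by_proximity(sensors))
# [(2, 4, [0.9, 0.8, 0.7]), (9, 10, [0.5, 0.6])] - the spans (2-4) and (9-10)
# are higher-level addresses derived from the raw coordinates.
```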
The phenomenon is displayed for example in:
1. Using multiple monitors and looking right-left is generally more convenient than switching between two virtual desktops with a key or, worse, menus.
Unless you have to make a visual comparison between the images on the two monitors - then key-switching is more convenient; but for displaying different data, physical coordinates seem better.
2. Having a "to do list" or whatever materials for look up on a paper/notepad next to the monitor/out of the computer is more convenient for mind to switch focus.
Looking aside of the monitor or turning around feels like a "soft context/attention-switch", you don't get as distracted as you would if you have to travel through menus, open an organizer program, check "month, date, priority....". Take for example big paper posters like ones for conferences - you just have to glance it to find what you need. These are different spatial (physical) coordinates of entities as well.
However, if the data from different contexts is on the same screen/physical address, that requires additional abstractions/parameters to mark the different contexts.
3. Chains of intentional high-level operations needed to access abstract addresses impede processing and are more distracting than a single low-level operation, or a chain of them, such as adjusting muscles/orientation or changing physical location.
For example, when you have to open one place, search there, then find a key in order to search another location, etc. (either virtual or physical locations).
Intentional actions involve the longest chains of access through different levels of generalization. If many actions are needed, it takes more time; it's like "crawling" the hierarchy and sweeping the buffers, which of course causes a slower context-switch back.
4. Orderings of patterns associated with flat numbers are easier to remember than ones associated with abstract, unrelated names.
IMO "A" in AGI or "AI" is a redundant word, and what's more important in POV are the other terms.
- General Intelligence (GI)
- can be applied to humans as well
- Self-Improving General Intelligence (SIGI)
- Self-Improving Generally Intelligent Mind (SIGIM)
- Self-Improving Generally Intelligent Machine (SIGIM)
(SIGI & SIGIM are interchangeable; SIGI & GI can be applied to humans as well)
...
- Universal mind (УР), or just "mind" - can be applied both to humans and to machines.
- Self-improving universal mind (СУР) - can be applied both to humans and to machines.
- Self-improving universal machine (СУМ).
- Self-improving universal thinking machine (СУММ).
The meaning "universal" is implied even if the word "universal" is formally omitted.
...
Self-Improving (Самоусъвършенстващ се) means, in the most general case, in the POV of B. K. and mine:
- Improve precision and/or scope of predictions of future inputs based on past inputs.
For a true SIGI the process of self-improving should be as general as possible and as scalable as possible, keeping the trend for as long as possible, starting from raw intrinsically meaningless sensory input.
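A minimal sketch of how that criterion might be checked (my own toy illustration; the predictor and the error numbers are hypothetical):

```python
# Self-improvement, operationalized: the trend of prediction error over
# successive evaluations of the system on incoming inputs should go down.

def prediction_error(predicted, actual):
    """Mean absolute error of a batch of predictions against actual inputs."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def is_improving(error_log):
    """True if recent errors are lower on average than earlier ones
    (assumes an even-length log for the simple half-split)."""
    half = len(error_log) // 2
    return sum(error_log[half:]) / half < sum(error_log[:half]) / half

# Hypothetical error measurements of some predictor over 8 rounds:
errors = [0.9, 0.85, 0.7, 0.6, 0.5, 0.45, 0.4, 0.3]
print(is_improving(errors))  # True - the improvement trend is being kept
```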
There's no "magic" here, unlike some fake AGI researchers assume there should be.
I recommend all of his talks for inspiration and fun. He's a hybrid of an ingenious physicist and a very talented actor and comedian - he reminds me of Robert De Niro and Al Pacino; it's something about their common New York accent. Richard Feynman on the Nobel: "It's a pain in the neck."
Richard Feynman - The Character of Physical Law - Part 7 Seeking New Laws (full version)
Feynman on Scientific Method. (Related closely to General Intelligence principles in AGI)
Richard Feynman - The Distinction of Past and Future.
BTW, my POV on irreversibility - it's because of the bad communication between particles. The smallest particles/details, whatever their nature, are correlated too much and are harder to control by an external structure than to leave doing what's already "encoded". It's hard to cause an action without causing side-effects which are uncontrollable or undesirable, but also unavoidable - that's related to "quantum uncertainty".
The smallest particles are what drives causation, and they are supposed to have the smallest equivalents of computing power and memory compared to bigger particles; that's why their "behavior" is the simplest, has the shortest scope and seems "chaotic".
Feynman 'Fun to Imagine' 1: Jiggling Atoms
... All of the series ...
Feynman 'Fun to Imagine' 4: Magnets (and 'Why?' questions...)
Richard Feynman - The Character of Physical Law - Part 6 Probability and Uncertainty (full version)
Thanks to Alexander for sharing a link to some of Feynman's talks.
Welcome to the updated web site of "Twenkid Research" (Alpha) and to the AGI Forum of the Independent Scalable AGI Society, which has opened for participants, but is "Alpha" as well - it needs to be filled with some more pinned topics, information and links in the sections; I have prepared plenty of them.
Update: It doesn't work anymore. It may be reopened in the future. It seems that some DNS records were hijacked and now point to a forum that doesn't have anything to do with thinking machines...
Enjoy
...
Colleagues, welcome to the forum on Universal Artificial Mind (universal artificial intelligence, the "strong" direction in AI, thinking machines) of the Society of Independent Researchers, created by Todor Arnaudov - Tosh. It includes subforums in numerous related fields and a subforum in Bulgarian.
Recently I realized how the conservative culture in Academia has emerged - somebody got pissed off by the bullshit in open discussions with no moderation, bad selection and low standards for the participants. Unfortunately this happens on a regular basis in the AGIRI AGI list. Some people have noticed it, but they have given up on doing anything about it.
Sorry! My solution is a moderated forum with higher standards and requirements, starting with invitations to participants.
Technically the forum is already created, but it's still hidden and under-construction/testing. Participants will be invited to join a bit later.
... Update, 28/10/2011: The forum has opened for participants (Alpha Version)
Update: It doesn't work anymore. It may be reopened in the future. It seems that some DNS records were hijacked and now point to a forum that doesn't have anything to do with thinking machines... :))
Regarding the Independent Scalable Artificial General Intelligence Society:
1. Higher-to-Lower level feedback is less efficient than Lower-To-Higher level feed-forward generalization.
Example: Image/Object Recognition vs Image/Object Rendering, performed by humans.
Every healthy child can recognize human faces, understand emotions and react accordingly, but it's not that easy when intentions are involved - feedback/output to act on the environment. Realistic drawing or painting of faces, or good acting in a film, requires time, talent and practice.*
(*I guess I may get the criticism that regarding painting/drawing it's just "precision vs scope", and that some autistic people are great at copying inputs. However, drawing from memory and creative drawing of imaginary subjects require both scope and precision, and the capability to keep a consistent mapping between all the levels, from the highest to the lowest.)
2. Generalization out of specifics (rich sensory input) is simpler/more efficient than specification down from generalizations ("decompression")
Generalization is selective lossy compression. Decompression requires reconstruction of the lost data, or requires that the data is preserved, or that the knowledge of how to extract it from external memory and incorporate it is kept. Here lies the point of "precision vs depth"; however, I'll emphasize that sometimes a lot of both is needed.
Besides, I suspect depth has more severe limitations than precision at the lower levels. The cognitive hierarchy can add generalization levels at the expense of a wider scope of input data and/or lower detail, but both are problematic - if the scope is extended at the expense of detail (to keep computational complexity under control), then generalization levels will run out shortly, because there won't be meaningful details remaining. On the other hand, if the scope is extended with a more modest loss of detail, then learning will go computationally out of control.
We don't know how far a machine could go in generalization levels with more computing power, but I think the brain is very limited.
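To make the asymmetry concrete, a toy sketch in Python/NumPy (my own illustration): generalizing upward by averaging is trivial and lossy, while going back down has to invent the discarded detail.

```python
import numpy as np

signal = np.array([1., 2., 2., 3., 8., 9., 9., 10.])

# Feed-forward: generalize by averaging pairs (selective lossy compression).
general = signal.reshape(-1, 2).mean(axis=1)   # [1.5  2.5  8.5  9.5]

# Feedback: "decompress" by repeating the generalized values back downward.
reconstructed = np.repeat(general, 2)

print(np.abs(signal - reconstructed).sum())    # 4.0 - detail lost by the lossy step
# Recovering that detail needs either the preserved data itself, or the
# knowledge of how to regenerate it - exactly the point above.
```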
3. Higher-level sensory inputs have less impact on the cognitive hierarchy, because feedback is less efficient than feed-forward.
That's the reason why captions like "Smoking kills" or "Speeding kills" usually fail to make people stop smoking or speeding, and why first-hand experience - seeing with your own eyes, hearing with your own ears and touching with your own fingers - has a more dramatic effect in transmitting any message than the relayed experience of others.
Seeing your smoker friend dying of lung cancer, or seeing your friend smashed in his car because he drove drunk - that's a pretty different sensory input, and not only because of your personal involvement with the sufferers.
Text is too abstract and distant - the low-level physical representation of text is meaningless; it serves only to encode a higher-level representation - that's where the input starts having a meaningful impact on the brain, and the message has to go down the hierarchy to have an actual impact on behavior.
Another example is acting and film. Film as a medium demands motion pictures and rich sound - the physics of the action at the lowest level possible, with a lot of detail/high resolution of the input. If there's too much dialogue and too many self-explanations and declarations by the characters, especially of obvious things, then the brain is fed with high-level generalized input; it can't activate the lower levels from the top down, and they stay idle or "bored". On the other hand, if the lowest sensory input is rich, the brain can induce generalizations upward and engage the entire hierarchy. (Besides the effects of the balance of unpredictability etc. - see Schmidhuber's works on Creativity.)
4. Rationalization is playing with high-level patterns to explain lower-level patterns which the higher level cannot access - because feedback is worse than feed-forward, or because the lower-level patterns are unknown.
Higher levels in the cognitive hierarchy are derivatives of the lower levels - the lower levels induce the higher ones, not vice versa. In a sense (a bit simplified), higher-level patterns in the cognitive hierarchy are delayed expressions of the lower-level ones. However, once a higher level emerges out of the stable regularities in the lower level, it starts to mess with the lower level's business - adjusting lower-level input; selecting data to keep attention on; adjusting coordinates, resolution, location - and the higher level does so in order to maximize its own "success": match, prediction, reward.
Higher levels usually cannot explain and trace back how they were created, what their lower-level patterns are and what the lower-level drives are.
A similar situation holds in bad philosophy and other fields* where lower-level conceptualization, patterns and input are needed for conceptual progress, but practitioners deny it and keep blah-blah-ing with concepts that are too high-level, too general, too unrelated to the problem they're trying to solve.
That's also one of the reasons for researchers such as Boris Kazachenko and myself to suggest a bottom-up approach - it allows for the maximum possible abstraction, while keeping the maximum possible resolution and keeping the traces of the abstraction.
*Search the blog for "What's Wrong with Natural Language Processing".
5. There are Two Reward Systems which are Messed Up
There's another issue - two reward systems run in parallel in the brain: a cognitive one and a physical one. The cognitive system aims at maximizing the predicted match of pure data, while the physical system aims at maximizing a desired match - input sensations must match hardwired target sensations loaded with value: food, warmth, water, sex etc. The physical one is way more primitive and crude; it relies a lot on the more primitive brain areas and on dopamine and other neurotransmitters/neuromodulators/hormones, while the cognitive system is based on finer processing, even though the former participate as well. Both systems interact and overlap; the physical system can override the cognitive one and make it a slave - for example, the higher-level cognition of drug addicts is a slave to the primitive need to take the drug. Generally these systems are messed up and tangled, so in a real behavioral record it's hard to trace where one starts and the other ends.
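A minimal sketch of the two objectives (my own toy formalization of the paragraph above, not a model from the literature; the override weight is hypothetical):

```python
def cognitive_reward(predicted, actual):
    """Reward correct prediction of the input, whatever the input is."""
    return -abs(predicted - actual)

def physical_reward(actual, desired):
    """Reward the input matching a hardwired target (food, warmth...)."""
    return -abs(actual - desired)

def total_reward(predicted, actual, desired, override=0.8):
    # `override` models the crude physical system dominating and enslaving
    # the cognitive one (e.g. addiction); 0.8 is an arbitrary illustration.
    return ((1 - override) * cognitive_reward(predicted, actual)
            + override * physical_reward(actual, desired))

print(total_reward(predicted=0.9, actual=1.0, desired=0.0))  # -0.82
# Cognition predicted well (small penalty), but the physical system dislikes
# the state, so the total is dominated by the physical mismatch.
```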
6. Rationalization is also explaining physical motivation with cognitive means
Ask somebody why she loves her boyfriend. She's likely to tell you "because he's smart, funny, kind, blah, blah, and because he's soooo blah!", while the real reason is much simpler - the way he makes her feel. That's why she loves him, where "to love" also has a more basic meaning than the societal one - it's about quantities of neurotransmitters and about the "imprinting" of addiction cycles of generating such neurotransmitters through physical-cognitive conditioning, inter-association.
It's true that the abstract reasons do play a certain role in creating the inter-associations between physical and cognitive sensations - everyone has some preferences and favorites - however this can be reduced to:
- I love him because he's the type I wanted him to be!
- I love him, because he's my perfect match!
- He's the best match I could find so far...
This is a match between desired and input, which is the type of match of the physical reward system - apparently this selection is driven by the physical system, overriding the cognitive.
That's why I think the abstract reasons of "smart, funny..." are "rationalizations" - but rationalization is not strange at all. Everybody would say "yeah, socially acceptable explanations", but that's a cheap answer.
There's one more appropriate reason for rationalization - it's the cognitive system which is asked the question (it's asked in a natural language); the highest levels in the cognitive hierarchy rule this area, and yes - society has taught this higher cognitive system how it should act in such situations.
If you ask the question in a lower-level language, the answer is different - that's her body language and her behavior when she's alone with her boyfriend, when they're kissing and making love. The answers are also in the amounts of oxytocin, dopamine and other chemicals in her brain, whose release is conditioned by the particular cognitive patterns initially generated by perceiving her beloved one.
In general, for love and attraction it's true by the definition of "emotions" that the physical reward system kicks in. One may be "just a friend" with someone because he's "smart, funny, blah-blah", but even then, if one is a human being with an intact brain (not a sociopath/psychopath), his friendships will be mixed up with the physical reward system - emotions, the crude emission of certain chemicals and activations of primitive brain areas, which are associated/recorded/conditioned with cognitive patterns, and both are intertwined.
Purely cognitive "friendship" is business and if it's such, it's not really a friendship.
*Higher/Lower-level drives - passionate feelings demand "lower" drives, where "lower" in this context has a different meaning than lower in the cognitive hierarchy. The meaning here is: driven by evolutionarily and physically "lower" brain modules, which map to areas other than the neocortex, archicortex (hippocampus) and thalamus. The physical reward system is not in the same hierarchy as the cognitive system regarding levels and tracing back; the cognitive and physical reward systems are entangled, and the physical system can manipulate and "short-circuit" all levels of the cognitive hierarchy.
Continues... - On the apparent inconsistency of the goals of a system with cognitive hierarchy and More on rationalization and the confusions of the higher levels in the cognitive hierarchy.
When the news was positive, all people had more activity in the brain's frontal lobes, which are associated with processing errors. With negative information, the most optimistic people had the least activity in the frontal lobes, while the least optimistic had the most. It suggests the brain is picking and choosing which evidence to listen to.
My interpretation: the frontal lobe is supposed to be the highest level of processing, which includes the tip of the iceberg of conscious (reflectively accessible) data; that allows one to see that it's about "errors". Lower levels are also about "errors". However, if the errors (mispredictions) at a lower level are too high (the perceived differs from the expected), these perceptions may not be elevated upward - they're "meaningless" at the conscious level.
- The most optimistic subjects expect positive data, therefore negative data is misprediction/mistake and it's cut before reaching highest levels.
- The least optimistic ones expect negative data, and such evidence is a match to the prediction, so the evidence is processed at the highest level. I suspect this may imply that the data is cut prior to the frontal lobes: the top demands expected outcomes, and it has developed to get them from the machinery before it.
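A toy sketch of this interpretation (my own illustration, not the study's model): evidence that mismatches the lower-level expectation too much is cut before it is elevated to the top.

```python
def elevate(evidence, expected, tolerance=0.5):
    """Pass evidence up only if it roughly matches the lower-level expectation."""
    mismatch = abs(evidence - expected)
    return evidence if mismatch <= tolerance else None  # cut as "meaningless"

optimist_expectation = +1.0   # expects positive news
pessimist_expectation = -1.0  # expects negative news
negative_news = -1.0

print(elevate(negative_news, optimist_expectation))   # None: cut early,
                                                      # little frontal activity
print(elevate(negative_news, pessimist_expectation))  # -1.0: a match, so it is
                                                      # processed at the top
```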
This research reminds me of Natalie Portman's famous paper (sure, famous for her fans): http://mindhacks.com/2007/06/18/natalie-portman-cognitive-neuroscientist/
- Infants who understand object permanence (searching for a toy that is covered with a cloth in front of their eyes) are measured to have increased activity in their frontal lobes after the toy is covered, while the ones who don't understand the object is still there (and don't search for it) display a decrease in frontal-lobe activity.
This is a crude measure, but I guess: the highest levels are expecting that the object should be under the cloth, so they are working actively - sending feedback down to the lower levels to find out where the object is and to adjust the input so that they get confirmation of that higher-level hypothesis. The frontal lobe of infants prior to that understanding is less activated, perhaps because it doesn't have predictions to compare with the lower level. Lower-level processing dominates, and the patterns in the frontal lobe are too noisy or lacking.
"Dr Chris Chambers, neuroscientist from the University of Cardiff, said: "It's very cool, a very elegant piece of work and fascinating.
"For me, this work highlights something that is becoming increasingly apparent in neuroscience, that a major part of brain function in decision-making is the testing of predictions against reality - in essence all people are 'scientists'."
Good morning, you finally noticed...
To be continued... --> On Rationalization and Confusions Caused by High Level Generalizations and the Feedforward-Feedback Imbalance in Brain and Generalization Hierarchies
Impressive; I need something like Pranav's devices for myself.
Research Assistant
That reminds me of a project of mine called... well, it's a secret. :P It was/is supposed to integrate a lot of tools in an intelligent way and to boost your performance, saving you all kinds of labor-intensive tasks, and it was supposed to monitor your actions and behavior.
I thought of developing a part of it as an MS thesis in early 2008 - e.g. marking important paragraphs and pages of a book/paper with a simple gesture while you're reading (drawing a line with your finger in the margin of the page), then storing the selected parts as images on the computer. OCR would assist in faster classification and search, but it was expected not to be robust and was not critical; even without it, this is very useful for collecting excerpts/citations from philosophy books, newspapers and magazines, promotional booklets etc., in order to perform "batch processing" later without switching your attention. It saves time and distractions.
The best would be a head-mounted camera, but a fixed camera seemed more realistic for the experiment: the book/paper would be set on a table, the video recorded at 640x480, 15 or 30 fps, and then processed off-line.
One of the technical issues was resolution - I got crisp 640x480x30 video recording on a mobile camera at the time, and it had the optical resolution to capture entire regular book pages, but at this resolution the camera has to be close to the paper and properly oriented, and it's also not practical to record all of the frames. (Sure, there's one simple workaround - just photograph the paragraphs manually, with your finger on the proper line. :X)
It seemed "obviously implementable", these gestures would be quite simple computer vision task, however I didn't started an implementation of the project. My supervisor suggested me it would be too much of an "engineering" project, while for a thesis it'd be better to be more "scientific" and propose some contributions, not that most of my colleagues were very far from being scientific.
I eventually ended up with a thesis on my microformant hybrid Bulgarian text-to-speech synthesizer, the essential part of which I had developed in ~5 weeks as a first-year student. Of course, I added projected and proposed functions, ideas for improvements and a totally different design which is supposed to learn to speak like a baby, but I didn't have the time to implement the improvements - I was too busy with my job then.
Text-to-Speech
Indeed, Text-to-Speech without speech recognition is a useful way to save some of the time spent stuck in the chair in front of the computer, by having it read your texts aloud; I use it myself.
My "baby synthesizer" is not a junked project either, but the best way to implement it is within a robust general AGI system, which is yet to be developed.
Recently I discovered that I've shared the beliefs of this philosophical direction since my teenage works (I found out what it's called).
One introductory term - the "paradoxical universality of the brain": the brain is specialized in being universal.
The estimations of the brain's enormous computing power and memory are arbitrary/not really practical, because brain power is not comparable to computer power. Computers are in fact more flexible and more "universal" than the brain, and the ultimate cognitive system is a brain (a generally intelligent agent/core, AGI) + a computer and external memory - neither a brain alone, nor a computer alone.
However, you can use a computer to simulate a brain if you have a precise model and enough speed, but you can't use a brain to simulate a computer even if you have a precise model - most brains are incapable of simulating on their own even the first mechanical computers (say, the ones that multiplied), slide rules and artillery tables with precomputed values, or even paper for reliably storing phone numbers or a line of text.
These external "tools" are in fact extensions of our brain - they are just accessed differently than internal memory: the access passes through motion (translation), which is just the lowest level of output that the system produces - output of data to physical reality and translation of data within physical reality. At the lowest level, output has to be decoded down to the "machine code of the Universe", the "language of the particles" (my terms).
That's the same as the memory hierarchy in computers - from the fastest registers, through the different levels of caches, RAM, hard disks and the Internet, to external media which have to be physically moved in order to become accessible.
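For a concrete sense of the analogy, some order-of-magnitude access latencies (well-known ballpark figures, not measurements of any specific machine):

```python
# Approximate access latency per level of the "extended memory" hierarchy.
latency_seconds = {
    "CPU register":   1e-9,   # ~1 ns
    "L1/L2 cache":    4e-9,
    "RAM":            1e-7,   # ~100 ns
    "hard disk":      1e-2,   # ~10 ms
    "Internet fetch": 1e-1,   # ~100 ms
    "shelf/archive":  60.0,   # minutes: must be physically moved to access
}
for medium, s in latency_seconds.items():
    print(f"{medium:>14}: {s:.0e} s")
```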
Culture starts from a stable yet alterable, intentionally structured environment which allows artifacts/memory to survive longer than a brain and outside of a brain - including the overlapping lives of the different generations, which allows language to be transmitted.
A human brain alone - without the external tools, without an alterable, structured, stable environment, without intelligent beings around to teach it things and especially to teach it a sophisticated language - wouldn't go much further than a chimpanzee's brain in similar conditions.
I was heavily involved in working on my artistic career some months ago, but recently I've been turning the research thread back on - I joined the AGI mailing list to notify the community there about the two AGI university courses taught in Plovdiv, and started to spend some time on AGI studies and reflection.
A comment of mine on narrow AI-niks, under a Ben Goertzel article on the "AI Nanny".
Matthew: "Ben, I respect the opinion of AGI experts, but because I have no programming expertise, I find it hard to ignore the plurality of 'narrow' AI experts who believe AGI is not coming for a century, or maybe never."
Todor: AGI is not about programming (contrary to what the AI-niks think) - it's about understanding how the mind works. Programming will be the easiest part, once we do understand intelligence.
Plurality is not a strong argument - the majority of a population is supposed to hold the minority of the intelligence, and people who stick to the mainstream often are not "right"; they're just obedient and narrow-viewed, have no vision of their own, and no guts to do anything radical on their own.
Regarding understanding - the boring narrow AI-niks are not trying to understand intelligence (they don't believe they could); rather, they're coding and engineering on problems which seem intelligent but are obviously quite solvable and flat, and just need some "tinkering" and testing to get done. Such as self-driving cars etc., which are among the best achievements of the top narrow AI-niks.
What do you do? - Everything... But the ladies will take me for a braggart if I start listing things, so I'll tell you later about some of the things that would perhaps most... excite them.
How so? Aren't you a programmer? - Didn't you know? Not that I don't create software, or that my latest project didn't pass 30 thousand lines of C++ code, but I do it... out of dire necessity... I need it for (...)
(........)
What do you think women look for in a man? - Usually, boring "wrong" things; at least according to the right men. Let's be frank: what the majority of women really look for has no (...)
I think this one is very obvious, I'll share my view on it:
We felt like this as children, when we saw the bigger kids and the adults from down below and had to bend our necks to look up into their eyes. Bigger and older ones made us feel inferior - and we really were, back then. (*See also below)
Similarly, we felt superior to the smaller ones, and that's why looking from a higher angle feels like being "superior" to the subject in the picture.
Both apply to pictures displaying people, particularly faces, and of course the particular emotions on the faces reinforce the feeling - like a low angle with an insidiously laughing face, or a high angle with a face blinking or seeming confused.
I guess the stature stereotype is of the same origin - shorter people are associated with the "smaller ones" - children - which is associated with one's own memories of being weaker and inferior to the taller ones. I guess that's why, in general, short actors are more rarely protagonists, especially superheroes, than taller ones. (Another probable reason is statistics - tall, handsome etc. is rarer ("special") than ordinary.)
By the way, it is not "spiritual", "collective unconsciousness", "archetype" etc. like other common phenomena such as similar myths around the world. It's just similar conditioning, repeating experiences at different locations in similar/repeating circumstances, processed by similar generalization "processors".
* I'm not an expert in dog behavior, but I think this is related to the behavior of dogs. As little ones, they had to bend their necks up to see their mother's eyes, and this physically determined behavior is conditioned as a symbol of displaying inferiority or/and loyalty. Eventually dogs demonstrate similar body language towards their master/leader/provider (partially also for physical reasons - their heads are lower than ours, but not only - dogs try to look you in the eyes and come closer to your head).
More generally, I guess one of the discriminating skills of "social" animals is to be smart enough to understand and recognize those "providers" or "leaders", and to understand and "think" about inferiority and superiority of other agents, as grading expected benefits and dangers coming from them, which ultimately goes toward fight or flight decisions/reactions. This skill allow social animals to behave appropriately to maximize benefits according to other agents (whatever benefit is for the particular agents and circumstances).
"Give it to me!", or "Love's no Friend of Mine" - hard rock & punk musical, with music by Todor "Tosh/Twenkid" Arnaudov(*see below) and the great Bulgarian punk band "Kontrol". Starring Tosh and Anya Chuleva, directed by Tosh. *Played, arranged and partially created - parts from a jam session starting from Rainbow's "Love's no Friend" theme, transitions and progressions on guitar, bass and keyboard; rhythm fillings; a "Van Halen"'s cover, Kontrol's song improvised cover solo, some Offsrping's riffs, and film music arrangement of a transition between themes. Edited and composited with Twenkid FX Studio (prototype).
Well, I did the essential translation to English of that part of my old "capital works" an year ago, but gave up publishing it back then. In the mean time I prepared/translated a lecture for my second AGI course, which is more "disciplined" in structure than the raw writings.
Todor Arnaudov's [Teenage] Theory of Universe and Mind, 2001-2004
In the context of my teenage theory, I'd suggest checking out Valentin Turchin's views in his book "The Phenomenon of Science" (it's online) - especially his theory of hierarchies and the Meta-System Transition; in general the book is a very good read.
It seems possible, and funny, that the AI grandfathers don't even know about their grandsons (see the blog) and their theoretical progress in AGI, and that many of them consider "AI" a retired (retarded) term.
I'd question their methodology and the claim that pocket calculators have such a share, but this one is the best:
"But Hilbert offers a humbling comparison. Despite our gargantuan digital growth, the DNA in a single human body still stores far more information - and a single "human brain computes far more calculations - than all the technology on Earth.
Brain computing power, in terms of computer-style calculations, is much less - << 1 instruction/sec or FLOPS of any CPU... Can you please multiply 643.576 * 256.94? With a CPU you can granulate the power and do anything at any time - this is impossible with a brain; it has virtual, specialized, fake "ZILLIFLOPS", from which it sometimes can't get even one FLOP on demand.
DNA information can only be interpreted as commands for building proteins; it can be read only by specific mechanisms, at specific moments, in specific tiny little locations. Access can't be random - it is very much tied to producing particular living cells - and you can't use it as RAM or disk. (Yeah, there is "junk DNA", but it's not randomly accessible either.)
Digital data and media are different universes...
...
"Overload" is repetitions and lots of noise
A big portion of any kind of "information overload" of our time is caused by mass-produced copies of all kinds of media. There are thousands (or tens or hundreds of thousands?) of television programs all around the world, and a big portion of what they transmit is the same. The same "leading world news" from LA to Tokyo. The same cinema blockbusters, the same music on the radio stations and on the computers, the same operating system and software on billions of the same models of CPUs and all kinds of digital devices. And new versions are most of the time updates of the old ones, etc.
Sure, there are local variations, but the information needed to encode them is supposed to be astronomically less, compared to a dull measure of the capacity for storing repetitions. (And there are local repetitions as well - national news circulating through national media, local news through multiple local media... students writing projects on the same topics, etc.)
Another thing is that 25 GB on a Blu-ray disc at 1920x1080 of the same movie won't leave much more information in the memory of viewers than a 700 MB CD with noisy, grainy 640x360 video - you'll just perceive the first image as "more realistic, crisp, clear..." at the moment of perception, or you could notice some wrinkles, texture details or captions which are probably of little importance for the message of the work.
...
There is a lot of unique information recorded as home/amateur photos and videos, but a big portion of it is duplicated as well - 100 photos of the same event from close angles (little difference), tourists' photos at the same sites - "me in Paris". And a big portion of the growth of digital data comes from cheap cameras with fake, bombastic resolutions of 15 Mpix or so, producing noisy and blurry images - more to justify the growth in portable memory capacity and to delude buyers than to deliver such quality. If the resolutions hadn't grown, users wouldn't need anything more than 1-2 GB memory cards.
As for "overload" - there might be 1,000,000 porn sites or videos of the thing you're searching for, but you won't check them all. You'd usually stop the search after the first few sites, because they already satisfied your demand - namely because the others are the same, as far as your query is concerned.
Indeed, the web is full of repetitions - web forums and blogs on the same topics, web stores with the same reviews and the same goods, media with the same news.
People around the world have the same classes of interests/topics (such as food/health, sex, celebrities, movies, sport...) and are supposed to have similar or the same views on them, where the set of possible views is limited and the views most "important" to the most users are repeated the most.
Think faster, focus better and remember more - Rewiring our brain to stay younger...
Some bookmarks/notes:
46 min+ -- the aging brain: older, noisier...
46:30 - 47:30 - the older brain doesn't keep details, relies on abstractions, but the details are what can be remembered.
50:30 - cognitive test: recognize a sound, then a following one; in one's 20s - 20 times/sec, in one's 80s - 7 times/sec; 6-8 samples per syllable --> 2 samples per syllable //51:00
Dr. Merzenich claims the old brain can be "rejuvenated" to some extent and is applying a software system for regular practice. The results on such tests can be improved significantly by practice; e.g. 70-80-year-olds can achieve the results of people in their 30s.
There are some interesting postings on the early critical period of brain development, aging brain, neuroplasticity, autism and other topics in Dr. Michael Merzenich's blog: http://onthebrain.com/
IMHO the articles are a bit chatty, but you can filter out the useful information.
Declare it "founded". It doesn't need to be formal etc.; it's something that exists and has existed anyway, and this blog has been one of its tribunes, but coining a name may assist in locating it and taking it more seriously. Whatever. :)
"Founding members" are: Todor Arnaudov and X. (As of March 2012, X's membership was disciplinarily suspended for prolonged abuse of other members and a sociopathic "god-complex" attitude.) The best students attending my AGI courses are also part of this informal circle.
I've just started to publish English translations of some of the lectures from the courses which so far existed only in Bulgarian, starting with:
- Slides on my foundational ideas, the "[Teenage] Theory of Universe and Mind [Intelligence]" from my youth, which led me to recognize the field.
- The lecture on Hutter's and Legg's paper on Machine Intelligence.
- A simple presentation on the basic principles of general intelligence.
Oct 2011: Welcome to discuss on the forum of the society:
Update: It doesn't work anymore. May be opened in the future. It seems that some DNS were hacked and point to a forum that doesn't have anything to do with thinking machines...
Well, congratulations on the topics and on those guys reforming, but the term is ridiculous. It's obviously coined by orthodox "scientific" AI-niks who are reforming while keeping their scientific roots (meaning citations, publications in "high-impact journals" etc.).
I'd stick to the terms AGI and UAI; likewise in Bulgarian I prefer "УИР" (UIR - Universalen Izkustven Razum), which encompasses both, instead of "УИИ" (UII - "Intelekt"). One simple practical reason is that "УИИ" (UII) sounds too close to "хуй" (HUY) - the Bulgarian equivalent of the English word starting with "dic*".
Come on!... :)
From the site:
"The importance of Artificial Intelligence in Portugal is visible by the number of PhDs (over 100), the sheer number of researchers ..."
Good... It's not the first time I'm saying this - like, for example, about the "MEXICA" "creative" framework.
Sorry, but what the h* are all these PhDs doing? Hundreds, thousands, tens of thousands of researchers are supposed to be working on "GAI" projects full-time. Thousands of person-months of work. Results? Real things?
Real things and progress actually come from non-PhDs, such as Jeff Hawkins - formally an electrical engineer and neuroscientist - or from mathematics and physics PhDs such as M. Hutter, J. Schmidhuber and B. Goertzel.
This reminds me that recently I found a video lecture by one of the authors of the notorious "classic" in AI, "AI: A Modern Approach" - surprisingly, mainly on NLP, collocations and the like, pretending to be new directions in AI or something...
A very smart, funny and high-status guy, obviously, but I never had the patience to give his book a try; I just knew it's a "classic" - partially because, a long time ago, I started to distance myself from "AI", insisting that if all that junk is called "AI", the least one could do is coin a non-polluted term and stay away.
Now I glanced at the contents of this "classic" and saw I wasn't mistaken in not reading it... Sorry, but I didn't see anything "modern" in the 1000 pages of assorted high-level, non-scaling, really classical methods (except a few chapters on agents and machine learning).
Dear respected high-status scientists and directors - give us results, please. Pretentious, pedantically written PhD theses and a large number of publications or citations are meaningful status symbols per se mostly in the pretentious environment of scientific conferences and in non-specialist surroundings, where the only means for others to recognize how big a researcher you are is to count your publications and citations, or to check how high you are on the scientific status ladder - PhD student, PhD, Post-Doc... Check the links above to understand what's wrong with that.
A live shot from the first session on the set, on Tuesday, a day before the recording of the first show. This moment was aired at the end of the prime-time news on Bulgarian National Television.
A shot from the show: Rossen explains how Kevin Warwick connects biological neurons to an artificial cybernetic system.
The letters at the top: (C,B) --> (L, R, m, M) --> (...) mean: (Coordinate, Brightness) --> (Length of recurrence, compression Ratio, miss, Match)... Terms from some of the basics of Boris' definition of the cognitive algorithm.
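A minimal sketch of those first-level terms as I understand Boris' definitions (my own illustration, not his reference implementation): inputs are (Coordinate, Brightness) pairs; comparing consecutive inputs yields a miss (difference) and a Match (overlap), from which same-sign spans and their summaries are later formed.

```python
def compare_line(brightness):
    """Compare each pixel of a 1D scan line to the previous one."""
    results = []
    for prev, cur in zip(brightness, brightness[1:]):
        miss = cur - prev        # difference between the compared inputs
        match = min(prev, cur)   # overlap: the part of the signal that repeats
        results.append((miss, match))
    return results

line = [10, 12, 13, 13, 7, 6]    # Brightness per Coordinate
for coord, (miss, match) in enumerate(compare_line(line), start=1):
    print(f"C={coord}: m={miss:+d}, M={match}")
```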
There were several spots within the show: one covering the Bulgaria-based Blue Gene (12 TFLOPS), which when first installed was in the Top 100 of TOP500.org and had been the fastest machine between Munich and Moscow. There were a few about the FameLab Science Communication Competition, which is now happening for the 5th year in Bulgaria and is the origin of this show and of most of us, the participants who shape the discussions.
If you understand Bulgarian, you can watch the show here; it was aired on Saturday, 16/4.
There was a cool promo clip running very often on BNT - it was the most promoted new show:
Regarding the AI/AGI part, in brief:
Special guest was Kevin Warwick from The University of Reading, UK. Maybe he doesn't need an introduction, but let's say he calls himself "the first cyborg".
Four young scientists, 3 of them with a background in FameLab:
- Rossen Ugrinov, winner of the first FameLab contest in Bulgaria in 2007. He's a biophysicist involved in monitoring clinical trials; his point was about the risks and precautions.
- Nuri Ismail - part of the team developing one of the leading OCR software packages (free-onlince-ocr.com). He presented some narrow AI achievements - AI has been here for many years; it's not as ambitious as in the movies, but there are many applications we're already using. He was second in FameLab 2009.
- Svetlin Penkov - a very bright second-year student of K. Warwick, studying Robotics in Reading. He talked about small robo-sumo robots, neural networks, and the risks of brain implants and "super humans" vs "under-developed humans" with no implants.
- Todor Arnaudov - I was supposed to talk about AGI, Universal AI. I joined the discussion at the end with a short explanation of Seed AI and self-improving systems, as a remark to a note made by one of the other guests - Lyuben Dilov Jr., a writer, publicist and son of the prominent Bulgarian SF writer - that current AI can't ask questions.
I explained that this is what we aim for - a system that learns like a baby: it's blind and deaf and lacks motor coordination; then it begins to recognize simple objects and sounds; then the face of its "mother", then other faces; then it "crows", says its first words, its first basic sentences... And after that, it will start asking questions, reading etc.
Words: Телевизия, Television, AGI, Artificial Intelligence, Brain-computer interface, Изкуствен интелект, универсален изкуствен интелект, универсален изкуствен разум, мислещи машини, cybernetics, cyborgs, Robotics, futurology, Лаборатория за слава, България, София, БНТ, Българска национална телевизия, британски съвет, телевизионно предаване, ток шоу, шоу за наука, Мария Силвестър, Леандър Литов, Любен Дилов - син, Пламенка Боровска, Петя Тетевенска, Мария Чернева, Росен Угринов, Светлин Пенков, Нури Исмаил, Тодор Арнаудов - Тош, Любов Костова, суперкомпютър, Blue Gene, Феймлаб, конкурс, разбираемо, говорене, наука, първи, кръг, комуникация, научна, CERN, ускорител, адронен колайдер, ядрена, физика, теория на разума, зародиш, компресия, предсказване, предвиждане, студио, Сан Стефано, първото, първи, брой, мозък, киборг, кибернетика, мозъчни, импланти, brain implants, bionics, интерфейс, interface, Matrix
I had heard of it, but checking the aims and basic assumptions of the project at the source gave me a push - they're going in the right direction and are aware of the issues, with Juergen Schmidhuber being one of the leaders.
Another signal that I have to discipline my ass, as well... :)
The course was taught to undergraduate students between 1/2011 and 3/2011 at Plovdiv University "Paisii Hilendarski", Bulgaria, in the Faculty of Mathematics and Informatics (originally in Bulgarian, with a lot of additionally suggested materials in English). This was the second AGI/UAI course after "Artificial General Intelligence/Universal Artificial Intelligence" (originally: "Универсален изкуствен разум") from the previous year, now putting a stronger emphasis on the most advanced lectures - on theories of intelligence and the common principles, meta-evolution and Boris Kazachenko's works, reviewed more thoroughly in class (as far as my understanding and the students' interest went).
You may find a lot of materials in English and links in this blog. There's a somewhat sorted-by-topic list made for the students, but it might be partially incomplete, because it's not updated immediately with the blog posts. Students are advised to check the blog for the topics from the first course which were omitted from the formal program of the second one, such as other AGI researchers' work and directions - there was too little time available in class... I guess the next, more updated course is supposed to go even deeper into formal models, maybe starting with some really basic AGI agents.
I'm preparing to publish slides in English (you can find the Bulgarian ones at the top of the course homepage) - especially slides and a translation of the old works from my "[Teenage] Theory of Mind/Intelligence and Universe", written between 2001 and 2004, through which, years later, I recognized the "school of thought" I belonged to (see the annotation below).
This course could be taught in English as well, if there's an appropriate demand/place/invitation.
Annotation: Mathematical Theory of Intelligence
This course is addressed to students who wish to work in the novel interdisciplinary field of Artificial General Intelligence (AGI & UAI), which is building the theoretical foundations and research methodology for the future implementation of self-improving human-level intelligent machines - "thinking machines" (AI was one of the predecessors of this field, but went into solving too-specific problems). The course introduces students to the appropriate foundations in futurology and transhumanism, mathematics, algorithms, developmental psychology and neuroscience, in order to finally review some of the current theories and principles of general/universal intelligence from the "school of thought" of researchers such as Jeff Hawkins, Marcus Hutter, Juergen Schmidhuber, Todor Arnaudov and Boris Kazachenko.
Course Program: (as of 11/2010) (Syllabus)
1. What is Universal Artificial Intelligence (UAI, AGI, „Strong AI“, Seed AI). Technological Singularity and Singularity Institute. Transhumanism. Expected computing power of human brain. Attempts for literal simulation of mammalian brain. "Universality paradox" of the brain. Ethical issues, related to AGI.
2. Methodological faults in narrow AI and NLP (Natural Language Processing), reasons for their limited success and limited potential. Review of the history of approaches in (narrow) AI and its failures and achievements up to nowadays. Concepts from AI that are prospective and still alive in AGI, such as probabilistic algorithms, cognitive architectures, multi-agent systems.
3. Mathematics for UAI/AGI: Complexity and information theory. Probability Theory – statistical (empirical) probability. Turing Machine. Chaos Theory. Systems Theory. Emergent functions and behavior. Universe as a computer – digital physics. Algorithmic Probability. Kolmogorov's Complexity and Minimum Message Length. Occam's Razor.
4. Introduction to Machine Learning. Markov Chains. Hidden Markov Models (HMM). Bayesian Networks. Hierarchical Bayesian Networks and Hierarchical HMM. Principles of the Viterbi and Baum-Welch (Expectation-Maximization) algorithms. Prediction as one of the bases of intelligence.
5. Drives of human behavior - behaviorism. Classical conditioning. Operant Conditioning and reinforcement learning as universal learning method for humans and machines. Why imitation and supervised learning are also required for AGI.
6. Introduction to Developmental Psychology (Child Psychology). Stages in cognitive development according to Piaget, and opposing views. First language acquisition. Nature or Nurture issues and how specific cognitive properties, behavior and functions could emerge from a general system.
7. What is intelligence? Thorough review of Marcus Hutter's and Shane Legg's paper “Universal Intelligence: A Definition of Machine Intelligence”. Universal Intelligence of an agent as a capability to predict sequences of actions with maximum cumulative reward. Types of agents in environments of different complexity.
8. Beauty and Creativity as compression ratio progress in the work of Jurgen Schmidhuber.
9. Brain Architecture – functional anatomy of mammalian and human brain. Triune theory - evolution of vertebrate's brain. Neurotransmitters and hormones and their relations to emotions and behavior. Mini-column hypothesis and functional mapping of the neocortex. Attempts for biologically correct simulations of the neocortex such as the BlueBrain project.
10. Evolution in biological, cybernetical and abstract sense: genetic, epigenetic, memetic and its application in design of complex self-organizing systems. Review of Boris Kazachenko's work on meta-evolution as Abstraction of a conserved core from its environment, via mediation of impacts & responses by increasingly differentiated adaptive interface hierarchy. Entropy as equation and increase of order, not increase of chaos.
11. Introduction to the theory of Intelligence by Jeff Hawkins. Modeling the function of human neocortex – the Memory-Prediction Framework and the principles of operation of the Hierarchical Temporal Memory.
12. Introduction to the theory of Intelligence by Todor Arnaudov – mind as a hierarchical system of simulators of virtual universes, that predict expected sensory inputs at different levels of abstraction. Hierarchical prediction/causation of maximum expected reward, where correctness of prediction/causation is rewarding. The Universe as a computer and trend in the evolution of Universe (cybernetical evolution). Proposal for guided functional simulation of the evolution of vertebrates' brain, starting by a general cognitive module that is simpler than mini-column.
13. Theoretical Methodology of Boris Kazachenko. Generalists and specialists, generality vs novelty seeking. ADHD and ASD. Attention, concentration, distractions and avoiding them. Induction vs deduction.
14. Introduction to the theory of intelligence by Boris Kazachenko. Cognition: hierarchically selective pattern recognition & projection. Scalable learning as hierarchical pattern discovery by comparison-projection operations over ever greater generality, resolution and scope of inputs. Importance of the universal criterion for incremental self-improvement. Comparisons of greater power and resulting derivatives, and iterative meta-syntax expansion as means to increase resolution and generality. Boris Kazachenko's Prize for ideas.
15. Summary of the principles of general intelligence in the works of Jeff Hawkins, Marcus Hutter, Juergen Schmidhuber, Todor Arnaudov and Boris Kazachenko: incremental [hierarchical] accumulation of complexity, compression, prediction, abstraction/generalization from sensory inputs. Evidence and real-life examples for the reliability of these principles.
16. Practice in introspection and generalization. Expanding the scope of cases where the cognitive algorithm is applicable.
17. Exam.

Update from 29/11/2011: Comments on the AGI email list of AGIRI:

John G. Rose (Nov 24): Great course programs covering AGI summary/introduction, I like the selection of topics discussed. You might consider opening these up online via streaming/collaboration in the future… John ...

Ben Goertzel (Nov 28):
Tosh
Looks like a great course you're offering! FYI ... On your page you note that our 2009 AGI summer school didn't cover Jeff Hawkins' work... I can't remember if any speaker mentioned Hawkins, but Allan Combs gave some great lectures on neuroscience, which covered hierarchical processing in the visual cortex among other topics ;) That AGI summer school presented a variety of perspectives, it wasn't just about OpenCog and my own views ... But it wasn't heavy on perception-centered AGI... Ben ...
Todor Arnaudov's answers: Thanks, John.
There are materials from the course online (on the blog and on the site); most of the lecture slides and details are still only in Bulgarian, though. As for collaboration - maybe, if I manage to create a team; for the moment I prefer keeping the authorship for myself. ...
Thanks Ben! And thanks for the notes. :)
Ben>That AGI summer school presented a variety of perspectives, it wasn't just about open cog and Ben>my own views ... But it wasn't heavy on perception-centered AGI...
All I knew about the summer school was from the brief web page on your site: http://www.goertzel.org/AGI_Summer_School_2009.htm - Hawkins wasn't mentioned in the program, and it seemed reasonable that he wasn't, as he appeared to be from a distant "school of thought" compared to the lecturers' - as far as I knew or assumed theirs.
Ben>I can't remember if any speaker mentioned hawkins, but, Allan combs gave some great lectures on Ben>neuroscience, which covered hierarchical processing in visual cortex among other topics ;)
That's nice (I had noticed neuroscience in the program), but anyway I think HTM and the other sensorimotor topics are more general - the memory-prediction framework and similar models are supposed to/aim to explain virtually all kinds of cognitive processes with one integral paradigm, and vision is just an example/case. From the POV of schools, there's a distinction between suggesting that vision is an example of a general framework, and treating it as one of the sub-architectures/sub-frameworks of an AGI.
Selection of recommended talks on AGI 2010 conference:
Marcus Hutter - Universal Artificial Intelligence, AGI 2010
Tutorial on Mini-Column Hypothesis in the Context of Neural Mechanisms of Reinforcement Learning - by Randal A. Koene, AGI 2010
Related to M. Hutter: Jurgen Schmidhuber notes that reinforcement learning needs many steps and many decision points, and marks compression progress as an abstract form of reward for cognitive processes:
Jurgen Schmidhuber-Artificial Scientists Artists Based on the Formal Theory of Creativity, AGI 2010
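A toy sketch of the idea as I read it (not Schmidhuber's formal agent; zlib stands in for the agent's adaptive compressor, and the "learned" predictor is hypothetical):

```python
import zlib

def C(data: bytes) -> int:
    """Proxy for the agent's compressor: compressed length in bytes."""
    return len(zlib.compress(data, 9))

# Observation history: a regular sensory stream (a ramp modulo 256).
history = bytes((i * 7) % 256 for i in range(2000))

# "Old" model: no prediction - encode the raw stream.
old_cost = C(history)

# "New" model after learning: predict next = previous + 7, encode residuals.
residuals = bytes((history[i] - history[i - 1] - 7) % 256
                  for i in range(1, len(history)))
new_cost = C(residuals)   # all-zero residuals compress to almost nothing

reward = old_cost - new_cost   # compression progress as intrinsic reward
print(old_cost, new_cost, reward)
```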
The following one is mostly to get familiar about the state of the art of Virtual Worlds simulations for AGI - it seems pretty primitive... Taking into account also the demos with reinforcement learning agents in other talks, where the agents play simple games such as Pacman, "Pocman" (partially-observable environment), tic-tac-toe or tank games, or the dogs in Ben Goertzel's demo which learn to carry the object back to their master (coordinates of a bounding rectangle)...
I've been speculating about different virtual-world systems for AGI seed-AI and other intelligent-agent experiments myself, but that was some 6 years ago; I wished to start formalizing my so-called "Teenage Theory of Mind and Universe" and testing it in such worlds, but the attempt was short-lived and it remained just ideas, because of all the other stuff I had to deal with. However, I am back, and certainly there's a lot of work to be done in this field.
Ben Goertzel - Using Virtual Agents and Physical Robots for AGI Research
I've been considering founding a sort of independent journal for AGI, which would give a more "formal" shape to the works of Boris Kazachenko, myself and other independent researchers; it might be called just an "e-zine". [ It was "formally declared" a month later here. ]
However, there's a problem - it's not allowed to submit already published material. There are AGI ideas and suggestions published or shared on-line by independent researchers - such as Boris' - many years before the AGI world conferences started to be organized (2008), in the late 90s and the 2000s. They are original, yet probably widely unknown to the official rulers of the field (I don't know for sure; I've noticed just a short on-line dialog between Boris and Ben Goertzel, which happened some 8 years ago).
Edit: Further, there are a lot of AI-niks who have put on new shoes and hold high positions, watching for high "scientific" standards in new publications. Unfortunately, in this field being "scientific" often actually means being a very pedantic quoter, and those who don't have the patience to fill half of the paper with citations are not allowed to enter the "high-life" club of the "scientific" - I do emphasize this is about the field of AI. Check out What's Wrong With NLP [and AI]. AI is not really a science in the sense of Physics, Chemistry and Biology.
"In short, HyperNEAT is based on a theory of representation that hypothesizes that a good representation for an artificial neural network should be able to describe its pattern of connectivity compactly.
This kind of description is called an encoding. The encoding in HyperNEAT, called compositional pattern producing networks, is designed to represent patterns with regularities such as symmetry, repetition, and repetition with variation. (...) The other unique and important facet of HyperNEAT is that it actually sees the geometry of the problem domain. (...) To put it more technically, HyperNEAT computes the connectivity of its neural networks as a function of their geometry.
(...)
NEAT stands for NeuroEvolution of Augmenting Topologies. It is a method for evolving artificial neural networks with an evolutionary algorithm. NEAT implements the idea that it is most effective to start evolution with small, simple networks and allow them to become increasingly complex over generations. That way, just as organisms in nature increased in complexity since the first cell, so do neural networks in NEAT. This process of continual elaboration allows finding highly sophisticated and complex neural networks."
...
That is:
- Compression/Minimal message length
- Repetition as a clue for patterns (symmetry is repetition as well)
- Incrementing (from small scale to big scale)
- Coordinates (topology in connectivity)
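To make the "connectivity as a function of geometry" point concrete, here's a toy sketch. The cppn function below is a hand-written stand-in for an evolved CPPN - HyperNEAT evolves this function; the substrate layout, the threshold and the particular function compositions here are all illustrative assumptions:

```python
import math

def cppn(x1, y1, x2, y2):
    # Hand-written stand-in for an evolved CPPN: maps the coordinates of a
    # source and a target neuron to a connection weight.  The composed
    # functions express the regularities listed above: locality (Gaussian of
    # distance), repetition (sine), symmetry (absolute value).
    d = math.hypot(x2 - x1, y2 - y1)
    return math.exp(-d * d) * math.sin(3.0 * (x1 + x2)) * (1.0 - abs(y1 - y2))

def substrate_weights(n=4, threshold=0.2):
    # Query the CPPN for every pair of neurons laid out on an n x n grid in
    # [-1, 1]^2 and prune weak links: the connectivity pattern is generated
    # from geometry instead of being stored as an explicit weight list.
    coords = [(-1 + 2 * i / (n - 1), -1 + 2 * j / (n - 1))
              for i in range(n) for j in range(n)]
    return {(a, b): cppn(*a, *b)
            for a in coords for b in coords
            if abs(cppn(*a, *b)) > threshold}

net = substrate_weights()
print(len(net), "connections generated from one small function")
```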
2. Ontologies in NLP/Computational Linguistics
Basically this is a semantic network, i.e. relations between concepts. WordNet is a sort of ontology. The issue is that they are often designed by hand. There are statistical methods, as well, but they're missing something I've mentioned many times in the series What's Wrong With NLP.
Why does this happen to be useful?
- Because it resembles a real cognitive hierarchy - it's a "skeleton hierarchy".
Accordingly, hand-designed ontologies are prone to be too rigid and unable to self-extend.
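A toy example of such a skeleton hierarchy, and of its rigidity; the concepts and links are hypothetical, hand-entered in the WordNet manner:

```python
# Hypothetical miniature hand-built ontology: WordNet-style "is-a" links only.
ISA = {
    "sparrow": "bird",
    "bird": "animal",
    "iguana": "reptile",
    "reptile": "animal",
    "animal": "organism",
}

def hypernyms(concept):
    # Walk up the skeleton hierarchy, like following WordNet hypernym chains.
    chain = []
    while concept in ISA:
        concept = ISA[concept]
        chain.append(concept)
    return chain

print(hypernyms("sparrow"))  # ['bird', 'animal', 'organism']
print(hypernyms("robot"))    # [] - the hand-designed network cannot extend itself
```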
This is a direction I realized last year during a discussion on Boris' knols and mentioned there, but I later shortened the comment, because it wasn't the appropriate place for the details.
The idea is to design a cognitive algorithm achieving the properties that Boris proposes, but grounding it in, and deriving it from, a supposedly simpler and easier-to-understand cognitive algorithm that existed before in lower species and was slightly modified by evolution.
- Embryogenesis is selective segmentation and differentiation
In general, organisms are developed by selective segmentation (separation) and differentiation of cells - a sequence of activations of the appropriate genes.
- A small quantity of germ cells divides to form bulky tissues/regions - the initial complexity is much lower than the final one, and there are interdependencies. A simple mathematical example is fractals.
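As a loose illustration of that asymmetry, a two-rule Lindenmayer system (a fractal-like rewriting "genome", not real genetics) unfolds into a structure whose size grows exponentially while the description stays constant:

```python
# A two-rule rewriting "genome": the description stays two rules, while the
# developed "body" grows without bound over division cycles.
RULES = {"A": "AB", "B": "A"}

def develop(axiom="A", cycles=8):
    state = axiom
    for _ in range(cycles):
        # one division/differentiation cycle: every cell is rewritten in place
        state = "".join(RULES.get(c, c) for c in state)
    return state

body = develop()
print(len(body), body[:16])  # 55 "cells" (Fibonacci growth) from a 2-rule genome
```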
One reason the neocortex has relatively similar columns all over might be that they are a building block of the cognitive algorithm. However, another reason, from another POV, is that DNA simply doesn't have enough capacity to encode complex explicit circuitry that would make them all specialized by directed growth. Even if it had the capacity in theory, it's questionable whether biological "technology" would be capable of connecting it with the required precision, because organism parts "grow like branches of a tree" ("The Man and The Thinking Machine", T.A. 2001).
Bottom line: there are "leaves" of the tree, and the complexity of the leaves is limited.
- Evolution steps in phylogeny are supposed to be very small, and genome development is chaotic in the mathematical sense - a small difference in the initial state (the DNA) may lead to an (apparently) vast difference in the final state - the fully developed body.
Apparently big differences in structure may be caused by very small, elegant, functionally purposeful changes inside.
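The logistic map is the standard toy model of this kind of sensitivity - a mathematical analogy only, not a model of development:

```python
# Two "genomes" differing by one part in a billion in the initial state end up
# in entirely different "final states" after enough developmental steps.
def final_state(x, steps=60, r=3.9):
    for _ in range(steps):
        x = r * x * (1 - x)  # logistic map in its chaotic regime
    return x

print(final_state(0.500000000))
print(final_state(0.500000001))  # a vastly different outcome from a tiny mutation
```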
- Besides the formation of a new protein, some of the operations that a mutation may cause or result in could be something like the following (sketched below):
- Copying a segment (a block) once more, i.e. initiating one more division cycle
- Connecting to another segmentation module (especially in brains)
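Taken literally, the two operations above could be sketched like this; the "genome" and the module names are hypothetical placeholders:

```python
# Hypothetical "genome" as a list of developmental segments.
genome = ["sensory_layer", "association_layer", "motor_layer"]

def duplicate_segment(genome, i):
    # "Copy a segment once more", i.e. one extra division cycle for block i.
    return genome[:i + 1] + [genome[i]] + genome[i + 1:]

def connect_modules(wiring, src, dst):
    # "Connect to another segmentation module": add a projection to the wiring map.
    wiring.setdefault(src, []).append(dst)
    return wiring

mutant = duplicate_segment(genome, 1)  # now two association layers
wiring = connect_modules({}, "association_layer", "motor_layer")
print(mutant)
print(wiring)
```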
- The amphibian and reptilian forebrain, their most evolved part - the archicortex/neopallium - has 3 layers. In comparison, the most evolved (external) part of the general mammalian and human brain - the neocortex - has 6 layers.
- Evolution, especially in the brain, mostly builds "add-ons" and "patches", with slight modification and then multiplication of components(?)
In the triune brain theory, the new is a layer above and the old is preserved. The new modules are connected back to the old ones and have to coordinate their operation, and the new modules receive projections from the previous ones. I think this also implies that the higher layer should be "smarter" (more complex, with higher-capacity memory/processing power) than the lower one, allowing more complex behavior/adaptation - otherwise it would just copy the lower layer's results.
Amphibian and reptilian brains had a cortex lacking the 6-layer columnar structure of mammals; it's 3-layered (I don't know a lot about its cytoarchitecture yet). I couldn't accept that the archicortex lacks some sort of modular design, somewhat similar to the columns; it makes no sense for the archicortex to have been a random jelly of neurons, because even basic behaviors such as finding a lair and running for cover require integration of multimodal information and memory. I also don't believe that mini-columns appeared from scratch in the higher mammals.
Recently a little support for this speculation appeared - regarding birds, though, a parallel line of evolution:
"...A new study, however, by researchers at the University of California, San Diego School of Medicine finds that a comparable region in the brains of chickens concerned with analyzing auditory inputs is constructed similarly to that of mammals.
(...)
But this kind of thinking presented a serious problem for neurobiologists trying to figure out the evolutionary origins of the mammalian cortex, he said. Namely, where did all of that complex circuitry come from and when did it first evolve?
Karten's research supplies the beginnings of an answer: From an ancestor common to both mammals and birds that dates back at least 300 million years.
The new research has contemporary, practical import as well, said Karten. The similarity between mammalian and avian cortices adds support to the utility of birds as suitable animal models in diverse brain studies.
"Studies indicate that the computational microcircuits underlying complex behaviors are common to many vertebrates," Karten said. "This work supports the growing recognition of the stability of circuits during evolution and the role of the genome in producing stable patterns. The question may now shift from the origins of the mammalian cortex to asking about the changes that occur in the final patterning of the cortex during development.
- The function of the Archicortex (hippocampus) in mammals is declarative memory and navigation.
See some of my speculations on: April 24, 2010 - Learned or Innate? Nature or Nurture? Speculations of how a mind can grasp on its own: animate/inanimate objects, face recognition, language...
Hippocampus:
- formation of long-term memory
- navigation
- head direction cells
- spatial view cells
- place cells
At least several, or even all, of these can be generalized. Places and navigation go together. Places are long-term memories of static, immovable, inanimate objects (the agent has no experiences of these entities moving).
Navigation, head-direction, spatial-view and place cells are all a set of correlations found between motor and sensory information, plus long-term memories, which are invoked by the ongoing motor and sensory patterns.
The static immovable inanimate objects (places) change - translate/rotate etc. - most rapidly in correlation with head direction (position) and head movements.
Navigation and spatial view are derived from all of these.
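A toy sketch of that claim: in a 1-D world, a "place" emerges as a stored correlation between the integrated motor signal (path integration) and a recurring sensory pattern. Everything here, including the landmark string, is illustrative, not a brain model:

```python
# 1-D world: the agent senses a landmark (or nothing, ".") at each position.
WORLD = "A..B....C.A"

def explore(moves):
    position = 0       # integrated motor signal (path integration)
    place_memory = {}  # sensory pattern -> positions where it was experienced
    for step in [0] + moves:
        position += step
        sensed = WORLD[position]
        if sensed != ".":
            # a "place cell": the conjunction of a motor-derived coordinate
            # and the sensory pattern that keeps recurring there
            place_memory.setdefault(sensed, []).append(position)
    return place_memory

print(explore([3, 5, 2]))  # {'A': [0, 10], 'B': [3], 'C': [8]}
```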
Boris Kazachenko's comment:
(...) Regarding the hippocampus, it controls formation of all declarative (not long-term) memories, not just those about places. Declarative means the ones that got transferred high enough into association cortices to be consciously accessible. My personal guess is that the hippocampus facilitates such transfer by associating memories with important locations [mapping]. You'll pay a lot more attention to something that happened in your bedroom than to the same thing that happened on the dark side of the moon. I call it "conditioning by spatial association". (...)
// There's a whole topic about hippocampus functions and its competition with the neocortex; for now I plan to publish it separately and link to it from here.
Reptiles don't have association cortices, though, yet pretty impressive behavior of lizards can be seen, such as this curious iguana looking behind the mirror to see where the other one is, and eventually hitting the mirror - see at 5:33.
My guess about the archicortex's contribution:
- The archicortex maybe records exact memories and correlations between memories, and compares sequences of sensory patterns for matches.
There should be limitations on the length of the sequences; part of them might be caused by size constraints - animals with archicortex only, lacking the higher layers*, just have very small brains. (*Cingulate cortex and neocortex for mammals.)
I'm not an expert in vertebrate embryology yet, but I guess a simple reason why fish and reptiles with big bodies keep very small brains - like a 3.6 m white shark with 35 g of brain - should be that:
- The germ cells that give birth to the brain tissue of fish and reptiles divide less, and/or these species lack some hormonal growth mechanisms that species with bigger brains have.
Both are a sort of "scaling issues".
Too small a brain has insufficient cognitive resources. On the other hand, maybe these brains also don't scale - they wouldn't have worked better had they been bigger.
- Assuming general intelligence is a capability for ever higher generalization, expressed in a cognitive hierarchy (see J. Hawkins, B. Kazachenko, T. Arnaudov), and the mini-column is assumed to be the building block of this process in the neocortex, there should be a plausible explanation of why and how this module was formed and why its function became successful.
My functional explanation is the following:
- There already existed templates of circuits for exact recording, but they didn't scale.
- The simplest form of generalization is recording at a lower resolution than the input, plus fuzzy comparison (see the sketch after this list). It's partially inherited from the imprecision of biology.
- An updated form of these circuits maybe added more divisions and cascade connections (this may have started in the cingulate cortex, or in higher reptiles as well), which allowed for hierarchical scaling. The neocortex is assumed to have 6 layers, the archicortex has 3. I'm not an expert in cytoarchitecture and should check out the cingulate cortex, but if there are no intermediate stages between 3 and 6, this sounds suspiciously like a simple doubling somewhere during division and specialization. Or it could be several doubling operations.
- These new cascade connections allow for a deeper hierarchy, scaling and multi-stage generalization. (Recording "exactly" is alone a lossy "generalization", but without a hierarchy that is deep enough this cannot go far - just far enough to cope with basic noise.)
- There are mice with less than 1 g of brain which of course are much smarter than sharks (not to mention smart birds); however, the advantage in micro-structure (the mini-column) doesn't deny that the mammalian brain scales in size, and there is a correlation between brain size (cognitive resources) and intelligence, even though it's not a straight line. Spindle neurons, directly connecting distant regions of the neocortex, are one of my guesses about why pure size might not be enough; another is the area of the primary cortices, especially somatosensory (elephants, dolphins and whales have bigger brains than humans). See Boris' article about spindle neurons and generalization: http://knol.google.com/k/cognitive-focus-generalist-vs-specialist-bias
- The neocortex does scale, but it's not surprising that it has constructive limitations as well, as the archicortex did.
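Here's the "record at lower resolution plus fuzzy comparison" step from the list above, as a minimal sketch; the block-averaging and the tolerance threshold are arbitrary illustrative choices:

```python
def downsample(signal, factor=4):
    # Record at a lower resolution than the input: average each block of samples.
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

def fuzzy_match(a, b, tolerance=0.5):
    # Fuzzy comparison: exactness is traded away for immunity to noise.
    return len(a) == len(b) and all(abs(x - y) <= tolerance for x, y in zip(a, b))

original = [1, 1, 2, 2, 5, 5, 6, 6]
noisy    = [1, 2, 1, 2, 5, 6, 5, 6]  # the same scene under different noise
print(original == noisy)                                     # False: exact recall fails
print(fuzzy_match(downsample(original), downsample(noisy)))  # True: it generalizes
```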
- Classical and Operant conditioning, dopamine and temporal difference learning
It's maybe quite a global feature of the entire brain, but it has to be considered - classical conditioning evolving into operant conditioning requires predictive processing.
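The standard computational rendering of this is temporal-difference learning, whose prediction-error term is what phasic dopamine responses are often compared to. A minimal TD(0) sketch on a 5-state chain; the chain, learning rate and discount are illustrative choices:

```python
# A 5-state chain, reward only on reaching the last state; V converges to the
# discounted prediction of that future reward for every earlier state.
values = {s: 0.0 for s in range(5)}
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount

for episode in range(500):
    s = 0
    while s < 4:
        s_next = s + 1
        reward = 1.0 if s_next == 4 else 0.0
        # the temporal-difference error: (what happened) - (what was predicted)
        delta = reward + GAMMA * values[s_next] - values[s]
        values[s] += ALPHA * delta
        s = s_next

print({s: round(v, 2) for s, v in values.items()})
# roughly {0: 0.73, 1: 0.81, 2: 0.9, 3: 1.0, 4: 0.0}
```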
Conclusion
1. Design a basic cognitive algorithm/module, scalable by biologically-like mechanisms, which allows reaching, say, reptilian behavior.
2. Tune this basic module, then multiply and connect it intentionally to form a mechanism that "stacks" into a hierarchy, generalizes and scales the global cognitive capacity.