Monday, October 31, 2011

News: AGI Forum Opened (Alpha) and Twenkid Research - updated web site (Alpha) | Forum on Universal Artificial Mind (artificial intelligence) by Todor Arnaudov - Tosh

Welcome to the updated web site of "Twenkid Research" (Alpha) and to the AGI Forum of the Independent Scalable AGI Society, which has opened for participants. It is an "Alpha" as well - it still needs to be filled with more pinned topics, information and links in the sections, and I have prepared plenty of them.

http://research.twenkid.com

Update: It doesn't work anymore. It may be reopened in the future. It seems that some DNS records were hijacked and now point to a forum that has nothing to do with thinking machines...

Enjoy

...

Colleagues, you are welcome at the forum on Universal Artificial Mind (universal artificial intelligence, the "strong" direction in AI, thinking machines) of the Society of Independent Researchers, created by Todor Arnaudov - Tosh. It includes subforums in numerous related fields and a subforum in Bulgarian.

...
Keywords: artificial general intelligence, thinking machines, forum, discussion, independent, scalable, AI, UAI

Saturday, October 22, 2011

News - To Open - Discussion Forum of the Independent Scalable Artificial General Intelligence Society

Recently I realized how the conservative culture in Academia emerged - somebody got pissed off by the bullshit in open discussions with no moderation, bad selection and a low standard for the participants. Unfortunately this shit happens on a regular basis in the AGIRI AGI list. Some people have noticed it, but they have given up on doing anything about it.

Sorry! My solution is a moderated forum with higher standards and requirements, starting with invitations to participants.

Technically the forum is already created, but it's still hidden and under-construction/testing. Participants will be invited to join a bit later.
...
Update, 28/10/2011: The forum has opened for participants (Alpha Version)

Update: It doesn't work anymore. It may be reopened in the future. It seems that some DNS records were hijacked and now point to a forum that has nothing to do with thinking machines... :))

Regarding the Independent Scalable Artificial General Intelligence Society:

http://artificial-mind.blogspot.com/2011/04/news-independent-scalable-artificial.html

Tuesday, October 18, 2011

Rationalization and Confusions Caused by High Level Generalizations and the Feedforward-Feedback Imbalance in Brain and Generalization Hierarchies


Continues from Frontal Lobe Activation Patterns in Pessimistic & Optimistic Brains, and in Infant Brain Before and After Understanding of Object Permanence


1. Higher-to-lower level feedback is less efficient than lower-to-higher level feed-forward generalization.

Example: Image/Object Recognition vs Image/Object Rendering, performed by humans.

Every healthy child can recognize human faces, understand emotions and react accordingly, but it's not that easy when intentions are involved - feedback/output to act on the environment. Realistic drawing or painting of faces, or good acting in a film, require time, talent and practice.*

(*I can anticipate the criticism that, regarding painting/drawing, it's just "precision vs scope", and that some autistic people are great at copying inputs. However, drawing from memory and creative drawing of imaginary subjects require both scope and precision, and the capability to keep a consistent mapping between all the levels, from the highest to the lowest.)

2. Generalization out of specifics (rich sensory input) is simpler/more efficient than specification down from generalizations ("decompression")

Generalization is selective lossy compression. Decompression requires reconstruction of the lost data, or requires that the data be preserved, or that the knowledge of how to extract it from external memory and incorporate it be kept. Here lies the "precision vs depth" trade-off; however, I'll emphasize that sometimes a lot of both is needed.
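The idea of generalization as selective lossy compression can be shown with a minimal toy sketch (my own illustration, not from the original post): averaging over windows discards the details, and "specification" from the averages alone cannot bring them back.

```python
# Generalization as selective lossy compression: compress a 1-D "sensory"
# signal by averaging (generalizing) over windows, then try to "decompress"
# by repeating the averages. The fine detail is gone and cannot be recovered
# from the generalization alone.

def generalize(signal, window):
    """Lossy compression: keep one average per window."""
    return [sum(signal[i:i + window]) / window
            for i in range(0, len(signal), window)]

def specify(summary, window):
    """Top-down decompression without the lost data: repeat each average."""
    return [v for v in summary for _ in range(window)]

signal = [1, 9, 2, 8, 3, 7, 4, 6]   # detailed lower-level input
summary = generalize(signal, 2)     # higher-level generalization
restored = specify(summary, 2)      # attempted top-down reconstruction

print(summary)    # [5.0, 5.0, 5.0, 5.0]
print(restored)   # [5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0] - detail lost
```

Going back down (decompression) produces only a flat repetition of the summary - the alternating detail of the original input would have to be stored separately or reconstructed from elsewhere.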

Besides, I suspect depth has more severe limitations than precision for the lower levels. The cognitive hierarchy can add generalization levels at the expense of a wider scope of input data and/or lower detail, but both options are problematic: if the scope is extended at the expense of detail (to keep computational complexity under control), then the generalization levels will run out shortly, because there won't be meaningful details remaining. On the other hand, if the scope is extended with more modest detail loss, then learning will go computationally out of control.

We don't know how far a machine can go in generalization levels with more computing power, but I think the brain is very limited.

3. Higher-level sensory inputs have less impact on the cognitive hierarchy, because feedback is less efficient than feed-forward.

That's the reason why captions like "Smoking kills" or "Speeding kills" usually fail to make people stop smoking or speeding, and why first-hand experience - seeing with your own eyes, hearing with your own ears, touching with your own fingers - has a more dramatic effect in transmitting any message than the relayed experience of others.

Seeing your smoker friend dying of lung cancer, or seeing your friend smashed in his car because he drove drunk - that's a quite different sensory input, and not only because of your personal involvement with the sufferers.

Text is too abstract and distant - the low-level physical representation of text is meaningless; it serves only to encode a higher-level representation. That's where the input starts having a meaningful impact on the brain, and the message has to go down the hierarchy to have an actual impact on behavior.

Another example is acting and film. Film as a medium demands motion pictures and rich sound - the physics of the action at the lowest level possible, with a lot of detail/high input resolution. If there's too much dialogue and too many self-explanations and declarations by the characters, especially of obvious things, then the brain is fed with high-level generalized input; it can't activate the lower levels top-down, and they stay idle or "bored". On the other hand, if the lowest sensory input is rich, the brain can induce generalizations upwards and engage the entire hierarchy. (Besides the effects of the balance of unpredictability etc.; see Schmidhuber's works on Creativity.)

4. Rationalization is playing with high-level patterns to explain lower-level patterns which the higher level cannot access - because feedback is worse than feed-forward, or because the lower-level patterns are unknown

Higher levels in the cognitive hierarchy are derivatives of the lower levels - the lower levels induce the higher ones, not vice versa. In a sense (a bit simplified), higher-level patterns in the cognitive hierarchy are delayed expressions of the lower-level ones. However, once a higher level emerges out of the stable regularities in the lower level, it starts to mess with the lower level's business - adjusting lower-level input, selecting data to keep attention on, adjusting coordinates, resolution, location - and the higher level does so in order to maximize its own "success": match, prediction, reward.

Higher levels usually cannot explain and trace back how they were created, what their lower-level patterns are, and what the lower-level drives are.

A similar situation exists in bad philosophy and other fields* where lower-level conceptualization, patterns and input are needed for conceptual progress, but practitioners deny it and keep blah-blah-ing with concepts that are too high-level, too general, too unrelated to the problem they're trying to solve.

That's also one of the reasons why researchers such as Boris Kazachenko and myself suggest a bottom-up approach - it allows for the maximum possible abstraction, while keeping the maximum possible resolution and the traces of the abstraction.

*Search the blog for "What's Wrong with Natural Language Processing".

5. There are Two Reward Systems which are Messed Up

There's another issue - two reward systems run in parallel in the brain: a cognitive one and a physical one. The cognitive system aims at maximizing predicted match of pure data, while the physical system aims at maximizing desired match - input sensations must match hardwired target sensations loaded with value: food, warmth, water, sex etc. The physical system is far more primitive and crude; it relies heavily on the more primitive brain areas and on dopamine and other neurotransmitters/neuromodulators/hormones, while the cognitive system is based on finer processing, even though the former participate as well. Both systems interact and overlap, and the physical system can override the cognitive one and enslave it - for example, the higher-level cognition of drug addicts is a slave to the primitive need to take the drug. Generally these systems are messed up and entangled, so in a real behavioral record it's hard to trace where one ends and the other begins.
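The two parallel reward signals described above can be sketched as a toy model (my own illustration; the weights and values are invented for the example): a "cognitive" reward for prediction match, a "physical" reward for match against a hardwired desired target, and a weighting that lets the crude physical drive override the cognitive one.

```python
# Toy sketch of two parallel reward systems. Both are "match maximizers",
# but they match different things: prediction vs observation (cognitive)
# and observation vs a hardwired desire (physical).

def cognitive_reward(predicted, observed):
    """Reward for match between prediction and observation."""
    return -abs(predicted - observed)

def physical_reward(observed, desired):
    """Reward for match between observation and a hardwired target."""
    return -abs(observed - desired)

def total_reward(predicted, observed, desired, physical_weight=3.0):
    # With a high weight, the physical drive dominates the combined signal,
    # "enslaving" the cognitive system.
    return (cognitive_reward(predicted, observed)
            + physical_weight * physical_reward(observed, desired))

# Two candidate inputs: one is perfectly predicted, the other matches
# the hardwired desire. The physical system wins.
predicted, desired = 0.0, 1.0
print(total_reward(predicted, observed=0.0, desired=desired))  # -3.0
print(total_reward(predicted, observed=1.0, desired=desired))  # -1.0
```

The agent prefers the input that satisfies the physical target even though it is a complete misprediction - a crude stand-in for the entanglement described in the text.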

6. Rationalization is also explaining physical motivation with cognitive means

Ask somebody why she loves her boyfriend. She's likely to tell you "because he's smart, funny, kind, blah, blah, and because he's soooo blah!", while the real reason is much simpler - the way he makes her feel. That's why she loves him, where "to love" also has a more basic meaning than the societal one - it's about quantities of neurotransmitters, and about the "imprinting" of addiction cycles of generating such neurotransmitters through physical-cognitive conditioning and inter-association.

It's true that the abstract reasons do play a certain role in creating the inter-associations between physical and cognitive sensations - everyone has some preferences and favorites - however, this can be reduced to:
- I love him because he's the type I wanted him to be!
- I love him, because he's my perfect match!
- He's the best match I could find so far...

This is a match between the desired and the input, which is the type of match of the physical reward system - apparently this selection is driven by the physical system, overriding the cognitive one.

That's why I think the abstract reasons of "smart, funny..." are "rationalizations", but rationalization is not strange at all. Everybody would say "yeah, socially acceptable explanations", but that's a cheap answer.

There's one more appropriate reason for rationalization - it's the cognitive system which is asked the question (it's asked in natural language); the highest levels of the cognitive hierarchy rule this area, and yes - society has taught this higher cognitive system how it should act in such situations.

If you ask the question in a lower-level language, the answer is different - it's her body language and her behavior when she's alone with her boyfriend, when they're kissing and making love. The answers are also in the amounts of oxytocin, dopamine and other chemicals in her brain, the release of which is conditioned on certain cognitive patterns initially generated by perceiving her beloved one.

In general, for love and attraction it's true by the definition of "emotions" that the physical reward system kicks in. One may be "just a friend" with someone because he's "smart, funny, blah-blah", but even then, if one is a human being with an intact brain (not a sociopath/psychopath), his friendships would be mixed up with the physical reward system - emotions, crude emission of certain chemicals and activation of primitive brain areas, which gets associated/recorded/conditioned with cognitive patterns, and both are intertwined.

Purely cognitive "friendship" is business and if it's such, it's not really a friendship.

*Higher/lower-level drives - passionate feelings demand "lower" drives, where "lower" in this context has a different meaning than in the cognitive hierarchy: here it means driven by evolutionarily and physically "lower" brain modules, which map to areas other than the neocortex, archicortex (hippocampus) and thalamus. The physical reward system is not in the same hierarchy as the cognitive system with respect to tracing back through levels; the cognitive and physical reward systems are entangled, and the physical system can manipulate and "short-circuit" all levels of the cognitive hierarchy.


Continues... - On the apparent inconsistency of the goals of a system with cognitive hierarchy and More on rationalization and the confusions of the higher levels in the cognitive hierarchy.

Suggested reading:

Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence - T.A. 2004

http://knol.google.com/k/cognitive-focus-generalist-vs-specialist-bias

http://knol.google.com/k/boris-kazachenko/executive-attention/27zxw65mxxlt7/11#

http://knol.google.com/k/intelligence-as-a-cognitive-algorithm

http://research.twenkid.com/agi_english/

Slides on T. Arnaudov's "Teenage Theory of Universe and Mind"


(C) T. Arnaudov 2011

Wednesday, October 12, 2011

Frontal Lobe Activation Patterns in Pessimistic & Optimistic Brains, and in Infant Brain Before and After Understanding of Object Permanence

Quotations from

"Brain 'rejects negative thoughts'"


When the news was positive, all people had more activity in the brain's frontal lobes, which are associated with processing errors. With negative information, the most optimistic people had the least activity in the frontal lobes, while the least optimistic had the most.
It suggests the brain is picking and choosing which evidence to listen to.

My interpretation: the frontal lobe is supposed to be the highest level of processing, which includes the tip of the iceberg of conscious (reflectively accessible) data - this is what allows us to see that it's about "errors". Lower levels are also about "errors". However, if the errors (mispredictions) at a lower level are too high (the perceived differs from the expected), those perceptions may not propagate up - they're "meaningless" at the conscious level.


- The most optimistic subjects expect positive data, therefore negative data is misprediction/mistake and it's cut before reaching highest levels.

- The least optimistic ones expect negative data, and such evidence is a match to prediction, so the evidence is processed at the highest level.
I suspect this may imply that the data is cut before it reaches the frontal lobes. The frontal lobe demands expected outcomes, but it has developed to want them from the machinery before it.
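The asymmetric filtering described above can be sketched as a toy model (my own illustration; the tolerance and gain values are invented): evidence that deviates too much from the prior expectation is attenuated before it is fully processed at the "highest level".

```python
# Toy sketch of expectation-gated evidence: mispredictions that are too
# large relative to the prior are "cut" (attenuated) before they reach
# the highest level of processing.

def gated_update(prior, evidence, tolerance=0.5):
    """Pass evidence up fully only if it roughly matches the expectation;
    otherwise attenuate it (the misprediction is 'cut')."""
    error = evidence - prior
    gain = 1.0 if abs(error) <= tolerance else 0.2  # cut surprising data
    return prior + gain * error

optimist_prior, pessimist_prior = 0.9, 0.1
bad_news = 0.0  # negative evidence

# The optimist barely updates on bad news; the pessimist, for whom the
# bad news is a match to prediction, processes it fully.
print(round(gated_update(optimist_prior, bad_news), 2))   # 0.72
print(round(gated_update(pessimist_prior, bad_news), 2))  # 0.0
```

The same mechanism reproduces the reported asymmetry: the most optimistic subjects retain a high (optimistic) estimate after negative information, while the least optimistic ones converge onto it.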

This research reminds me of Natalie Portman's famous paper (famous for her fans, at least): http://mindhacks.com/2007/06/18/natalie-portman-cognitive-neuroscientist/

More formally:

Frontal Lobe Activation during Object Permanence: Data from Near-Infrared Spectroscopy

- Infants who understand object permanence (searching for a toy covered with a cloth in front of their eyes) are measured to have increased activity in the frontal lobe after the toy is covered, while the ones who don't understand that the object is still there (and don't search for it) display a decrease in frontal-lobe activity.

This is a crude measure, but I guess:
The highest levels are expecting the object to be under the cloth, so they are working actively - sending feedback down to the lower ones to find out where the object is and to adjust the input so that they get confirmation of that higher-level hypothesis. The frontal lobe of infants prior to this understanding is less activated, perhaps because it doesn't have predictions to compare with the lower level. Lower-level processing dominates, and the patterns in the frontal lobe are too noisy or lacking.

Suggested reading about what I mean with those "levels":
http://knol.google.com/k/cognitive-focus-generalist-vs-specialist-bias
http://knol.google.com/k/boris-kazachenko/executive-attention/27zxw65mxxlt7/11#
http://knol.google.com/k/intelligence-as-a-cognitive-algorithm
http://research.twenkid.com/agi_english/


"Dr Chris Chambers, neuroscientist from the University of Cardiff, said: "It's very cool, a very elegant piece of work and fascinating.

"For me, this work highlights something that is becoming increasingly apparent in neuroscience, that a major part of brain function in decision-making is the testing of predictions against reality - in essence all people are 'scientists'."

Good morning, you finally noticed...

To be continued... --> On Rationalization and Confusions Caused by High Level Generalizations and the Feedforward-Feedback Imbalance in Brain and Generalization Hierarchies

Wednesday, October 5, 2011

Human-Computer Interface Cool Devices - Computer Vision, Projection, Gesture, Speech Recognition and Text-To-Speech - Pranav Mistry, IPhone4 and mine

Impressive - I need something like Pranav's devices for myself.





Research Assistant

That reminds me of a project of mine called... well, it's a secret. :P It was/is supposed to integrate a lot of tools in an intelligent way and boost your performance, saving you all kinds of labor-intensive tasks, and it was supposed to monitor your actions and behavior.

I thought of developing a part of it as an MS thesis in early 2008, such as marking important paragraphs and pages of a book/paper with a simple gesture while you're reading (drawing a line with your finger in the margin of the page), then storing the selected parts as images on the computer. OCR would assist in faster classification and search, but it was expected not to be robust and was not critical; even without it, this is very useful for collecting excerpts/citations from philosophy books, newspapers, magazines, promotional booklets etc., in order to perform "batch processing" later without switching your attention. It saves time and distractions.

The best would be a head-mounted camera, but for the experiment a fixed camera seemed more realistic: the book/paper would be set on the table, the video recorded by a camera at 640x480 and 15 or 30 fps, and then processed off-line.

One of the technical issues was resolution - I could get crisp 640x480x30 video recording on a mobile camera at the time, and it had the optical resolution to capture entire regular book pages, but at that resolution the camera has to be close to the paper and properly oriented, and it's also not practical to record all of the frames.
(Sure, there's one simple workaround - just photograph the paragraphs manually, with your finger on the proper line. :X)
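A quick back-of-the-envelope calculation (my own numbers, assuming uncompressed RGB frames) shows why continuously recording every frame at that resolution is impractical:

```python
# Raw data rate of uncompressed 640x480 RGB video at 30 fps - the setting
# discussed above for the reading-gesture experiment.

width, height = 640, 480
fps = 30
bytes_per_pixel = 3                 # uncompressed RGB, no chroma subsampling

frame_bytes = width * height * bytes_per_pixel
rate = frame_bytes * fps            # bytes per second
per_hour_gb = rate * 3600 / 1e9

print(frame_bytes)                  # 921600 bytes, ~0.9 MB per frame
print(round(per_hour_gb, 1))        # ~99.5 GB per hour, uncompressed
```

Even with the video codecs of the time cutting this by two orders of magnitude, storing and off-line processing hours of reading sessions remains a real constraint, which motivates triggering capture only around the gestures.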

It seemed "obviously implementable" - these gestures would be quite a simple computer vision task - however, I didn't start an implementation of the project. My supervisor suggested it would be too much of an "engineering" project, while for a thesis it would be better to be more "scientific" and propose some contributions - not that most of my colleagues were particularly scientific, either.

I eventually ended up with a thesis on my microformant hybrid Bulgarian text-to-speech synthesizer, the essential part of which I had developed in ~5 weeks as a first-year student. Of course, I added projected and proposed functions, ideas for improvements, and a totally different design which is supposed to learn to speak like a baby, but I didn't have time to implement the improvements - I was too busy with my job then.




Text-to-Speech

Indeed, text-to-speech, even without speech recognition, is a useful way to save some of the time spent stuck in a chair in front of the computer, by having your texts read aloud; I use it myself.

The "baby synthesizer" of mine is not a junked project either, but the best way to implement it is with a robust general AGI system, which is yet to be developed.

Tuesday, October 4, 2011

Externalism - or on the External Storage and Tools that Extend Human Mind and Brain and are Indispensable for its Development

Recently I discovered that I have shared the beliefs of this philosophical direction since my teenage works (I found out what it's called).

One introductory term: "paradoxical universality of the brain" - the brain is specialized in being universal.

The estimations of the brain's enormous computing power and memory are arbitrary and not really practical, because brain power is not comparable to computer power. Computers are in fact more flexible and more "universal" than the brain, and the ultimate cognitive system is a brain (generally intelligent agent/core, AGI) + a computer and external memory - neither a brain alone, nor a computer alone.

However, you can use a computer to simulate a brain if you have a precise model and enough speed, but you can't use a brain to simulate a computer, even with a precise model - most brains are incapable of simulating on their own even the first mechanical computers (say, those which multiplied), slide rules, artillery tables with precomputed values, or even the paper used to reliably store phone numbers or a line of text.

These external "tools" are in fact extensions of our brain - they are just accessed differently from internal memory: the access passes through motion (translation), which is simply the lowest level of output the system produces - output of data to physical reality and translation of data within physical reality. At the lowest level, output has to be decoded down to the "machine code of the Universe", the "language of the particles" (my terms).

That's the same as the memory hierarchy in computers - from the fastest registers, through the levels of caches, RAM, hard disks and the Internet, to external media which have to be physically moved in order to become accessible.
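The analogy can be made concrete with rough ballpark latency figures (my own approximations, including the invented "shelf archive" level standing in for physically moved media): each step outwards in the hierarchy is orders of magnitude slower, just as reaching for a notebook is slower than recalling from memory.

```python
# Rough, order-of-magnitude access latencies for the memory hierarchy,
# extended with a hypothetical "shelf archive" level for media that must
# be physically fetched. Values are approximations for illustration only.

LATENCY_NS = {
    "register":      0.3,
    "L1 cache":      1,
    "RAM":           100,
    "hard disk":     10_000_000,      # ~10 ms seek
    "internet":      100_000_000,     # ~100 ms round trip
    "shelf archive": 60_000_000_000,  # ~1 minute to fetch physically
}

for level, ns in LATENCY_NS.items():
    print(f"{level:>13}: {ns / LATENCY_NS['register']:>12.0f}x a register access")
```

The span covers roughly eleven orders of magnitude - comparable to the gap between recalling a fact and travelling to a library for it.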

Culture starts from the alterable yet stable, intentionally structured environment, which allows artifacts/memory to survive longer than a brain and outside of a brain - that includes the overlapping lives of the different generations, allowing language to be transmitted.

A human brain alone - without the external tools, without an alterable, structured, stable environment, without intelligent beings around to teach it things and especially to teach it a sophisticated language - wouldn't go much further than a chimpanzee's brain in similar conditions.