Tuesday, January 8, 2013


Universal Artificial Intelligence - Matches of Todor Arnaudov's teenage works from 2001-2004 to later (2005-2007), more popular related works by Marcus Hutter and Shane Legg; Additional Discussions and other works by Todor; and a new Challenge/Extension of the reasons for Herbert Simon's "Bounded Rationality" (Satisficing)

Abstract/Introduction
I've mentioned this in less detail in translations and in slides; here I show more explicitly the matches regarding the UAI definitions of Hutter and Legg, condensed in this abstract:

A Formal Definition of Intelligence for Artificial Systems (abstract)

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.97.7530

See also: "Universal Intelligence: A Definition of Machine Intelligence" (2007)

Slides, prepared by Todor for teaching that paper to students in his AGI courses
Slides in Bulgarian (Презентация на български)
Slides in English

In another section I refer to other related works of mine and discuss Herbert Simon's bounded rationality, where I give an additional reason for the "satisficing".

The end part of the work contains an excerpt from a 2001 work of mine, "Man and Thinking Machine", which introduces AGI tests.

A natural continuation of this work would be to extend that discussion and my views on testing machine intelligence. One of the first publications in this blog was on this topic, informally introducing the "Educational Test" - stay tuned.

* M. Hutter has earlier publications on UAI, too (see AIXI); I myself found him and Schmidhuber in September 2009.
** Regarding the compression-prediction, shortest/simplest (algorithmic complexity) part - that's an intrinsic part of my "teenage theory", but I won't elaborate on it in this very article.

Copyright note: The texts written and translated by Todor are (C) Todor, All rights reserved etc. :)
The cited texts from the other authors are theirs; they are used for research purposes and for appropriate comparison and illustration.***

*** More on copyright - later... :) It was also discussed in the teenage works; one was recently reprinted here, but in Bulgarian only.
**** Languages used: English and Bulgarian/Български

***** Last edited: 8 Jan 2013, 03:30

Contents
1. "Nobody knows what intelligence is"...
2. Definition of Machine Intelligence
3. Supposed Reasons for Behaviorism Avoidance by the Psychologists Community
4. Introduction of the Principles of Human Behavior Drive - More Pleasure and Less Displeasure for Maximum Possible Time
5. Observation, Reward, Action and Bounded Reward
6. Meta-Physical reasons for reward and punishment
7. Local maximum, biggest sum of pleasure for a given time ahead (...) - a multi-topic paper from 2004
8. Extension to the Bounded rationality - there are no non-trivial global undeniable single optimum external rewards for intelligent agents as complex as humans*.
9. Physical vs Cognitive rewards
10. The period ahead should be short enough for the target resolution, range, domain etc.
11. Defining Universal Intelligence, Turing test faults, better tests...
12. Memories and background of the author in 2001
Appendices:
- Original texts in Bulgarian of part of the citations
- Link to another short discussion on the matches between Hutter and Legg's setting and my works, published in an article from 2/2010


Article Begin

1. "Nobody knows what intelligence is"...


Hutter and Legg 
"...A fundamental difficulty in artificial intelligence is that nobody really knows what intelligence is, especially for systems with senses, environments, motivations and cognitive capacities which are very different to our own..."

Todor Arnaudov


That's one of the absurd claims of some AGI-ers and of the AI-ers, and IMHO it's perhaps caused by linguistic "insufficiencies" leading to conceptual ones. Anything that a mind calls by a word has, and should have, a referent, and the referent is something available in the sensory data, or in data that can be derived or implied from it. Anything can be traced to some roots. Any two things have something in common when seen generally enough, even if that's only that they are "things", "entities", or are in the same memory - and namely those general things are "what intelligence is", or whatever "anything that's named and used anyhow" is.

If two "things" are so different that the common can't be found ("objectively" or for the evaluator), then obviously those two things cannot be put in the same category, or they need their categorization and classification to be revisited. Also, if a concept is so vaguely defined, that it refers to too many things - then its meaning should be specified more precisely and explicitly, or to be explained by referring to sensory records examples.

Fortunately, Hutter and Legg do claim that they know what universal intelligence is:

2. Definition of Machine Intelligence
Hutter and Legg:
"...To formalise this we combine the extremely flexible reinforcement learning framework with algorithmic complexity theory. In reinforcement learning the agent sends its actions to the environment and receives observations and rewards back. The agent tries to maximise the amount of reward it receives by learning about the structure of the environment and the goals it needs to accomplish in order to receive rewards. ..."

Todor Arnaudov:
IMHO RL is generally a trivial direction, known since "hedonistic" and utilitarian philosophy and economic theories, and that approach should have been taken long ago; but there's another research nonsense I wish to criticise: it didn't seem obvious due to the psychologists' "fear" of and disgust towards behaviorism.

3. Supposed Reasons for Behaviorism Avoidance by the Psychologists Community

My explanation is the (general) hard-science insufficiencies of many psychologists. I have heard and seen for myself their "hatred" of behaviorism, which is not surprising - it treats animals and humans as deterministic machines etc., while most people, including psychologists:
- Prefer believing in magic rather than really understanding/defining the subject matter in "technical" terms.
- Don't really understand behaviorism; perhaps they don't see the asymmetry between the data exchange inside the system and the data exchange with the environment (observations are not enough for a complete model - it's a "Hidden Markov Process"), which doesn't diminish the behaviorists' observations and laws, once particular assumptions and a particular resolution are set on the unobservable underlying data.
- See determinism and behaviorism wrongly, treating randomness as "free will" - there is randomness in human behavior, but results achieved by randomness are not merits of the evaluated person.
- Don't see the multi-agent/non-integral self, which is evident both in behavioral data (any observation of a human being is behavioral data and an experiment with results), in neuroscience (how the brain works) and in decision-making "irrationality". The multi-agent/non-integral self explains the apparent irrationality of human behavior, while the subsystems/virtual "selves"/virtual control-causality units/brain modules can each be rational. That confusion is discussed many times in my old works regarding the soul, free will, desires to be "magical" and reasons to believe so, the confusions about rationality etc. - in the works cited here, in many others and in public email-group discussions.

See for example:
http://artificial-mind.blogspot.com/2012/11/nature-or-nurture-socialization-social.html
http://artificial-mind.blogspot.com/2012/02/philosophical-and-interdisciplinary.html
4. Introduction of the Principles of Human Behavior Drive - More Pleasure and Less Displeasure for Maximum Possible Time


From the abstract of "Teenage Theory of Universe and Mind, Part 2" (Conception about the Universal Predetermination, Схващане за всеобщата предопределеност, част втора) - an epistolary work of Todor Arnaudov in a dialog with the philosopher Angel Grancharov.
First published in "Sacred Computer" e-zine in 9/2002, in Bulgarian.
The drive towards lesser displeasure is the engine of human behavior. The primary/initial pleasure is our desire to be fed and dry. Subsequently pleasure is conditioned with the "higher departments" of the cortex of the cerebrum. After learning, new forms of "mental" pleasures are created, besides the directly-sensual ones: joy-sadness, pride-shame, pleasure-displeasure, happiness-misery etc.

"The self-preservation instinct" is only a special case, when man believes, that his life is bringing him less of displeasure, than death would cause. After death is realized as "the lesser of two evils" than living, human purposefulness is directed towards it.... *2002-1 (see original text)

In this work I also note that the religious concepts of "heaven" and "hell", which are spread across most religions, illustrate and justify this fundamental drive, as extremes at the two ends of the pleasure-displeasure (reward/punishment) spectrum. The best reward is endless pleasure and calmness; the worst punishment is endless pain.

5. Observation, Reward, Action and Bounded Reward


Hutter and Legg:  (bold added by Todor)
"...To denote symbols being sent we will use the lower case variable names o, r and a for observations, rewards and actions respectively. The process of interaction
produces an increasing history of observations, rewards and actions, o1r1a1o2r2a2o3r3a3o4 : : :.
The agent is simply a function, denoted by , which is a probability measure over actions conditioned on the current history, for example, (a3jo1r1a1o2r2). How the agent generates this
distribution over actions is left completely open, for example, agents are not required to be Turing
computable. (...)
Additionally we bound the total reward to be 1 to ensure that the future value ... is finite. "
...
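To make the o, r, a protocol concrete, here is a minimal sketch in Python, under my own toy assumptions: the environment is an invented two-armed bandit (not from the paper), and a geometric rescaling of the payoffs is one simple way to respect the "total reward at most 1" constraint. The agent is just some mapping from the history to actions, as in the definition.

```python
import random

def environment(action, t):
    """Toy two-armed bandit: action 1 pays off more often than action 0.

    The raw payoff is rescaled by 0.5**(t+1), so the total reward over the
    whole (even unbounded) interaction stays below 1, echoing the paper's
    constraint on environments.
    """
    observation = t % 2                        # an arbitrary observable signal
    p_pay = 0.8 if action == 1 else 0.2
    raw = 1.0 if random.random() < p_pay else 0.0
    return observation, raw * (0.5 ** (t + 1)), raw

def agent(history):
    """Crude learner: estimate each action's raw payoff rate, pick the best,
    with occasional exploration. Any mapping history -> action would fit the
    framework; Turing computability isn't even required by the definition."""
    totals = {0: 0.0, 1: 0.0}
    counts = {0: 1, 1: 1}                      # start at 1 to avoid div by zero
    for (o, reward, raw, a) in history:
        totals[a] += raw
        counts[a] += 1
    if random.random() < 0.1:
        return random.choice([0, 1])
    return max((0, 1), key=lambda a: totals[a] / counts[a])

history, total_reward = [], 0.0
for t in range(50):
    a = agent(history)
    o, reward, raw = environment(a, t)
    history.append((o, reward, raw, a))
    total_reward += reward

print("total reward (bounded above by 1):", round(total_reward, 4))
```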

Todor Arnaudov
From  "Teenage Theory of  Universe and Mind, Part 3", first published in July 2003 in "Sacred Computer" (in Bulgarian)

39. People are afraid of retribution – displeasure, initiated by conscience – like with animal training, where man is taught to feel guilty, to suffer.

[bold - added now]

If the “programming”, the training of a conscience, is for preventing crimes and harms, then conscience is necessary for a stable and robust functioning of society, expressed as lowering as much as possible the probability of events which lead to a bigger displeasure, which would make the distance of the system to its goal (pleasure) bigger.

Conscience and the systems describing “decent and indecent” and “good and bad” behavior should lower the total amount of displeasure and increase the pleasure on the whole, at the expense of single members of the society.

Using appropriate programming (breeding, upbringing, training), society – the aggregation of interacting agents, each of them aiming at suffering less displeasure – is aiming at limiting the chance of achieving a single “big pleasure” which decreases the total amount of pleasure in society.

For example, if someone robs a bank, he gets rich and becomes a possessor of means for achieving higher pleasure in the system of society. At the same time, though, he causes a lot of displeasure to people connected with the bank, and they are not only the ones who were inside the room where the robbery happened.

The people there were frightened, there could have been hurt or even dead ones [this reflects also on the people related to these people], and others may lose money [feeling of security, prestige, job, ...].

So the pleasure of the system consisting of robber-and-people-connected-with-the-bank falls down; that's why bank robberies are unacceptable for society.

The pleasure of one single man has a finite magnitude.

The pleasure, the happiness and so on, represents the degree of completion of the behavioral goal of a man - a higher pleasure and a lower displeasure.

A single man cannot be “infinitely happy”; that's why one who got happier at the expense of several people made a little unhappier could result in a lower total sum of satisfaction.
...
And another section, which is more meta-physical and goes further than H. and L.'s work:
...

6. Meta-Physical reasons for reward and punishment

40. Sometimes the upbringing is a pure “animal training”. For example: “one should enter here only with a hat on, there – only with a hat off; the fork should stay over here, the knife – there; you should fold the serviette exactly like this” etc.

It's “an animal training”, because if you have not been taught that not obeying these rules is “ill-bred”, you would not feel remorse, would not be ashamed etc., because in these circumstances the displeasure comes not from the substance of the rules, but from the breaking of a rule that should have been obeyed, according to a particular prescription, created by someone.

According to the Theory, Universe is a perfect computer, always following its rules (laws) without any mistake. That's why the errors in the operation of the derivative computers (for example men), that is the occurrence of an undesirable condition, are perceived as “breaking a Universal law”. This brings displeasure, because the goal of the derivative computers is to imitate Universe – [and this includes] not to make any mistakes. [To execute exactly their program, in their domain.]

If a well-behaving one, trained like an animal, sees someone else who does not obey an artificial rule, such as putting a hat off, the well-behaving one feels a discomfort, a displeasure caused by the difference between the expected == predicted and desired, and the reality. This difference is the “mistake” that results in a displeasure.


Another work, which was translated in 4 chunks; the original consists of a paper, a short comment by another person, and then two very long answers which are like new, bigger papers covering more topics.

7.  Local maximum, biggest sum of pleasure for a given time ahead (...) - a multi-topic paper from 2004

Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence

Causes and reasons for human actions. Searching for causes. Whether higher or lower levels control. Control Units. Reinforcement learning.

Motivation is dependent on local and specific stimuli, not general ones. Pleasure and displeasure as goal-state indicators. Reinforcement learning.

Intelligence: A Search for the biggest cumulative reward for a given period ahead, based on a given model of the rewards. Reinforcement learning.

Part 3 - first published 3/2004 at bgit.net, and in "Sacred Computer" e-zine in Bulgarian
http://artificial-mind.blogspot.com/2010/02/causes-and-reasons-for-any-particular.html

Part 4

The purpose is a GOAL: a state the Control Unit aims at. The "thirst for a purpose/meaning" is a search for local maxima, indicating to the Control Unit how close the goal is. [E.g. - an orgasm... ;) ]
(...)
I think that's what a human does and I think [as of 2/2004] this is "mind" [intelligence]: a search for pleasure and avoidance of displeasure by complex enough entities for a given period ahead [prediction, planning]; the "enough-ness" is defined by the examples we already call "intelligent": humans at different ages, with different IQ, different characters and ambitions.
The behaviour of each of them could be modeled as a search for the biggest sum of pleasure (displeasure is negative in the sum) for a selected period of time ahead, which is also chosen at the moment of the computation.

(...)

PS. When I was about to write "a local maximum", I was about to add "for a close enough neighborhood" (a very short period), influenced by the Mathematical Analysis I had been studying lately; however mind is much more complex than the Analysis, i.e. it's based on many more linearly independent premises, which are impossible to deduce from each other.

It's not required that the period for the prediction be short, because, as in the example with the chocolate, caries and the dentist, immediate actions can be included in functions that give results shifted far ahead in the future; at the same time, the same action can be regarded in the immediate future - in the example situation, the eater will feel ultimate pleasure within the following one second.

One amongst many functions can be chosen or constructed, and they can all give either negative or positive results, depending on minor, apparently random details - the mood of the mind at the particular moment: which variables are at hand, which ones the mind has recalled, what the mind is thinking of right then...
(...)
 ////Notice: The period ahead, locality, period chosen at the moment of computation (it can change)///
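Here is a minimal sketch of that notice, with invented numbers (the chocolate/dentist reward stream is my own toy encoding of the example from the work): the same action is scored over different periods ahead, and the "best" choice flips depending on the horizon chosen at the moment of the computation.

```python
# Toy illustration (my own numbers): eating chocolate now vs. dentist later.
# rewards[t] = expected pleasure (+) / displeasure (-) at future step t.
eat_chocolate  = [+5, 0, 0, 0, 0, 0, 0, 0, -3, -8]   # tasty now, dentist later
skip_chocolate = [0] * 10

def cumulative(rewards, horizon):
    """Sum of pleasure over the selected period ahead (a *local* criterion)."""
    return sum(rewards[:horizon])

for horizon in (1, 5, 10):                # the period is chosen at decision time
    eat = cumulative(eat_chocolate, horizon)
    skip = cumulative(skip_chocolate, horizon)
    best = "eat" if eat > skip else "skip"
    print(f"horizon={horizon:2d}: eat={eat:+d} skip={skip:+d} -> {best}")

# horizon= 1: eat=+5 -> eat   (the local maximum wins)
# horizon=10: eat=-6 -> skip  (the dentist dominates the sum)
```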

See also the other parts of the work, which cover many things, including the locality of the rewards, the specifics of the environment, and what rewards can be taken in the very near locality.


http://artificial-mind.blogspot.com/2010/02/motivation-is-dependent-on-local-and.html

Motivation is dependent on local and specific stimuli, not general ones. Pleasure and displeasure as goal-state indicators. Reinforcement learning.

Causes and reasons for human actions. Searching for causes. Whether higher or lower levels control. Control Units. Reinforcement learning.



http://artificial-mind.blogspot.com/2010/02/causes-and-reasons-for-any-particular.html
E.g. if you are a millionaire, you can still buy a donut or a candy from the street, and you can still feel this is the most rewarding and good thing to do at the moment of making this decision, instead of walking by. A naive opinion is that "a millionaire wouldn't do it, because he only goes to fancy restaurants" etc. - see more in the text.

8. Extension of the Bounded rationality

My reflections in the long multi-topic "work in 4 chunks" from 2004 seem to be related to Herbert Simon's economic theory, for which he received a Nobel Prize - "bounded rationality" and "satisficing". I haven't familiarized myself with the original papers about it so far, only short mentions; now I see a more specific definition:

Regarding the interpretation here: http://www.economist.com/node/13350892


The Economist, on Herbert Simon's theory
"...Simon maintained that individuals do not seek to maximise their benefit from a particular course of action (since they cannot assimilate and digest all the information that would be needed to do such a thing)... Not only can they not get access to all the information required, but even if they could, their minds would be unable to process it properly. The human mind necessarily restricts itself. It is, as Simon put it, bounded by “cognitive limits”."

Todor Arnaudov
I see that my work has added an extension of that theory.

The cognitive limits are true; the point about attention being a resource that has to be managed is also true. However, I challenge the view that the restriction and the "satisficing" are only due to them. Obviously there are also physical limits, one universal resource being time. Attention is assumed to be mapped to it, but time is also a cost for traveling. There are more costs still, which are neither monetary nor time, and are not properties of the products, but are crucial and are not mentioned (see below).

Finally, there is one other issue with the agents, which makes the usage of a global reward function, a global maximum of something, unreliable:

There are no non-trivial global undeniable single optimum external rewards for intelligent agents as complex as humans* (see note)

The agents often don't know the optimum and cannot compute it ("don't know what they want") - there are contradicting or vague rewards and optimums, and dependencies on the current state of the mind. The agent doesn't always know what's optimal, or the exact value of the reward he would get (and he may change his mind right away after the decision). Take into account that mind is not integral and there's no intrinsic integral self, but an integral of selves. (See... )

An individual also can't decide "rationally" when:

- He knows that anything from a set of possible actions could make him feel good and would not hurt him (the donkey between the two haystacks);
- There are different domains and different incompatible rewards running in parallel, different "virtual control units", different resolutions, different time-spans over which to make a prediction about the expected reward; and the agent may also expect many unpredictable reactions from the environment, which may change the reward into a punishment if some events happen a certain way.

In Bulgaria people say: "You don't know what you win when you lose, and what you lose when you win."

What's better - 2 kilos of bananas or 1.5 kilos of apples and 0.5 of oranges, if the prices are the same? Or why not a kilo of bananas and a pint of beer? Or a candy, a kilo of potatoes and a toothbrush?

It depends on the full history and on some random circumstances, such as where exactly you are, what you have discussed at this very moment, how far you are from the markets/shops, whether you are busy and how much, how expensive traveling is for you in terms of time, money, expected troubles, boredom etc.

What's better - to spend 34 minutes to go to a shop where the product costs $X less, or to spend more money but not to travel? How much cheaper should it be, to travel how far? Does that include transportation costs, or not? When? In what conditions of the traffic or the streets (snowy, icy, rainy... traffic jam, no traffic jam), what weather, what mood of the agent; is she alone or shopping with friends; what other tasks does he have to complete and how important are they; what age is he, how healthy is he, etc. etc. All of those, at every specific moment, modulate the behavior.

One of my claims is that human decisions, in this reward-framework of mine, should be local: some local maximum of reward (pleasure) is searched for by particular virtual causality-control units (among the many in the mind), the ones which are in charge.

Often there's no well-defined global maximum and best choice for the entire agent, only for the sub-units over small periods - except in trivial cases, when the agent's desires are sharp and simple enough, and when the resources are far greater than the prices of the particular rewards.
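A small sketch of the contrast, with invented utilities (the option names come from the fruit example above; the numbers are purely illustrative): a "global maximizer" needs a single commensurable score over all drives, which I argue real agents lack, while a satisficing sub-unit simply takes the first option that clears its own aspiration level.

```python
# Invented utilities; the point is only structural. Each option is scored
# differently by different virtual control units (taste, budget, time),
# and there is no single agreed-upon scale between them.
options = {
    "bananas 2kg":          {"taste": 6, "budget": -4, "time": -1},
    "apples+oranges":       {"taste": 5, "budget": -4, "time": -2},
    "bananas 1kg + beer":   {"taste": 7, "budget": -4, "time": -3},
    "candy+potatoes+brush": {"taste": 4, "budget": -4, "time": -2},
}

def global_maximizer(options, weights):
    """Needs commensurable weights over all drives - which real agents lack."""
    score = lambda o: sum(weights[k] * v for k, v in options[o].items())
    return max(options, key=score)

def satisficer(options, unit, aspiration):
    """One control unit in charge: take the first 'good enough' option."""
    for name, drives in options.items():
        if drives[unit] >= aspiration:
            return name
    return None  # nothing clears the bar; keep searching, or do nothing

print(global_maximizer(options, {"taste": 1.0, "budget": 1.0, "time": 1.0}))
print(satisficer(options, unit="taste", aspiration=5))  # first match wins
```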

 See also: 
http://artificial-mind.blogspot.com/2012/11/nature-or-nurture-socialization-social.html

http://artificial-mind.blogspot.com/2011/10/rationalization-and-confusions-caused.html


9. Physical vs Cognitive rewards

Notice that that's one part of my definition - the "physical" reward. It's a coarse-grained neurotransmitter-hormonal-subcortical driver of the neocortex; it's generally related to the basal ganglia reward system, but also to other neurotransmitter-emitting emotion drivers such as the amygdala, the PAG and the hypothalamus.
This reward system - in fact all those different coarse-grained neurotransmitter/neurohormone/gross operations - is conditioned/connected to the higher, finer-grained "cognitive" one, which is also about prediction and can be expressed in terms of rewards, but the rewards are different.


One of the differences is that the physical system is about matching with the desired, while the cognitive one is about matching with the predicted.

The cognitive system also predicts the amount of physical reward, and the physical rewards cause "glitches" in the cognitive one; that's the reason for the apparently "irrational" behavior of humans and one of the sources of confusion regarding the notion of "free will". See many discussions on this in past works.
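A schematic sketch of that distinction, under my own toy formulation (the scoring functions are invented, not taken from any specific neuroscience model): the physical signal scores outcome-vs-desired, the cognitive signal scores outcome-vs-predicted, and the two can pull in opposite directions.

```python
def physical_reward(outcome, desired):
    """Match with the desired: how close did we get to what was wanted."""
    return -abs(outcome - desired)

def cognitive_reward(outcome, predicted):
    """Match with the predicted: how well did the model anticipate reality."""
    return -abs(outcome - predicted)

# Toy numbers: the agent desired 10, predicted 3, and reality delivered 3.
outcome, desired, predicted = 3, 10, 3
print("physical :", physical_reward(outcome, desired))     # -7: unpleasant
print("cognitive:", cognitive_reward(outcome, predicted))  #  0: well predicted
# The prediction was right (a cognitive "success") while the desire was unmet
# (a physical "failure") - two reward systems disagreeing about one event.
```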

10. The period ahead should be short enough for the target resolution, range, domain etc.

Regarding the period ahead: actually it should be "short enough", but "short" can have different meanings - short enough for the target resolution and range, in the target domain etc.

It's not always short in the literal continuous terms of the Calculus, if put in one resolution; but it can be spread over different resolutions, and each should be short enough and smooth enough for the prediction capabilities at the particular level of abstraction/selection of rewards and the particular resolution. See the work for the example with the chocolate, caries and the dentist.
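A toy sketch of "short enough per resolution", again with invented numbers: the same "3 steps ahead" horizon is short at both levels, but at the fine resolution it spans seconds while at the coarse one it spans months, and the two levels can disagree about the same act.

```python
# The same future, viewed at two resolutions. A horizon of "3 steps ahead"
# is short at both levels, but it covers very different spans of time.
fine_rewards   = [+5, +1, 0, 0]      # seconds ahead: the chocolate tastes good
coarse_rewards = [+6, -3, -8, 0]     # months ahead: caries, then the dentist

def short_horizon_sum(rewards, horizon=3):
    return sum(rewards[:horizon])

print("fine (seconds) :", short_horizon_sum(fine_rewards))    # +6 -> eat
print("coarse (months):", short_horizon_sum(coarse_rewards))  # -5 -> regret
# Each level uses a horizon that is "short enough" *for its own resolution*;
# the conflict between levels is settled by whichever unit is in charge.
```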

* There are no non-trivial global undeniable optimum rewards for intelligent agents as complex as humans
There can be, of course. The true drives at the micro level are neurotransmitters, and there are simple goals: infinite pleasure for an infinite amount of time - in Heaven. This is abstract, however, and before going there, many micro actions have to be performed, which are specific and fall into the above cases. Physical pleasures are, or should be, bounded, and they should be diverse and varied enough - a "free economy" instead of a monopoly - in order for the agent not to get stuck in elementary self-sustaining cycles, as in drug addiction. In fact in reality there are such self-sustaining cycles and "wirehead" behavior, but it's less evident - the cycles are longer, there are overlaps, variations etc. If human behavior is studied in lower-resolution terms, though, humans are wireheads, and that's why Freudian etc. theories about "everything being driven by sex" are so successful. In fact it's driven by dopamine, oxytocin, adrenaline... name them, which have their easiest ways to be produced: love, sex, games, gambling, etc. See also: ...

11. Defining Universal Intelligence, Turing test faults, better tests...
Hutter and Legg 
(bold by Todor)
"...Universal intelligence spans simple adaptive agents right up to super intelligent agents like
AIXI, unlike the pass-fail Turing test which is useful only for agents with near human intelligence.
Furthermore, the Turing test cannot be fully formalised as it is based on subjective judgements.
Perhaps an even bigger problem is that the Turing test is highly anthropocentric, indeed many have
suggested that it is really a test of humanness rather than intelligence. Universal intelligence does
not have these problems as it is formally specified in terms of the more fundamental concept of
complexity...."

Todor Arnaudov

First a few notes on "formalness".

I don't think that the observation, reward and action, and the normalized reward, in this form of the letters o, r, a and [0,1], are more formal - clear and explicit - than the way they are stated in my works.

On the rest from this quote:


".... " Todor, first published in 11/2001 in "Sacred Computer" e-zine*Turing tests has many weaknesses. It works only for "experienced" enough artificially intelligent agents, and even they could be easily revealed, if one asked them questions related, for example, to their parents or childhood. In order to slip from this awkward situation, the MM should either lie, or have information about its childhood being "suggested" primarily.
A thinking machine that's too fast would have to be artificially slowed down, because the instantaneous answers would reveal its non-human identity. A slow thinking machine, that would need one full minute to answer even elementary question will also be immediately recognized. Inhumanly complicated thoughts of another thinking machine will also suggest human evaluator that he doesn't communicate with an individual from his specie...

Turing tests has too many "humanized" requirements. [Also] It is not required to know everything in order to think. The machine shouldn't have to "lie" to humans that it's a human, in order to prove, that it's a thinking being! When determining machine's intelligence (thinking capacities), capacities to learn and to process information, relative the quantity of its current knowledge, i.e. to measure not only its absolute intelligence, but also the potentials of the machine.

Thinking (intelligence) doesn't appear at once. It's a result of development, which is apparent also in humans. Aren't the first steps made and the first words uttered by a one-year-old child, compared to her helplessness one year ago, displays of progress? That's precisely what we should search for in the machine - development, [because] thinking is a progress, and not knowing-it-all.

Probably many principles of operation of a thinking machine can be conceived. Some of them would be better than the rest by their:
- simplicity
- requirements regarding the computational performance of the underlying programmable system
- initial amount of information in the system when it's turned on [less is better]
- etc.

If we wish to verify/evaluate how "good" the "intelligent algorithm" we have developed is, another way to do it is by putting the system in a very "tough" (hard, difficult) information situation, i.e. to let it develop with a maximally narrow information input flow.

The capacity of the machine to "get smart"/get intelligent under such extreme conditions determines the potential of the AI [agent]. The less data an AI agent requires in order to get intelligent, the more it resembles human intelligence and the "ideal AI"/"perfect AI".

I define the "ideal AI" as the simplest and the shortest algorithm, that possess minimal initial information, and requires minimal input information, in order to develop to a thinking machine. Other kinds of "ideal AI" can be specified also - ones that need the least amount of computational hardware,  that use most efficiently/optimally [originally written "in full"] memory, the ones having the highest relative performance (speed) etc.


The human brain can be taken as a starting point and an example. Without sight or hearing, and even without both of these main human senses, a man (the brain) can become intelligent. An example of that are deaf-blind people. [There are ones who have University degrees - without seeing or hearing anything.]

Thinking machine IS NOT an artificial man

The recognizability of someone's "artificialness" doesn't diminish his thinking (intelligence)! It is not necessary to strictly anthropomorphize the machine. Perhaps it's not surprising that man, believing that he's a creature made in the image of God, is inclined to create a new creature in his own image. It would be wonderful. However, we shouldn't introduce our own "technical" and principal faults (weaknesses) just to make it more resemblant to us. There's room for both humans and thinking machines in the Universe [see note 2]. The thinking machines won't be limited by the Earth like us [see note 3]. They will be capable of functioning even without an oxygen supply, and the weightlessness of outer space or super-gravity will not harm them, during acceleration or on cosmic objects that have higher gravity than the Earth's. The machines will be more resistant to radiation and to higher or lower temperatures, will have more senses etc...


12. Memories and background of the author in 2001...

An interesting detail is that I was 17 years old, at the beginning of 11th grade, at the time of that "Man and Thinking Machine" paper, which I consider the Bulgarian AGI and "Cosmist" Manifesto on the Internet. There was one earlier "manifesto", an essay that I sent to a "millennium competition" at the end of 1999, but it wasn't published. :))

I had read very few popular AI, psychology and neuroscience books, which at best were from the 80s, often the early 80s - the titles and short notes are still in my reader's diary. :) Many were first published in the 60s-70s, and the biggest "wikipedia" from that time, for me, was a Bulgarian encyclopedia printed in 1974. Some high-school textbooks, for example in Biology, were more recent, from the 90s.

Unfortunately I couldn't find more contemporary literature in the libraries to which I had access, and of course the newest research literature doesn't reach normal public libraries. I knew about a "central scientific library", but it was in Sofia (I was in Plovdiv), which was prohibitive in itself, and I doubt that I would have been allowed to go there and use it.

There was one nonsense rule that prevented me from using even the biggest local library before a certain age, because they didn't allow children below 14 (or 16?) to subscribe. We once went with my father, and the woman said: "Isn't he too little? Go to the children's section!" What an insult! It was a poor small house nearby... Not that the central library happened to be much better regarding contemporariness; my country and its libraries have been in a poor condition since the socialists' fall in 1989. They had some items from the late 80s and 1990 or so, rarely more modern technical books on programming and software products.

Anyway, it has all been enough for me to see that most of the AI part was "a wrong direction". And as I later found, "AI: A Modern Approach", a book I'd heard of as "a classic", wasn't modern at all, so maybe I didn't miss a lot by not reading it back then.

What about the Internet? Well, I had managed to arrange access from my home a year earlier. There might have been pirated books in English and Russian, especially on Russian servers, but I don't know. I didn't use e-mule etc. though (Napster - neither).

My PC at the time was poor: as of 2001 it was a Pentium at 90 MHz with 16? MB RAM, an ~850 MB HDD and a VGA monitor at 640x480 :D - at least it had a flat CRT screen. (I upgraded to 32 MB, but I don't remember exactly when.) CD recorders were too expensive, so I was constantly in debt for disk space and couldn't afford to download much. The Internet speed was also poor: 3-4 KB/s was a good one, over a dial-up phone connection. And you could be online only from time to time, mostly during the nights, due to the lower prices and more free lines.

What about my English? Well... I always got an A at school :-D, so I did use it to browse the net, but I didn't really search for or study scientific papers etc. seriously at that time. I was busy doing, seeing and finding so many other things. After all, I knew it and had invented it myself - why go read others' works so much!?... ;)

...Continues...

Appendices


Appendix 2002-1

Original text from the abstract of "Teenage Theory of Universe and Mind, Part 2" (Conception about the Universal Predetermination)

 Из резюмето на "Схващане за всеобщата предопределеност, Част втора", публикуван за първи път в сп. "Свещеният сметач", септември 2002 г.

(...) Стремежът към по-малкото неудоволствие - двигател на човешкото поведение. Първичното удоволствие е желанието да бъдем "сити" и "сухи". Впоследствие удоволствието се свързва и с "висшите отдели" на кората на крайния мозък, при което освен прякосетивното усещания на удоволствие-неудоволствие, след обучение, се създават и "мисловни" удоволствия и неудоволствия: радост-тъга, гордост-срам, удовлетворение-неудовлетворение, щастие-нещастие и пр. 
"Инстиктът за самосъхранение" е само частен случай, при който човек смята, че животът му носи по-малко неудоволствие, отколкото смъртта би му причинила. След като смъртта се осъзнае като "по-малкото зло" от живота, човешката целеустременост се насочва към нея. (...) 
Appendix 2001-1

Оригиналният текст от "Човекът и Мислещата Машина", (Анализ на възможността да се създаде Мислеща машина и някои недостатъци на човека и органичната материя пред нея) /Първа статия от поредицата "Мислещата Машина"/, публикуван в сп. "Свещеният сметач", бр. 13 (11)/2001 г.
(...)

Мислещата машина НЕ Е изкуствен човек
Разпознаването на "изкуствеността" на някого не омаловажава мисленето му! Не е нужно непременно ММ да се антропоморфира. Сигурно е естествено човекът, смятайки се за същество сътворено по подобие Божие, да изпитва влечение да сътвори ново същество по свое подобие. Това би било прекрасно. Не бива обаче да внасяме в ММ своите "технически" и принципни недостатъци само за да прилича повече на нас. Във вселената има място и за хората и за ММ 2. Мислещите машини няма да бъдат ограничени като нас от Земята3. Те ще могат да действат и без наличие на кислород, космическата безтегловност или свръх-тегловност (ускорения, косм. тела с по-голяма гравитация) няма да им влияят зле, те ще са по устойчиви на радиация и високи/ниски температури, ще имат повече сетива, и т.н..

 ....Тестът на Тюринг има много слаби места. Той работи само за достатъчно "опитни" изкуствени интелекти. Но дори и те биха могли да бъдат лесно разпознати от човека, ако той им зададе въпроси свързани, например, с техните родители и детство. За да се измъкне от неудобното положение, ММ ще трябва или да излъже, или информацията за "детството" й да бъде "внушена" предварително. Скоростта на прекалено бърза ММ ще трябва изкуствено да се забавя, тъй като мигновените отговори ще издадат машината. Бавните ММ, които се нуждаят например от минута, за да отговорят и на най-елементарния въпрос също ще бъдат разпознати на момента. "Нечовешки" сложните размисли на друга ММ пак ще подсетят човека, че не общува със същество от своя вид... 
  Тюринговият тест има прекалено "очовечени" изисквания. Не е необходимо да знаеш всичко, за да мислиш. Не е необходимо машината да "излъже" хората, че е човек, за да докаже,че е мислещо същество ! При определяне на машинните мисловни капацитети биха могли да се проверяват уменията на машината да учи и да обработва информация, спрямо количеството на нейните текущи знания, т.е. да се измерва не само абсолютната интелигентност, но и потенциала на машината. Мисленето не се появява изведнъж. То е резултат на развитие, което се забелязва и при хората. Нима първите крачки и думи изречени от едногодишното дете, сравнени с безпомощността му преди година, не показват развитие? Именно това трябва да търсим у машината - развитие, мисленето е прогрес, а не всезнайство. 

  Вероятно могат да се сътворят огромен брой принципи на действие на Мислеща Машина. Някои от тях ще бъдат по-добри от останалите било по простота, било по изисквания към бързодействието на използваната програмируема система, по обема на началната информация в системата и пр. Ако искаме да проверим колко е "добър" създаденият от нас "разумен алгоритъм" можем да поставим машината в много "тежко" информационно положение, т.е. да я оставим да се развива при максимално стеснен приток на информация към ИИ. Възможността да "поумнее" при такива изключителни условия определят потенциала на ИИ. Колкото по-малко данни са необходими на един ИИ, за да стане разумен, толкова повече той се доближава до човешкия и до "идеалния ИИ", под което разбирам възможно най-прост и кратък алгоритъм, който притежава минимална начална и се нуждае от минимална входна информация, за да се развие до Мислеща Машина. Могат да бъдат посочени и други видове "идеални ИИ" - нуждаещи се от най-малко апаратни средства, използващи най-пълно паметта, имащи най-високо относително бързодействие и т.н. 
  Човешкият мозък може да бъде взет като отправна точка. Без зрение или слух, дори при липса и на двете основни човешки сетива, човекът (мозъкът) може да стане разумен. Пример за това са сляпо-глухите индивиди.


Appendix 3:

There's a short discussion on the matches with Hutter's works in that translation, too:
http://artificial-mind.blogspot.com/2010/02/intelligence-search-for-biggest.html

Wednesday, December 19, 2012


Five Principles of Developmental Robotics... Matches of Todor Arnaudov's works from 2003-2004 to a 2006/2009 academic paper... Yet another one :)

This post is regarding the paper: 

Five Basic Principles of Developmental Robotics, 2006,

http://prw06.rl-community.org/posters/five-basic-principles-of-developmental-robotics
by Alexander Stoytchev, Department of Computer Science, Iowa State University

Also in a 2009 IEEE edition, with extensions based on Stoytchev's PhD thesis:

" ... Some Basic Principles of Developmental Robotics
Stoytchev, A.
Page(s): 122-130
Digital Object Identifier 10.1109/TAMD.2009.2029989

Abstract: This paper formulates five basic principles of developmental robotics. These principles are formulated based on some of the recurring themes in the developmental learning literature and in the author's own research. The five principles follow logically from the verification principle (postulated by Richard Sutton) which is assumed to be self-evident. This paper also gives an example of how these principles can be applied to the problem of autonomous tool use in robots."
...

Yet another academic paper that I found recently, which was published years after my works and is based on a bunch of other works; and in academic eyes those are "new contributions".
Sure, they are novel - in the community to which they are presented and in its subculture, and in the specific way they are presented.

Not that I claim plagiarism or anything; the case is rather "great minds think alike" ;)) - I recommend the paper.

The differences are in the social position and status, background, resources, support - $$$, peers, access to literature and conferences - and the experience needed to make those claims.


Those all seem significantly in my favor, given I was a poor teenager in high school... :))

Of course it doesn't sound plausible that an institutionalized researcher, well fit into the mainstream system, would go read "cranks'" works on the Internet - "who are they?" - if a paper wasn't published at a conference or in a journal (and it costs $$$ to go to a conference etc.), it's as if it didn't exist. I discovered this back as a freshman... That particular paper is grounded in other papers from the "system", including the author's own. Also, who's gonna read "cranks'" (high-school students') works published in Bulgarian - non-Bulgarians are highly unlikely to have ever known of my existence until later years.

Well, in this particular case, though, there's a chance that the author of that paper has known about my works anyway, because he's a fellow Bulgarian, and there was a forum where a few of us, "the cranks" and other enthusiasts, gathered for a while in 2004 - the forum of the so-called "Project Kibertron" for a generally intelligent humanoid robot.

Let's get to the point:

Autonomous Mental Development


I found this in the "subculture" called "Autonmous Mental Development (AMD) - or Developmental Robotics (certainly the more popular term). That reminded me of those "split brain" academia, there exist subcultures, groups, which don't know each other good enough and may produce similar results, or results which are of help to the others. For example... Well - later about that. ;P
...
  • The Verification Principle (credited to Sutton, 2001)
  • The Principle of Embodiment
  • The Principle of Subjectivity
  • The Principle of Grounding
  • The Principle of Gradual Exploration
Those principles are explicitly stated in one way or another in "points" or claims about how a universal mind/human mind is supposed to work according to my "Teenage Theory of Universe and Mind", with its peak in the works from 2003 - early 2004.

Some translations of a part of my classical works

http://research.twenkid.com/agi_english/

Other not translated:
http://eim.hit.bg/razum
http://www.oocities.org/eimworld/  (Windows-1251)

If I were to state the precise points and matches, I'd write a more formal paper later; but let me give just a few short examples:

For example, Sutton's "verification principle" and all the rest are obvious for a sensori-motor generalizing self-improving system; one of the explicit statements in my works is the "match" - the way "truth" is defined and found. (I'm "late" relative to Sutton here, but I hadn't heard of him, just as that community hadn't heard of me.)

From Universe and Mind 3 (2003) and Universe and Mind 4 (2004):
...
50. The truth is a match – if the knowledge (or confidence, belief, persuasion [, desire]) matches something that is perceived somewhere else later, then the new one is true, compared to the old one; on the other hand, if the new one is different, it's “a lie” (false), or it becomes truth and the old truth turns into full or partial falsehood, depending on how the new truth differs from the old one. The more the newly evaluated for “truth” input piece of knowledge [pattern] matches a piece of knowledge [pattern] from the memories of mind, the more it's “truth” and “actual”, according to mind. Therefore, determining a “truth” is a determination of the difference between past and wanted present. (“Wanted” was missing in the Part 3 writing, added here in Part 4.)

[“Truth” in Bulgarian is “Istina” (истина).] Interestingly, in Serbian “isto” means “same” - it has a morphological association to “same”, because the statement that a given feature is “truth” also means that:

TRUTH: The feature [specifics, detail] that is being evaluated matches the pattern/template - it is the same as in the pattern, at a given resolution of perception. (*That's a definition of mine.)
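A minimal sketch of that definition, under my own toy encoding (arrays as "perceptions", block-averaging as "resolution"; none of this is from the cited works): the truth score is the degree of match between a stored pattern and a new input, at a chosen resolution of perception.

```python
def at_resolution(signal, block):
    """Coarsen a perception by averaging over blocks of the given size."""
    return [sum(signal[i:i + block]) / block
            for i in range(0, len(signal) - block + 1, block)]

def truth_score(memory, new_input, block=1, tolerance=0.5):
    """Fraction of features of the new input that match the stored pattern
    at the given resolution - 'the more it matches, the more it's truth'."""
    m = at_resolution(memory, block)
    n = at_resolution(new_input, block)
    matches = sum(abs(a - b) <= tolerance for a, b in zip(m, n))
    return matches / len(m)

memory    = [1, 2, 3, 4, 5, 6, 7, 8]
perceived = [1, 3, 3, 4, 5, 7, 7, 8]   # a noisy re-observation

print(truth_score(memory, perceived, block=1))  # fine resolution: partial truth
print(truth_score(memory, perceived, block=2))  # coarse resolution: full match
```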

Stoytchev mentions the philosophical school of the logical neopositivists as an origin of the "verification" principle. My opponent and co-author of my 2002 epistolary work "Theory of Universal Predetermination II"* (Universe and Mind, Part 2), Angel Grancharov - a professional philosopher with University-level teaching experience and an author of many books - used to "insult me" for being a "positivist", explaining to me how "flawed" that philosophy is and that it's not really a philosophy. I didn't know such a school existed; I heard about it for the first time in those emails.

The quote above is about subjectivity, embodiment, grounding and gradual exploration, all in one sentence: "...The more the newly evaluated for “truth” input piece of knowledge [pattern] matches a piece of knowledge [pattern] from the memories of mind, the more it's “truth” and “actual”, according to mind", "at a given resolution of perception".

"Grounding" is related also to the notion of "reality" as the "lowest level of virtual universe", and statements that for any system there's one lowest level, from where generalization starts.

Etc.

...The series will continue with a few explicit marks/comparisons of the matches between my classical works and Jeff Hawkins's "On Intelligence" and the HTM, which were published after mine.

See also: http://artificial-mind.blogspot.com/2012/12/compression-and-beauty-matches-between.html

Monday, December 10, 2012


AGI Researchers: Unite! Thinking Machines are Coming Really Soon... Act NOW!

In recent years, and increasingly with time, I fail to see any meaningful conceptual or technological reason for thinking machines not to exist already... I've claimed it before and I claim it again.*

I dare to say that one of the reasons why AGI has not been implemented yet is my own distractions and other problems which prevented me from working. The time is ticking away, though; I see crucial breakthroughs next year or, in the worst case, in the next few years, and I'm switching to working mode, so look forward to seeing them...

We must act, unite, incorporate and work hard NOW if we are to be one of the pioneers.

Otherwise the heavily armored batteries of the rich institutionalized researchers, funded by our taxes, DARPA and big rich companies, will push it through, and we will stay marginalized and be forgotten, even though we were ahead of them (see my previous post about my hypotheses from 2003 and a 2011 academic work).

Do you know IIT?

Just look how big they are:

http://www.iit.it/en/research/departments/robotics-brain-and-cognitive-sciences.html

http://www.iit.it/en/rbcs-labs/action-and-perception-lab/perceptual-and-motor-bases-of-prediction.html

Italy is really pushing this field. I marked the "danger" from the EU IM-CLEVER project some time ago; a colleague once told me how clumsy it is in real life - a 24-blade server? or something - but they've got a correct research direction.

IIT is another big "danger", I think they are the creators of the iCube robot, they have a huge exhaustive research program and correct general direction which will reach to the goal. And one even stronger reason to "be afraid" if you don't join their army :) - they are apparently equipped with the best technology and have plenty of human power.

There are at least several other very solid and rich institutes touching the "right directions" in the UK and the USA; I'll mark them later (I have to re-find the links, but I got "frightened" by their power... :) )

Thinking Machines Thinkers - Unite!



Read More

Monday, December 3, 2012


Compression and Beauty: matches between a work by Todor Arnaudov from year 2003 and new 2010-2011 academic contributions... - comments on "Musical Beauty and Information Compression: Complex to the Ear, but Simple to the Mind" - Part I

I wrote this as a letter to the author several days ago, but it's not personal anyway, and I published the original reflections on the topics I mention here 10 years ago. I haven't got any answer so far [as of 9/12/2012]; not that I am optimistic about getting one from the author, but I will follow up with a more solid response.

[I got a response on 17/12; the author had been busy.]

I'm warming up for a new iteration over the topic and already have insights and challenges to pose [writing in progress]; on a second read, I see the paper has some profoundly confused concepts. In general my view matches Schmidhuber's view of cognitive beauty as compression/compression progress and a balance of predictability, but there are some crucial concepts and parameters that need to be specified and are obvious in the case of music. I've shared some general thoughts in discussions on the AGIRI AGI list as well. This will be continued more formally and technically in a paper, with some details that I deliberately omitted while writing this email.


(...)

I'm Todor Arnaudov, I'd say a veteran AGI/SIGI* interdisciplinary researcher, an extreme polymath and universal artist; I constantly progress my art, knowledge and skills in all kinds of sciences (soft and hard), technical fields and arts - both creative and performance, covering all sensori-motor domains.

I'm writing you regarding your work "Musical beauty and information compression: Complex to the ear but simple to the mind?", which I encountered recently and enjoyed reading. I had heard some short mentions of the topic earlier from pop-sci news feeds, perhaps related to your paper (or it might have been about Schmidhuber's - I don't remember), but I found and read your paper only very recently.

First of all, congratulations for presenting those ideas together!

I agree with many of the claims and hypotheses, with some comments, which however I may present in a more formal form - regarding, for example, measurements and deviations, and the interpretation of the results with regard to neuroscience and sensori-motor generalizing hierarchies (my view on general intelligence).

I'm writing you because I happen to have made related, similar or in some cases the same hypotheses and speculations regarding general intelligence (all domains) and creativity, starting in the early 2000s, mostly between mid-2002 and early 2004, which constituted my Theory of intelligence and creativity (originality); it's also metaphysical and digital-physics/philosophy.

My theory is about prediction/compression with increasing range or precision, the balance (too predictable is boring, too unpredictable is "random"), and also compression progress (progress of predictions' range and precision) as basic cognitive and aesthetic drives in all domains; cognitive "ugliness" as cognitive overload, and beauty/intelligence as finding simple matches/transformations.
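A crude, illustrative way to see that balance, using off-the-shelf compression as a stand-in for the mind's predictor (my own toy choice, not from either paper): constant signals compress almost completely (boring), pure noise barely compresses at all (random), and patterned-with-variation data sits in between.

```python
import random
import zlib

def compressibility(data: bytes) -> float:
    """~1.0 = fully redundant/predictable; ~0.0 (or slightly below, due to
    header overhead) = incompressible noise."""
    return 1 - len(zlib.compress(data, 9)) / len(data)

random.seed(0)
boring = b"A" * 4000                                        # totally predictable
noise = bytes(random.randrange(256) for _ in range(4000))   # unpredictable
melody = (b"C D E C " * 100 + b"E F G - " * 100             # pattern...
          + bytes(random.choice(b"CDEFGAB- ") for _ in range(800)))  # ...plus surprise

for name, data in (("boring", boring), ("melody-like", melody), ("noise", noise)):
    print(f"{name:12s} compressibility = {compressibility(data):.2f}")

# Per the balance hypothesis, the aesthetically interesting zone is the middle
# ground: compressible enough to predict, surprising enough to engage.
```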

I also discriminate between cognitive and physical beauty (the physical one is actually just pleasure). What you call "2) stimulating the receiver through historical association" is in my view again related to physical beauty, experienced in the past and associated with cognitive stimuli, which makes the cognitive stimuli invoke those physical memories. The two types are often confused due to the mess in the brain's reward systems (dopamine and other neurotransmitters vs. the intrinsic cognitive reward of prediction/compression), and people tend to call "beautiful" stimuli which are merely pleasant, i.e. produce a release of dopamine, oxytocin, endorphins etc.

I have also claimed and held that science and the arts are "the same thing"**, beginning from those decade-old works; to me it has been obvious, because I've always been doing both. I don't mean Schmidhuber's "low complexity art", but "high complexity" art, i.e. normal classical art: good old-fashioned drawing, artistic photography, writing, music, acting, filmmaking - all kinds of stuff, genres, forms, lengths.
I'll save the more detailed comparison and explanation of my claims and theory of general intelligence and creativity - and of why my works are not known or acknowledged in the mainstream academic media so far - for a dedicated paper (for example, one reason is probably that I was 18-19 years old and they were published in Bulgarian, in non-academic media).

I don't know whether you care about that at all, but I'll cite the following short examples of some matches:

...This appreciation "...rests on our ability to discern patterns in the notes and rhythms and use them to make predictions about what will come next. When our anticipations are violated, we experience tension; when the expectation is met, we have a pleasurable sense of release [4]..." - from:
Ball P: Harmonious minds: the hunt for universal music. New Scientist 2010, 2759.

I wrote a very similar statement [for example] in my 2003 interdisciplinary philosophical-metaphysical-AGI-creativity... work, originally called "Conception of Universal Predetermination, Part 3", first published in an e-zine called "The Sacred Computer", written in Bulgarian, in mid-2003. The site had a mirror on Geocities, which is now "frozen" in several copies and can be verified, for example: http://www.oocities.org/eimworld/3/25/pred-3.htm (in Bulgarian, Windows-1251 encoding, though)

In English it reads like this (complete work, translated into English: http://research.twenkid.com/agi_english/Teenage_Theory_of_Universe_and_Mind_3.pdf ; in this copy in the email I've made some additional minor stylistic/language corrections of mistakes in the translation that I spotted now):

[BEGIN]
45. (...)
Why do we like to dance?

Let me try to make a speculation - maybe rough, but, I guess, probable.*

Dancing is a rhythmical motion of the body - an output stream of information to the muscles - which is in some kind of harmony (synchrony) with an input stream - the music.

Changes on the input - hearing - impact the output - the movements.

Rhythmical means predictable. After you know the rhythm - the time period after which particular changes in the sound will happen (you will hear drums, a guitar, a particular tone) - you can predict the changes [the events] in the music; you can predict the future of the music you are listening to.

Every muscular action is a consequence of an operation in the brain, executed earlier. The output is at the same time an input, because in order to flex or to relax a muscle, mind makes particular neurons in the brain "oscillate" (send pulses) in a particular fashion, i.e. the output is also an input.

The better the synchrony/harmony between the input and output pulses, the more the total level of the pulses gets amplified and makes us feel pleasure. (...)

** The rhythm should be within the mental capabilities of mind to perceive and discriminate it; for example we would hardly feel a rhythm of 3000 beats per minute. [250-300 is already really fast]

(...)
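A tiny sketch of "rhythmical means predictable", with invented numbers: once the period is estimated from the first few onsets, the future of the stream is a pure prediction, and a synchrony score can compare the predicted beats with the motor output.

```python
# Invented onset times (seconds) of a steady drum beat.
onsets = [0.00, 0.50, 1.00, 1.50, 2.00]

# Estimate the period from the observed inter-onset intervals...
period = (onsets[-1] - onsets[0]) / (len(onsets) - 1)

# ...then the future of the music is predictable:
predicted = [onsets[-1] + period * k for k in range(1, 5)]
print("predicted beats:", predicted)              # [2.5, 3.0, 3.5, 4.0]

# A dancer's (invented) motor outputs; synchrony = mean absolute timing error.
moves = [2.52, 2.97, 3.55, 4.01]
error = sum(abs(m - p) for m, p in zip(moves, predicted)) / len(moves)
print(f"mean asynchrony: {error * 1000:.0f} ms")  # a small error -> "harmony"
```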



And for example this one - it's about images, but my theory is general and the claim goes for any kind of inputs:
64. (...)
Abstract artists' pictures, where there are no pieces of knowledge except the most primitive ones for an image, do not contain any meaningful information [or "information for mind", i.e. data that have patterns and can be compressed]. (Such most primitive pieces/patterns are e.g. a point, a spot, a line, a seesaw line; polygons which mind does not associate with known objects; without a 3rd dimension.)

There is meaningful information in images where mind can recognize and name particular objects, parts of objects and their coordinates/relations to each other.

The meaningful idea ("разумна представа") is described/defined with much less information [data] than the image from which it was deduced. [Compression and selection.]

...

E.g. this image has little knowledge, because it can be easily defined with a loop in the mind, where the changes of the bar of the building are small. Photographers tend to call such pictures "boring" - they bring little information that is unpredictable from the preceding information [from the picture or whatever artifact is being analyzed]. [This image is very highly compressible.]
A well-done artistic photograph must be balanced: it must bring enough information about the subject of the picture and be clear enough - it should be easy to understand what's in the picture - to give a "complete" idea; also, the picture shouldn't be overloaded with details which distract attention (are "irritating") and make our sight roam around the details, searching for a meaning in the picture. Artistic photography is based on abstraction and emphasizing the essential [information]. It's like that in classical visual art as well, especially in graphics and caricature.

A person creating any kind of art is displaying how the image of what he's creating is stored in his memory and what features of reality are memorized. For example, the earliest children's drawings use just lines - therefore [perhaps] images are stored as "lines"*; lines bring the biggest possible [amount of] information, because they are an abrupt change between the colour of the paper and the pencil or whatever tool. The youngest mind remembers [maybe implies: recognizes] only the very essential parts of the perceived reality.

Humans don't remember most of the details in images. Maybe that's why pictures with lots of details are "irritating". The human mind usually removes most of the minute details when it remembers the image as a description of pieces of knowledge, in order to make remembering easier.

[* Lines [curves] are also the easiest to draw; poor motor skills are also involved.]

It's easier to fill memory using a piece of information [a pattern] to be multiplied with a gradual change, known from the beginning, and to be able to predict, from parts of the image, with a higher probability, what's in other parts of the image.

[That's easier than holding a bitmap copy.]

A human can hardly remember and perceive too "plentiful information servings" such as noise:

NOISE: A sequence of characters/symbols, where every following one has an equal probability to appear and the value of the following characters is fully unpredictable for the one who's trying to predict them [i.e. the *generator* may actually *know* all the symbols and they might be pseudo-random for him, but the *evaluator* doesn't know them and can't predict them].

The mind works with a big amount of redundancy in information, in order to predict the future inputs from the past more easily and with higher certainty. [Aesthetics is not that simple to define in a few sentences and is subjective. There are "organic/smooth" patterns that seem to make perceptions generally more pleasant than highly compressible high-contrast images; there are also high-level associations that pictures bring to us and may make us judge them emotionally, etc.; "animate" objects and conflicts, like in drama, may engage the mind to think/predict (interestingness)...]
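A quick illustration of these compressibility claims (mine, not in the original), with zlib as a crude stand-in for a predicting mind - a repeated pattern like the "bar of windows" compresses almost completely, while noise, as defined above, barely compresses at all:

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size / original size: lower = more predictable."""
    return len(zlib.compress(data, level=9)) / len(data)

boring = b"window " * 1000                    # one repeated pattern ("bar of windows")
structured = str(list(range(2000))).encode()  # regular, but not identical, pieces
noise = os.urandom(7000)                      # unpredictable for any evaluator

print(f"repetitive: {ratio(boring):.3f}")     # ~0.01 - almost fully compressible
print(f"structured: {ratio(structured):.3f}") # in between
print(f"noise:      {ratio(noise):.3f}")      # ~1.0 - incompressible
```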

The parallel part in the original, below the BW picture with the list of words, which I missed translating in that edition, says:
This picture [of the old house] from the town of Kalofer (© Tosh 1994) occupies just 10% more space on my hard drive* [than the picture of the other building], yet it brings much more meaningful information.
One can recognize:
- a grass field
- bushes - branches, leaves
- a tree - stem, branches, leaves
- house - roof with tiles; second floor with a terrace, having a rail; other windows on the second floor (each of them is a "piece of knowledge" in the picture)
- pillars
- windows on the first floor
- pavement
- reflections on the pavement
- water covering the pavement [causing the reflections]
- that water implies rain
- children (bottom left)
- a bench (in the middle) built of two supports and a board [actually two benches]

[END]

Discussion:
We can apply the same explicit labeling to the picture of the rectangular building (my high school), but then it would be very short:

- a sky
- a rectangular building
- windows (of the same type)
- several floors
- a repeating bar of windows, put in perspective, but looking "flat" because it is on the same plane; there are no big perpendicular planes like in the other picture, which looks 3D

It's more "boring" than the old house, because one bar can be multiplied, with application of perspective, throughout the picture. In other words, many parts of the image are "linearly dependent", and the compression progress drops to 0 very quickly - for the mind, the compression is much higher than for JPEG; it's too high.
Now I see that in fact this 10% approximately covers the difference in the pictures' resolutions (the BW picture is slightly bigger), which emphasizes the difference in "meaningful knowledge".

Well, all that is inspiring me to do another, more elaborate and detailed iteration.

Best Regards, Todor "Tosh" Arnaudov 
*SIGI - Self-Improving General Intelligence, my term for "Seed-AI"
** A T-shirt caption, designed by me, that says "Science = Art + Fun" :)
http://artificial-mind.blogspot.com/2009/07/blog-post.html

Friday, November 30, 2012


Nature or Nurture: Socialization, Social Pressure, Reinforcement Learning, Reward Systems: Current Virtual Self - No Intrinsic Integral Self, but an Integral of Infinitesimal Local Selves - Irrational Intentional Actions Are Impossible - Akrasia is Confused - Hypothesis about Socialization and Eye-Contact as an Oxytocin Source

An article, written as an answer to a post by Russell Wallace at the AGIRI AGI List, "Killer Application"

I still need to put explicit links to some of the references; if anyone cares, ask and I'll update it, otherwise I'll add them later.

(C)  Todor "Tosh"Arnaudov , 29/11/2012***
 
Personal AGI Coaches

I agree that one of the early applications of thinking machines will be managing, advising, coaching, suggesting and assisting in all kinds of human activities ("intelligence/mind augmentation"). I myself am working in this direction too, and there are already such applications - Siri etc.

I'd say that the simplest forms of such "assistants" and memory augmentations are all kinds of writing, "to do" lists, having a daily routine, then computer programs and the now obsolete electronic organizers, starting from the simple PDAs with an alpha-numerical keyboard for putting in notes and phone numbers etc.

Working and playing with somebody is better than by yourself, but it also depends on the partner; I don't think it's that simple and straightforward.

I have personally experienced cases where engineers work better without frequent supervision and intervention (in the case of subordinate relations, not peer-to-peer). There's also a business wisdom of delegating responsibilities to the employees in order to make them more confident and more productive; otherwise they would be more dependent and would ask for frequent or immediate feedback, which in some cases is too much of an overhead.

Peer-to-peer interaction can also have negative outcomes for productivity and cause distraction or chatter. So it depends...

...
"Akrasia" as doing something "against own good will" is Confused


As of the "akrasia", I'd partially challenge the concept. IMO the philosophical confusion comes from the lack of physiological knowledge, wrong assumption and overgeneralization. There's no integral self, i.e. the brain is not an integral system.

It self-organizes and integrates its parts, because they are connected to each other, but this happens at the expense of "bugs" and apparently "irrational" behavior, because the brain was not created at once and those integrations and effects were not planned.

The body and the repeated sensations of self integrate a "self" in the POV of the prefrontal cortex, and of an external observer. However, there are many competing subsystems that are patched over each other; the highest-level "executive function" is strongly influenced by and entangled with older systems, which creates a mess of mechanisms and motivations. The limitations of the body's actuators (and of the basal ganglia) reduce the possible physical actions and make the body appear as having an integral personality/mind/soul.

Philosophers who are searching for a global, valid-all-the-time, non-contradicting integral "will", "moral" or "good will" for all possible cases face those paradoxes of "doing something against one's better judgment" (as cited in the Wikipedia article).

Integral of Infinitesimal Local Selves over given Period...

Current Virtual Self - A Snapshot of the Virtual Simulators in the Brain


I've discussed in (see... 2002, 2003, 2004, Analysis of the sense... ) that if you do something intentionally - that is, without your hands being pulled with a wire from another explicit causality-control unit (an agent), and without another agent forcing you with a loaded gun etc. - then that's what your current virtual self/"will" has chosen as the best action, given the experience and the possibilities it understands, and given the time-span and rewards that it sees from its own perspective at this very specific moment of decision/action, computed for a selected time-period etc. That self is virtual and "exists" at the moment of acting, e.g. moving your hand, grasping something etc. In the next moment there might be another virtual self, which has other goals and motivations that are valid for the next moment; they might be "inconsistent" with the past or the next ones, because the underlying model is hidden under the skull and in the long history of experience.
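A toy sketch of that "current virtual self" (my illustration, not code from the cited works; the actions and numbers are made up) - each momentary self picks the action that maximizes *its own current* reward estimate, so the sequence of choices can look "inconsistent" to an observer while each one was locally optimal:

```python
def choose(actions, estimated_reward):
    """The current virtual self: argmax over its own momentary estimates."""
    return max(actions, key=estimated_reward)

actions = ["smoke", "abstain"]

# Snapshot 1: craving is active, the horizon is minutes ahead.
self_t1 = {"smoke": 5.0, "abstain": 2.0}
# Snapshot 2: health is salient, the horizon is years ahead.
self_t2 = {"smoke": -3.0, "abstain": 4.0}

print(choose(actions, self_t1.get))  # -> smoke   (optimal for that self)
print(choose(actions, self_t2.get))  # -> abstain (optimal for the next self)
```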

An analogy can be an Integral of Infinitesimal Local Selves, in Calculus terms - a Calculus of Self...
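One minimal way to write the metaphor down (my notation, an assumption, not a formula from those papers): if $s(t)$ is the local self/valuation acting at instant $t$, then the "self" an observer ascribes to a period $[t_0, t_1]$ is

\[
S(t_0, t_1) = \int_{t_0}^{t_1} s(t)\,dt ,
\]

which can appear stable when $s(t)$ varies little over the interval, even though $s(t)$ itself need not be consistent from one instant to the next; and if one treats the self as a constant, $s(t) = c$, its variation is $ds/dt = 0$ - there is nothing left to explain.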

Sometimes, for some cases, in some situations, different current virtual selves match and are/appear stable, because the set of possible actions is limited, and because the brain also has stable parts and configurations (at a certain resolution); but the point is that "irrational" and "inconsistent" actions are not really such. I claimed in those papers, and still claim, that an "irrational voluntary action" is nonsense.

If something seems "irrational", that means that the observer hasn't recognized either the correct agent, the correct "rationality" or both, or hasn't done with sufficient resolution in order to predict it right. The concept of "rational" (as "consistent") is confused and primitive.


Due to the mess in the human cognitive and physical reward system*, the moral values can change all the time, and the "good or bad" too, especially if it's something "abstract", i.e. not directly linked to the feeling of dopamine, oxytocin etc., which can have a very fast effect.

Some philosophers don't get it and treat the self as a constant; but it's like taking the derivative of a constant - it equals 0, and there's nothing left to explain the change.
The brain is not abstract and constant; it's more like a complicated function - it has specific needs at specific moments, which are caused by specific sensations stored now or 10 years ago in specific circumstances etc., which are associated with specific physical sensations ("gut feelings", projected eventually to the insular cortex**).

The brain constructs generalizations out of those specific experiences, but there's a lot of noise and variation; also, working and short-term memory (recent activities and experiences), the environment of every precise moment, and the declarative/autobiographical memory contain many specifics, which can be recalled internally in a sequence that seems "random" to an external observer, while it may have its very specific reasons, grounded in experience.

Such an observer - who wrongly assumes "rationality" to be something that he believes is "good", "best" etc., rather than what's best by the agent's own estimate - wrongly concludes that if somebody breaks his apparently wrong model, that person acts against "his good will". NO, he acts against the WRONG model, following his own will. If an agent does something "against his will", then that's not his will.

"Will" is considered as something abstract and independent from the body, e.g. if "you want to quit smoking, but you don't", therefore "you have a weak will". In fact yes, it is separated from the body as the decisions may be initiated by the PFC, and the statements of will might be just words, while the real non-verbal actions are driven by lower dopamine-shortcuts, such as nicotine addictions.


* We've discussed this on the AGI List, see also below
** See also Damasio's works


Akrasia, as "watching too much TV, realizing that it's a waste of time" or "eating too much and not practising sports, knowing that it causes obesity" - in my opinion there are simple reasons and I don't think the reasons have been much different in the past.

Did average people 100 years ago use to study Vector Calculus or Maxwell's Equations, or construct cathode-ray tubes or radio equipment, or study all kinds of sciences in order to make new inventions, instead of just going to the pub, theater or cinema, chatting, flirting, and reading newspaper articles about crimes and random news from the world?

The reason why they didn't, and why they preferred simpler "social" activities, is that the intellectual activities require a cognitive profile and capacities that only a small minority of the population has, and long-term goals are hard for the mind even for the gifted ones. One reason: the relation to the present, or to the near future, is questionable and unclear - as noted in the famous Einstein quote about people who love chopping wood, because they can see the results right away.

In physiological terms, there are dopamine shortcuts, or we may call them short circuits - humans *are* "wireheads" - which make long-term activities harder, in favor of short-term ones.

There are easier, simpler and cheaper short-term activities providing the desired "drugs" - why shoot for something long-term that's uncertain?


The long-term ones have to have some kind of immediate measurable effect in order to keep the interest and compete with the activities which provide feedback immediately. There we see some of the effects of the clumsy AI/NLP and other fields in academia, where small, incremental and "completely provable immediately, right now, with no delay" results must be presented, even if they are globally very vague or meaningless.

It's also an illustration of how bad and weak the human brain's executive function is, and how pathetic working memory can be - that's one reason why we need to take notes and pin to-do notes on the wall.


Russell Wallace:> The most powerful weapon against akrasia is social pressure. Our minds
Russell Wallace:> are programmed to act based on encouragement and discouragement from
Russell Wallace:> other people, not to operate autonomously; but the old social
Russell Wallace:> institutions have largely broken down.

Socialization in its local form has simple physiological reasons (see below); besides that, we are all made dependent on others' decisions and actions. We are forced to try to satisfy some kind of authority or society's need in one way or another, because otherwise we get punished or deprived of something.

I don't think social pressure is universally constructive; it's often horrible and destructive. That's how many political and religious systems have degenerated into monstrous killing and torturing machines, and the massacres were justified with nonsense like "to save your soul", "because of God", "for the nation", "for the race and the species" and all sorts of "social" fake abstract concepts. None of this is for society; it starts from satisfying the sick brains of the freaks who led those movements, together with the neurotransmitter-wirehead needs of the masses who follow; sometimes the latter is related to the threat of adrenaline-cortisol-pain etc. caused by the forces of the sick brain ruling that "society".

The social pressure forced the people to obey, or they obeyed because they were too susceptible to being ruled and obeying, even if the commands were insane.

That's also how mediocre artists can get famous, in a milder form of social pressure - they please some big amount of humans, then many others who are not artistically qualified follow just because something is already popular, sometimes in a vicious circle driven largely by low urges and distribution advantages - some "important" ones who tell the "society" what's important.

See: http://artificial-mind.blogspot.com/2012/07/issues-with-like-dislike-voting-in-web.html
Issues with Like-Dislike Voting in Web 2.0 and Social Media, and Various Defects in Social Ranking and Rating Systems - Confused and Vague Design and Measure - Psychology of the Crowds - Corrupted Society Preferences and Suggestions. In Facebook, Youtube, Twitter, TV Networks...

Also, that sheep instinct is why people like us are marginalized, and "society-pleasing" ones who have 1/2 of the IQ and 1/10 of the expertise of people like us are often high in the hierarchy, commanding millions and billions of dollars. They please "the society", i.e. what they do is easily explainable in terms that a bigger majority, or a part that is more closely related to financial and other social forces, would understand and accept.

In the Planet of the Apes, the banana producers will rule the world

If you do or say or present something that's too complex or requires too much brains, or will produce results too far in the future, and if you can't explain it in terms of short-term dopamine, oxytocin, endorphins or something else from the "useful" substances in the audience's brains - you'll look like a mad loser; you will even look "dumb" and "unadaptive".

The ones that produce fast food or sweets, or porn and sex services, are encouraged more by society, because those "beneficial for the society" agents are associated with dopamine, serotonin, oxytocin etc. - the true "universal languages" of humanity, and the dirty true needs of the average humans. In the intellectual ones it's to a lesser extent; the average ones are slaves of this part of human physiology.

That's the mystical "happiness" which average people are looking for, and that's the "well-being" of society - dopamine, oxytocin and serotonin for all.

The ones who search for more evolved, cognitive forms of rewards and happiness are often oppressed by society, which doesn't get it - "find a real job, lazy philosopher!", "go code web sites or accounting software", "go dig out potatoes", "why don't you pick some stones from the stone pit?", "if you don't have your hands dirty, that's not work", etc. :-D

And...

It's technology and individualism that are trying to put the social monster (in its totalitarian, violent form) to an end. It's not the society, e.g. "the psychology of the crowds". Crowds are monsters.

The person and the technology in society are pushing society to respect the person; initially the amorphous monster of society/the state/the church/God/the king was supposed to be the unquestionable ruler of everything.

Besides, while the acceptance of social pressure is sometimes reluctant, for the majority of people it's often also the "sheep instinct" - they just don't know what to do themselves and can't choose. That has to do with the Guppy Effect/the Ring Syndrome. We seem to be 99% males here; I suppose everybody has experienced this - if you walk on your own you may seem like a loser and be ignored, but if you walk with or talk to a woman in front of other women, especially if she smiles at you, then the other women will find you attractive and may throw jealous gazes at that other woman... That's a funny illustration of the force of the "social pressure" on women...

..

As for the physiological reasons - e.g. dopamine, oxytocin, endorphins, vasopressin and perhaps a few other shortcuts, another type of "wirehead" and another type of addiction in a more abstract sense - under stress, cortisol is high, it kills the oxytocin, and you get anxious and feel bad.

Oxytocin cures it, so you search for social interaction, because you discovered long, long ago, as a little baby, that when your mother touches and cuddles you*, and when you keep eye contact with her, you relax and feel good.

I have two hypotheses about how the oxytocin case came about (I need to check some details more deeply):

Hypothesis 1.:

1) The brain has some pre-wirings to generate oxytocin (for such stimuli as cuddling and gentle touching).
2) Eye contact is not pre-wired and not an inborn source of oxytocin, but the brain is conditioned to associate oxytocin with "animate beings", which actually means "interaction" from the very beginning, because:
2') Eyes are the simplest and the most unquestionable sign of a pattern and correlation (in my definitions, see 2003, 2004, 2010...) related to "self-moving" objects (temporally matching inputs). The mouth/lips (smile, frown, laugh) should also be related, especially in the foundational semantics of facial expressions, but I think that the eyes are a more obvious and unquestionable correlation for unsupervised learning, and they are more powerful and directed - they can direct the attention to new coordinates, while the mouth can direct attention only towards itself, by its motion compared to the static "background".

Eye contact in essence is a synchronous motion/reaction of a moving object to some intentional changes; "intentional" changes means predicted changes in the environment (effectively some sensory input) that are to be matched with the real input.

That match is the cognitive part of the reward in my model of the brain. Dopamine and oxytocin are means for the other type of reward - the physical one, where the desired outcome is to match the real input. Dopamine and oxytocin are desired; it doesn't matter how they are injected into the brain, and social interactions, hanging out with friends, petting a cat etc. are ways to get that injection.
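A minimal sketch of those two reward types (my attempt at formalizing the sentence above, not code from the original; the numbers and the similarity measure are arbitrary assumptions):

```python
def match(a, b):
    """Crude similarity in [0, 1] between equal-length inputs with values in [0, 1]."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

predicted = [0.2, 0.8, 0.5]    # what the agent expected to sense
desired   = [0.9, 0.9, 0.9]    # what the agent wanted to sense
real      = [0.25, 0.75, 0.5]  # what it actually sensed

print(match(predicted, real))  # ~0.97: good prediction -> high cognitive reward
print(match(desired, real))    # ~0.60: world not as desired -> lower physical reward
```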

For example, women apparently are more sensitive to or more dependent on oxytocin than men; they have to be, in order to get addicted to their children, and "mother's love is the strongest" namely because they get oxytocin from the interaction with their children.

When both sources of reward - cognitive and oxytocin - are active together, they are associated even stronger.

Hypothesis 2.:

Eye contact is also pre-wired to induce oxytocin from the beginning - there is evidence that newborns actually do see from the beginning and can imitate faces. This must be below the neocortex - I've speculated about the thalamus, but it's just a guess.

The feed-forward part of facial expressions is hardwired though, that's obvious - e.g. people can't fake a smile if they're not good actors and don't experience it. See...

Saturday, May 1, 2010
Thalamic Nuclei - primary causes for Mirror Neurons? |Human Face - Important Aspect of Evolution | Cingulate Cortex | Nature or Nurture

http://artificial-mind.blogspot.com/2010/05/thalamic-nuclei-primary-causes-for.html

On the other hand, face mimicking causes the adults who see it to smile or/and to start playing with their own face, so this is a case of interaction and of causing predictable changes, or provoking changes in the "animate being". Babies are usually held or cuddled, which induces oxytocin due to the touches - then all kinds of interactions and play associate that oxytocin with the dopamine, the cognitive rewards of pattern recognition, and the "animate beings", as recognized in the most basic way.

Also, when a baby is in distress, the ones who care for her would probably also be worried and wouldn't smile; that's how a baby can associate its distressed facial "expressions" (its gut feelings, including facial-muscle proprioception) with others' facial expressions. A similar mechanism probably goes for smiling and associating one's own smile and feelings with the smiles of others.

If distressed, the baby would be cuddled or held, which induces oxytocin, which is an antidote to the circulating adrenaline and cortisol. That's how "bonds" are created.

Another form of social "pressure", the adult's reaction and disturbance by a baby's cry. There's a popular claim, that it's "evolutionary encoded" in order the mother to care for the baby. I think that's not necessary - we might have conditioned the spectral profile of our own cries, so it's not social, but self-induced. I claimed this back in this article:  Learned or Innate? Nature or Nurture? Speculations of how a mind can grasp on its own: animate/inanimate objects, face recognition, language...http://artificial-mind.blogspot.com/2010/04/learned-or-innate-nature-or-nurture.html

...
(It might be encoded in some of the lower nuclei, but it must be tested - for example, with experiments on a mother's reaction if she was born deaf and started hearing at a sufficiently old age.)

Russell Wallace:> The solution may lie in another quirk of the human mind: our tendency
Russell Wallace:> to anthropomorphize; we instinctively attribute personhood to all
Russell Wallace:> sorts of things - including, in some circumstances, computer programs
Russell Wallace:> (2).

I wouldn't call it instinctive, but I assume that without it we wouldn't have attributed personhood to the other humans, and possibly to ourselves too, because to the brain the self is a set of correlated sensations and mental states; the way to distinguish it from the others is by some details about the feedback and causality control (intentionality), and by the consistency/stable correlations.

As for other people - at a distance humans "don't look like humans": their angular dimensions are very small, they "don't seem to have" eyes, a face, hands etc. They "are not humans", but points, circles, moving blobs, right?

In fact that also means "humans", because "humans" are also "points/circles/blobs moving" - the eyes and the mouth + radial and linear motion of the head, that's the initial visual image of "human" for a baby. There are experiments with babies and fake faces, initially they react like to a real face if they see a cardboard with two circles for eyes, then if it has also a nose and a mouth, and finally - if it's 3D or "real".

Then hands, arms etc. enter the equation; that includes the baby's own limbs and body parts, the feedback from touching her own body parts etc.

From the beginning we see various humans (small, big, old, young, ...) from various distances, in various positions, clothes etc., with various occlusions and various effects of their behavior. The general/essential part for cognition is not that they are "humans", i.e. their biological substrate, DNA, all the trillions of molecules etc., but the way they appear, interact and correlate.

In order for this to work robustly, it must keep being correct up to the highest generalization and down to the lowest resolution and the fewest details, because, as mentioned, it probably has started from two eyes/eyeballs, which are: two matching objects, linearly transforming (moving, rotating) synchronously in parallel and reacting to intentional operations of the one who learns about humans.
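A toy sketch of that correlation cue (my illustration; the original doesn't prescribe a formula - Pearson correlation via statistics.correlation, Python 3.10+, is my arbitrary choice):

```python
# An observed object whose motion correlates with the learner's own
# intentional actions gets flagged as "animate"/reactive.
from statistics import correlation

my_moves = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]  # the learner's intentional actions
eyes     = [0, 1, 0, 1, 1, 0, 1, 0, 1, 1]  # a gaze that follows those actions

print(correlation(my_moves, eyes))  # ~0.82, high -> reacting to me, "animate"
# A static background would be a constant signal: it has zero variance and
# no correlation with anything (statistics.correlation would raise an error).
```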
...

Another case - when somebody speaks behind your back, you only hear it; it might be a recording, or synthesized.

The same goes for radio or television, or for text as well - that's only sound or image or text coming from a box, or graphics on a piece of paper; there are no persons there. However, what matters is what it recalls from your brain, not what it is itself.

Only you yourself are an "unquestionable" person for yourself. The other stimuli which you attribute to animate beings/persons remind you of *you yourself* and your own flow of thoughts, sensations, feelings, parameters.

It recalls emotions that you have experienced, thoughts that you have had - to one extent or another.

Humans anthropomorphize sets of (correlated) stimuli which are similar to their own intentional sensations.

The series continues....

(C)  Todor "Tosh"Arnaudov (Todor Iliev Arnaudov) , Тодор Илиев Арнаудов - Тош 

***
First Published: 29/11/2012
 30/11/2012, - minor grammatical correction;
 2/12 - "complex function" changed to "complicated function" (in order not to confuse with complex numbers)
25-26/5/2023 - a few typos noticed (require -- requires, loose -- lose), while translating to Bulgarian from the book "The Prophets of the Thinking Machine: Artificial General Intelligence and Transhumanism: History, Theory and Pioneers": http://github.com/twenkid

Words, tags: Philosophy, interdisciplinary, neurosciences, neurology, psychology, innate, environment, genes, Integral calculus, analysis, sociology, psychology of the crowds, social pressure, society, person, consciousness, self-awareness, integral, holistic, akrasia, Socrates, irrational, rational, confused, brain, human, dopamine, serotonin, oxytocin, cortisol, adrenaline, neurotransmitter, ... AGI List, UAI, Universal Artificial Intelligence, artificial mind, artificial intelligence, Decision making, Reasoning, rationality, agents, multi-agent systems, artificial intelligence, intelligence





Corrections of typos (25.8.2025): selfs - selves; endorphins, vasopressin  etc. (not "-ines"); new-borns - newborns; antopomorphize  - anthropomorphize 




Wednesday, November 14, 2012


News: TILT - Efficient Rectification (Texture-Pixel-Based 3D-Like Perspective and other) by Microsoft Research - and the SIGI-AGI Prototype and Research Accelerator News



I've discussed the necessary simplicity of 3D-scene-and-light reconstruction back in the 1/1/2012 article about Optical Illusions, and later on the AGI List*, and I see it coming en masse very soon.

"TILT" by Microsoft is yet another demonstration of this coming... :X

The RGB-D sensors (such as Kinect and ASUS Xtion) that are getting cheap and popular are also a jump in that direction; they make 3D reconstruction trivial.

In the last two months I've also been directing my mind to visual/vision/images R&D for practical developments in computer vision, graphics (my Twenkid FX Studio video editing/visual effects and general movie production) and the 3D-texture-light reconstruction, but I've been slipping and spreading into many other, more general branches, for example mathematics, physics, philosophy and even some... music.

I hope I won't be too late, but the true beginning of the implementation of my first SIGI-AGI prototype is also approaching; I'll show it when it's ready to do an impressive job. :P

Some of the system's purposes will be immediately practical and aiming to be useful as a product.

It might be the smart mind behind an old project of mine. I planned to implement a little bit of it back in 2008 as a Master's thesis, but I was too busy and eventually ended up doing something else, i.e. I described my 2004 Text-to-speech synthesizer "Glas" and proposed many improvements and an entirely new architecture. That's another project that I wish to improve by building that new architecture; the old version is still in use, for example for reading materials out loud for me.

I've called that old advanced project "Research Accelerator" in some old posts, because it's supposed to help general research (all domains), it's also an "Intelligent Operating System".**

A few somewhat related systems/approaches/directions (though not fully similar):


  • Microsoft's agent - "Personal Assistant" (see: about minute 35), and PSearch.

  • Google's "Google Now"

  • The research field of "Intelligent Environments" is also related to what my AGI prototype is supposed to deal with, e.g. activity recognition.
    See for example: Fine-Grained Kitchen Activity Recognition using RGB-D; Activity Recognition in Pervasive Intell...

* I've talked about some obvious 3D-reconstruction clues even in the 2004 "Universe and Mind 4", and everybody knows them, but they are not consciously accessible to most people.

** Windows 7 has a function for smart prefetching of applications expected to be started by the user soon - "SuperFetch". My intelligent system is supposed to do that too, but in a more general "superfetching" sense - see the sketch below. In general, that direction of doing things predictively is obvious and everywhere; prediction is at the core of my theory of intelligence as well, and prediction is the essence of computing in general.
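A toy sketch in Python of that "general superfetching" idea (my illustration; not Microsoft's algorithm, and the history data is made up) - a simple frequency model over which application followed which in the user's history:

```python
from collections import Counter, defaultdict

history = ["editor", "compiler", "editor", "compiler", "browser",
           "editor", "compiler", "editor", "browser"]

# Count what followed what in the observed launch sequence.
followers = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    followers[prev][nxt] += 1

def prefetch(current):
    """Return the most likely next application, if any history exists."""
    counts = followers.get(current)
    return counts.most_common(1)[0][0] if counts else None

print(prefetch("editor"))  # -> "compiler": load it before the user asks
```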

*** AGIRI AGI List

I've spent more effort than I should on explanations of important concepts on the AGIRI AGI List from time to time, but I feel sorry afterwards... :D The extent to which I do it is a waste of time, especially since the ones who seem not to get it just don't get it, no matter how precise the explanation is. However, I believe a lot of the materials are pretty detailed and useful if you're interested in AGI and want to learn.

They deserve a "digest", something I promised long ago - well, I'll do it when I can, some day. If it doesn't happen soon, it may come in the next, 5th part of my old series of big works.



Friday, November 2, 2012


Nietzsche is a XIX-century Transhumanist... - Философът Фридрих Ницше е трансхуманист от 19-ти век...

Have you thought about that? :)

The concept of the Superman (Übermensch): one to whom current humans are like the apes to humans.

Check his work for yourself, right from the beginning: http://www.gutenberg.org/files/1998/1998-h/1998-h.htm

I won't discuss the work as a whole, the "new moral", or the Nazis' abuse of the term "Superman".

I care for Nietzsche's "Superman" from a transhumanist point of view: a superman in the cognitive and functional sense.

The "dead God" and the creation of a new god is the advent of science and technology; the work had apparently been influenced by the then-new evolution theory of Darwin.

The "creation of God" is the "singularity", the ultimate advent of technology and the creation of either cyborgs, DNA-or-otherwise enhanced humans, or a self-improving general intelligence/AGI/AI.

Zarathustra seems to crave fast progress and criticizes the sluggishly developing society and humanity. I sympathize with him on that point - people are not aware of the unexpected technology that is coming very soon and will make a lot of the current things obsolete and archaic. I've shared some thoughts about the absurd archaism in some human activities, and their end in the form they exist now*; I will explain more in a dedicated work.

Enjoy! :)

---
* Of course, there's a chance that people would reject some of the technologies which would make other technologies absurd and obsolete, and would make people question their own superiority etc.; I think it's in fact likely to happen that way...

However, this is so consonant with the "superhuman-human" and "human-ape" relations. See the movie "The Planet of the Apes" (1968) and put yourself in the shoes of the human from the future - you might be intellectually and technologically superior, but your "enemy" might still be physically vastly stronger and more aggressive than you, especially if there's a small number of transhumanists and a huge number of people who don't understand and don't accept living together with thinking machines.

History tells of such events, when barbarian hordes annihilated civilizations and societies which were centuries or a millennium ahead of them. That's one of the unfortunate possibilities if humanity rejects the technologies and the new technological species that will revolutionize its life.
(I deliberately don't specify what exact things I'm talking about; see a more elaborate paper.)

By the way, I recently found that case told in a humorous SF short story by the Bulgarian writer Lyuben Dilov. Humanity gets in touch with an extraterrestrial civilization, and the aliens leave an ambassador on Earth. He notices that the human race has a lot of social problems and undeveloped technologies, so he shares know-how in medicine and in an uncorrupted and fair-minded political system. Humanity rejects both: the new inventions would change the corrupted system and the status quo and would steal the power from the corrupted politicians - nobody would agree to use them. The only technology humanity accepts are books which can fit in a bead, so that they may collect them. Well, we already have those books... :)

** Thanks to S. for reminding me about the "Übermensch" concept of Nietzsche, when I spoke with him about related social aspects of the upcoming transhumanism era, compared to the current state of affairs.

See also: Philosophical and Interdisciplinary Discussion on General Intelligence, AGI and Superintelligence Safety and Human Moral | Cognitive Origins of the Concepts of Human Soul and its Immortality | Free Will and How it Originates Cognitively | Animate Being and Soul and the Cognitive Reason for the Belief that "Thinking Machines can't have a Soul and Consciousness" | Technology Making us more Humane | The Egoism of Humanity | And more

Thus Spake Zarathustra by Friedrich Wilhelm Nietzsche

3.
....I TEACH YOU THE SUPERMAN. Man is something that is to be surpassed. What have ye done to surpass man?
All beings hitherto have created something beyond themselves: and ye want to be the ebb of that great tide, and would rather go back to the beast than surpass man?
What is the ape to man? A laughing-stock, a thing of shame. And just the same shall man be to the Superman: a laughing-stock, a thing of shame.
Ye have made your way from the worm to man, and much within you is still worm. Once were ye apes, and even yet man is more of an ape than any of the apes.
Even the wisest among you is only a disharmony and hybrid of plant and phantom. But do I bid you become phantoms or plants?
Lo, I teach you the Superman!
The Superman is the meaning of the earth. Let your will say: The Superman SHALL BE the meaning of the earth!....



Tags: philosophy, Nietzsche, superman, overman, transhumanism, singularity, progress, scientific-technical revolution, science, self-improving thinking machines, thinking machines, universal artificial mind, artificial intelligence, human intelligence, mind, advancement, apes, humanoids, The Planet of the Apes, movie.