Thursday, February 25, 2010

Universal Artificial Intelligence/AGI - a course at Plovdiv University from April 2010 (more news)



On Wednesday I welcomed some twenty Computer Science students at the presentation of my upcoming course; at least 21 of them signed up. :)

This will be one of the first courses in the field in the world, I believe. Some time ago I noticed the Summer School in Xiamen that Goertzel, De Garis and a bunch of PhDs taught last summer. Well, I'm just one person and not even a PhD student, but my course will have some significant parts that theirs lacked. Besides, it seems they ignored Hawkins (he's not mentioned in the program), while I will not.

More info to come.

...

On Wednesday about 21 students signed up for my course on Universal Artificial Mind, which I will teach from April. The first lecture is shaping up for around Friday, April 9.

This will be one of the first such courses in the world. Some time ago I learned that last summer Goertzel, De Garis and a bunch of PhDs held a summer school on Universal Artificial Intelligence in Xiamen, China. Well, I'm just one person, not even a PhD student, but my course nonetheless covers topics they don't. Judging by the summer school's program, they even left out Jeff Hawkins.

Expect more info.


People who are not students at FMI can also sign up:
http://old.fmi-plovdiv.org/courses/link8.html

16. Universal Artificial Mind (Practicum), adjunct asst. Todor Arnaudov

(30 seats, open to all)

Sign-up, stage I: 24.02.2010 (Wednesday), 13:00-14:00, room 546 (computer hall)

The aim of the course is an introduction to theories of mind. It is intended for students who want to work in the future in the avant-garde field of Universal Artificial Intelligence (the Strong AI direction), or to learn about the newest theories of mind and the search for an answer to the question of how to create a self-improving intelligent machine. Systematized in this way, the exposition of the theories of mind, together with the necessary foundational knowledge, is original and perhaps one of the first of its kind in the world, given how cutting-edge and interdisciplinary this material is - it is not yet taught at any university.

Thursday, February 11, 2010

Universal Artificial Intelligence/AGI - upcoming course at Plovdiv University


I'm writing an original course and will teach it in the third trimester, starting from April. More info to come.

I'm preparing an original course on universal artificial mind, which I will teach in the third trimester, from April. Expect more info.


Wednesday, February 10, 2010

Intelligence: A Search for the biggest cumulative reward for a given period ahead, based on a given model of the rewards. Reinforcement learning.

Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence.

Part 4 of 4 - Comment #3

Part 1 (also in Bulgarian): Semantic analysis of a sentence. Reflections about the meaning of the meaning and the Artificial Intelligence

Part 2 (also in Bulgarian): Causes and reasons for human actions. Searching for causes. Whether higher or lower levels control. Control Units. Reinforcement learning.

Part 3 (also in Bulgarian): Motivation is dependent on local and specific stimuli, not general ones. Pleasure and displeasure as goal-state indicators. Reinforcement learning.

Part 4: Intelligence: search for the biggest cumulative reward for a given period ahead, based on a given model of the rewards. Reinforcement learning.

One of the milestones of my AGI research. I wrote this particular article and the comments in Bulgarian as a 19-year-old freshman in Computer Science at Plovdiv University.

By Todor Arnaudov | 13 March 2004 @ 21:49 EET | 340 reads |
First published at bgit.net and the e-zine “Sacred Computer” in Bulgarian


Comment # 3 by Todor Arnaudov | 18 March 2004 @ 22:04 EET

(...)

I'm one of the "scientists" who assume that since everything in the Universe does work, there is nothing mysterious in its operation. I think that every information process in the Universe is formal and can be modeled by a von Neumann machine, given enough memory, because the Universe itself works as a computer; the computer is what it is perhaps because it follows a high and universal model - that of the Universe. That is why the essence of computers has been, from the start, a CPU, memory, a clock [time, synchronization] and input-output: this is the simplest complete model of the operation of the Universe.

[As a University freshman it was presumptuous to seriously call myself a scientist; I was more of a philosopher. This echoes my theory of "The Universe Computer", i.e. Digital Physics.]


Everything is "formal" and "external", i.e. it could be written down with formulas. E.g. on what basis does anyone assume that another being is intelligent, if not on external evidence? And what would "internally intelligent" mean - something that works like a human? To be built from proteins?...

Actually, a person usually calls something "formal" if he believes he understands its formulas, and/or thinks those formulas are trivial or superficial - mere symbol shuffling, as in Searle's Chinese room - even if he himself is incapable of understanding them.

E.g. if a thinking machine is built and then taught [growing from a seed intelligence, a core, like a human] until it gets as complex as me or you, so that its operation would be untraceable for anyone, some humans would still say that it is formal, that they understand it, and that it is nothing special.

Why would they think so? Because they assume that they understand how computers work - "1s and 0s, NAND, NOR; cycles, reading, writing, shifting... it's so simple, and the computer doesn't realize anything of what it computes, so even if it appears to think, it would just be calculating!"

Does the carbon atom "realize" that it's part of a neuron?! Does each neuron "realize" that it's part of the brain - and wouldn't it behave the same way if we put it outside the brain and fed it the same signals as in the brain? Is the neuron aware that it's part of a mind, and does it know which thought it is part of?

Bullshit! One thing is sure to me - humans want to feel unintelligible, mysterious, hyper-complex, because people believe that being "formal", being explainable, is a bad thing. People don't want to ask hard questions, and more to the point - they don't want to get irritating answers. (...)

However, what is it to be a human? I would ask, perhaps as a "self-humiliating" individual.
A newborn is one, not only a 20-30-year-old: a seed intelligence from which a mind grows and develops.

The mysterious concept of a soul is an example of a helpless "proof" that a human is not a "formal" entity. A human has a soul - actually, just something that a machine or anything else couldn't have, precisely because it is a machine, which means not a human, which means not a member of our precious club. Yet those people never question why most believe that animals have one but machines don't, while plenty of quite formal reasons could be found.
(See the Teenage Theory of Universe and Mind/Intelligence and the short novel "The Truth": http://research.twenkid.com )

To me a human is formal, and the principles in its mind, as information processes, are similar to what they could be in a thinking machine: memory and processor (merged in one), a clock (a way to separate/resolve events), input-output. The memory holds records of possible actions, from which the person - the body - continually selects the one that best matches current goals for a given selected period of time ahead, based on a currently active model for checking whether the goal is reached or not.
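This memory-selection loop can be sketched in a few lines of Python. The class and function names and the numeric "predicted outcomes" are my illustrative inventions, not part of the original theory - just a minimal sketch of the selection step:

```python
# A toy "memory + selection" loop: memory holds records of possible
# actions; the selection step picks the one whose predicted outcome
# best matches the current goal over the chosen period ahead.
# Class/function names and numbers are illustrative inventions.

class Action:
    def __init__(self, name, predicted_outcome):
        self.name = name
        self.predicted_outcome = predicted_outcome  # stand-in for a predicted state

def select_action(memory, goal, match):
    # "match" plays the role of the currently active model that checks
    # how close an action's predicted outcome is to the goal
    return max(memory, key=lambda a: match(a.predicted_outcome, goal))

memory = [Action("eat", 3), Action("sleep", 8), Action("work", 5)]
closeness = lambda outcome, goal: -abs(outcome - goal)  # closer = better match
best = select_action(memory, goal=6, match=closeness)  # picks "work"
```

Swapping in a different `match` model changes which remembered action wins, mirroring the point that the choice depends on the currently active goal-checking model.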

The purpose is a GOAL: a state which the Control Unit aims at. The "thirst for a purpose/meaning" is a search for local maxima, indicating to the Control Unit how close the goal is. [E.g. - an orgasm... ;) ]

The machine, as of my current understanding [2/2004], should be made of sufficiently complex subunits - diverse enough in their kinds and goals, either real or virtual - each of which would feel "pleasure" in its particular way, i.e. would seek to achieve its goals. And it should be impossible for all subunits to reach their goals at the same time.


Terminal devices (effectors) are needed - muscles - which cannot be controlled in parallel: only one subunit can control them at a given moment, so that the external behaviour - movement, output data - is clearly defined, as with a human.

The subunits should compete and interact; contradict and fight each other; sometimes collaborate in groups and "fight" against other groups, each aiming to reach maximum pleasure.
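A minimal sketch of such an arbitration step, with made-up subunit names and drive values; the one rule taken from the text is that exactly one subunit wins the effectors at any moment:

```python
# Competing subunits: each reports the pleasure it expects from taking
# the effectors; exactly one winner acts at any moment, so the external
# behaviour stays unambiguous. Names and drive values are invented.

class Subunit:
    def __init__(self, name, expected_pleasure):
        self.name = name
        self.expected_pleasure = expected_pleasure

    def act(self):
        return self.name + " controls the effectors"

def arbitrate(subunits):
    # the subunit expecting the most pleasure wins control this moment
    winner = max(subunits, key=lambda s: s.expected_pleasure)
    return winner.act()

units = [Subunit("hunger", 0.4), Subunit("curiosity", 0.7), Subunit("rest", 0.2)]
outcome = arbitrate(units)  # "curiosity" wins this round
```

Re-running `arbitrate` after the drives change would hand the effectors to a different subunit, which is the "fighting for control" the text describes.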



I think that's what a human does, and I think [as of 2/2004] this is "mind" [intelligence]: a search for pleasure and avoidance of displeasure by sufficiently complex entities over a given period ahead [prediction, planning]; the "enough-ness" is defined by the examples we already call "intelligent": humans of different ages, with different IQs, different characters and ambitions.

The behaviour of each of them can be modeled as a search for the biggest sum of pleasure (displeasure counting as negative in the sum) over a selected period of time ahead, which is itself chosen at the moment of the computation.

"Happiness" is another word for pleasure, and as a friend of mine, a poet, once said:
"Man is a restless happiness-seeker. I don't know why it's so important for everybody to get their dose of happiness, but it is true... and people caught in the grip of vices and addictions are a kind of deluded seekers... They are also searching for happiness, but in the wrong place."

Addictions - to drugs, but not only - are an example of Control Units finding their goal - the enormous pleasure felt when taking a given drug - and this can lead to them taking control over the whole body. "Happiness" for the body turns into an elementary happiness for the elementary control units, which sense that their goal is reached by detecting a particular kind of molecule.

PS. When I was about to write "a local maximum" I was also about to add "in a close enough neighbourhood" (a very short period), influenced by the Mathematical Analysis I had been studying lately; however, the mind is much more complex than Analysis, i.e. it is based on many more linearly independent premises, which cannot be deduced from each other.

The prediction period is not required to be short, because, as in the example with the chocolate, caries and the dentist, immediate actions can enter functions whose results are shifted far into the future; at the same time, the same action can be regarded in the immediate future - in that example, the eater will feel ultimate pleasure within the following second.
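The chocolate example can be sketched as follows; the plans, reward numbers and horizon values are invented for illustration, but the mechanism - summing predicted pleasure/displeasure over a period chosen at computation time - is the one described above:

```python
# Each candidate plan has a sequence of predicted pleasures (negative =
# displeasure). The chooser sums them only over the horizon selected at
# the moment of the computation. Plans and numbers are invented.

def cumulative_reward(rewards, horizon):
    return sum(rewards[:horizon])

def choose_plan(plans, horizon):
    # pick the plan with the biggest predicted sum within the horizon
    return max(plans, key=lambda name: cumulative_reward(plans[name], horizon))

plans = {
    "eat chocolate":  [5, 0, 0, -8],   # pleasure now, dentist later
    "skip chocolate": [0, 0, 0, 0],
}
near = choose_plan(plans, horizon=1)  # only the next moment counts: eat
far = choose_plan(plans, horizon=4)   # the dentist outweighs the treat: skip
```

The same action sequence wins under a short horizon and loses under a long one, which is exactly why the period "chosen at the moment of the computation" matters.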

One function among many can be chosen or constructed, and each can yield a negative or a positive result, depending on minor, apparently random details: the mood of the mind at that particular moment; which variables are at hand; which ones the mind has recalled; what the mind is thinking of right then...

(...)

Suggested reading - "The Truth", a short novel I wrote in 2002. [not translated as of 10/2/2010]



Comments from 10/2/2010:

Yes, the multi-agent stuff resembles Minsky's "Society of Mind", but I hadn't read it then and still haven't.

The drug stuff is related to dopamine, endorphins and other neurotransmitters, the "reward pathway" and so on. The discussion, though, is about "virtual" units - Control Units - not about neurons etc.

Also, the last discussion is about a reinforcement-learning bug. I didn't know the term "reinforcement learning" back then, and I didn't know what exactly behaviorism was; I did know about utilitarianism, though, and used my imagination to find explanations.

A version of reinforcement learning was reinvented in my early works from my Teenage Theory of Universe and Mind, where the Control Unit (agent/human/society/state/system) predicts and aims at scenarios which would lead to the biggest cumulative pleasure for a given period ahead.

In societies and states, most laws have this function, and it's related to another concept: the limited level of pleasure (reward, in standard RL terms) that any control unit/subunit/agent can get under any possible circumstances. This is supposed to prevent any one of them from gaining too much control over the system - mind or society.
I gave the following example: when somebody robs a bank, he gets rich and can better do what he wants, but his happiness is not unlimited - it is normalized to 1 anyway. On the other hand, he causes lasting displeasure to many people, and the sum is highly negative. The system aims to avoid robberies. This is also related to doing what one wants versus what one doesn't want (another important concept in my theory of Control Units). Control Units aim to control, and losing control and predictability causes displeasure/confusion.

Recently I found that Marcus Hutter and Shane Legg discuss a concept similar to my limited reward - "Assuming that environments return bounded sum rewards..." - in "Universal Intelligence: A Definition of Machine Intelligence". They also define the universally intelligent agent as a seeker of a sum of rewards. However, their papers were published years later (or around that time - I haven't checked all of them), and I didn't know about Hutter back then; I first heard of him in 2009.

In 2002-2004, when I was building the foundations of my Universal AI theories and understanding, I was too young, busy and "ignorant" - I didn't know about any of the gurus such as Goertzel, Schmidhuber, Hutter or de Garis. Hawkins appeared with his "On Intelligence" right after I published the last, 4th part of my first works, which was written mostly in 2003.

I wrote my stuff on my own, and I still believe that "imagination is more important than knowledge"; I can't afford to be as ignorant as back then, though.

At the time I wasn't sure whether I was a madman, because it's hard to find anybody to understand and appreciate such far-out stuff as AGI - I was 18-19 years old. Years later, starting from 2007, I began to find that my ideas - pieces of them, or their directions - are shared by gurus in AGI: first Hawkins with his memory-prediction framework and "On Intelligence", then Boris Kazachenko, Jürgen Schmidhuber, Marcus Hutter.

Of course, this "Teenage Theory of Universe and Mind" is something that will be translated and shared as well.


Keywords: Reinforcement learning, multi-agent systems, control units, pleasure seeking, utilitarianism, local maximum, Todor Arnaudov's Teenage Theory of Universe and Mind, Todor Arnaudov's Teenage Theory of Universe and Intelligence, limited reward, limited pleasure, bounded reward, bounded pleasure, addiction, addiction bug of reinforcement learning, cumulative reward, cumulative pleasure, pleasure seeking, seeker, searcher, intelligent agents, Twenkid Research

http://research.twenkid.com


Tuesday, February 9, 2010

Motivation is dependent on local and specific stimuli, not general ones. Pleasure and displeasure as goal-state indicators. Reinforcement learning.

Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence.

Part 3 of 4 - Comment #2 continues...

Part 1 (also in Bulgarian): http://artificial-mind.blogspot.com/2010/01/semantic-analysis-of-sentence.html

Part 2 (also in Bulgarian): http://artificial-mind.blogspot.com/2010/02/causes-and-reasons-for-any-particular.html

One of the milestones of my AGI research. I wrote this particular article and the comments in Bulgarian as a 19-year-old freshman in Computer Science at Plovdiv University.

By Todor Arnaudov | 13 March 2004 @ 21:49 EET | 340 reads |
First published at bgit.net and the e-zine “Sacred Computer” in Bulgarian


Comment # 2 by Todor Arnaudov | 18 March 2004 @ 21:58 EET |CONTINUES...


Let's note that when a mind is being evaluated, a "stop frame" is evaluated - the state at this very moment. (...) And the one who holds a particular attitude is the one best placed to find out why he likes or dislikes a particular thing.

E.g. only you know, see or imagine exactly which circumstances you're talking about. Every detail of a given situation is important, not only the generalized ones that can be stated in a few sentences. E.g. on the wall that you imagine there could be other particular objects which don't match this one. And this is your taste - your like or dislike.

The purpose of this "will" - this kind of will, liking or disliking something - is to provide a reason to decide and choose when a single, unambiguous action must be executed. If the action doesn't lead to damage (you won't be hurt whether that clock-with-thermometer is put on the wall or not), then there is no immediate practical consequence/meaning to what we choose. And for a long period ahead we cannot predict how exactly this particular action and decision will affect us, because the future inputs are too many and too unknown in advance.

So let's assume that it doesn't matter whether the clock-and-thermometer combo is on the wall or not; the reason is then simply that "we like it" (or don't), and a persuasive, reasonable explanation can be found - if we manage to analyze ourselves precisely enough.

It is possible that:

1. You already had a clock and a thermometer (separate ones), and you're a practical person who doesn't like owning redundant stuff.
2. You don't need a thermometer anymore. (This is a precomputed reason: once you told yourself "I don't need a thermometer anymore", you never questioned it again and followed it as a reason not to put one on the wall.)
3. You're in a bad mood and would refuse anything that anyone asked you to do.
4. You don't like anybody telling you what to do, and you feel as if you're being given a command (e.g. it's a gift from your mother-in-law).
5. You just don't know why.


In general, of course, if we're searching for a reason, it's good to have as complete a model of the evaluator as possible...

Defining "meaning" in the sense of a goal/purpose or a cause/reason and a "non-contradictory link" expresses the attitude of the evaluator toward the item being evaluated (the one for which a cause/reason/purpose/meaning is sought).

And it is very important to note that the evaluator determines whether there is or is not a cause/reason/meaning/purpose for him personally. "Meaning" is subjective.

The actions of thinking machines and persons can't be explained unambiguously by an external evaluator, because externally the available information is very scarce [and the behaviour of intelligent agents is very complex and non-linear]. The amount of information transferred between the parts/modules of the machine/human is enormously bigger than the data that is output - i.e. the externally visible behaviour.

Besides, the mind is biased and limits itself when it searches. To me, the ultimate cause of anything is the whole past - every single little difference would make the whole different. However, since it is impossible for the mind to compute all causes, the mind uses greedy algorithms and searches for the most direct and "plausible" explanations/causes/scenarios.

It is like differentiation in mathematics: there is raw data, a graph of values, and we search for the function that drew that graph. But this is an ambiguous operation - we can guess, but not always know; causes are just as ambiguous.

That's why any "differentiated" causes are meaningful only down to a given depth or resolution, at which the search is interrupted.


Now, let's check Konstantin's opinion once more, differently:

Konstantin: People do absurd things which nevertheless seem full of deep meaning. Programmers sing - out of tune... A banker who owns millions and dines in luxury restaurants one day passes by an old lady selling donuts; he reaches into a dirty bag and buys a donut for 30 cents. No doubt the donut was made by some poor snotty baker, but... for the banker this is the best food and the best thing in the world!

Where's the absurdity in a programmer singing? "A programmer" means a person, and a person can usually sing - well or badly.
What you express here is just your opinion - your disapproval.

Is there no reason for a bad singer to sing? Why? Because he's afraid of being blamed for his singing, of being mocked... (...)

This particular programmer may be singing because:
- He is in love and feels happier than a moment before, and singing is a way to express a good mood.
- He is alone, and he had wanted to sing before but was too shy.
- He is drunk and his inhibitions are gone.
- He is in a karaoke bar, lonely; he saw a beautiful girl and decided that this is a meaningful way to attract her attention. He might be drunk or not.

(...)

It is impossible to enumerate all possible reasons, because of the combinatorial explosion; but given rich enough information about the circumstances, it is always possible to find plausible concrete reasons - if one wants to find them. (The one who denies usually doesn't want to find reasons.)

Konstantin: For him this is an act filled with deep meaning - but how would you persuade a computer program of that? Especially if the program is busy counting the number of viruses and bacteria that entered the banker's organism at that very moment. How would the banker explain it - "I felt a thrill, I remembered a time when I was a child..."? From the viewpoint of the computer this is nonsense, a mere random association - especially if from this little moment a serious decision about his life and career eventually grows; how would you explain to a computer what donuts and money have in common?

What does "a program" mean? What your words imply is "a dumb program", or one that reacts as if it doesn't understand. But then, obviously, it is not a thinking machine!

I think the reason is actually very purposeful and meaningful. Every Control Unit has goals: the man recalled an event that made him feel good, and he wanted to feel that pleasure again. He didn't think of the microbes; he didn't include them in the evaluation function, and he has never counted microbes. If a machine is searching for a human's reason, it must put itself in his shoes and evaluate as if it were him, not as if it were a counting machine. (As I noted before, at least according to my research, "pleasure" means achieving the goal of the behaviour of the Control Unit.)

The banker recalled a reachable state that in the past made him feel pleasure.
That state is a set of circumstances, feelings and possibilities for action - possible ways the circumstances/perceptions/feelings could change.

At that very moment there was no other possibility that would give him higher pleasure within the next, say, 5 seconds. Once he has bitten into the donut, the search for another way to feel pleasure is cut off. The donut becomes a "master" of the mind and rules the person's current goals and behaviour. [This local reward] rules the hands, the jaw etc., so that the person feels the taste that gave him pleasure in the past, and that pleasure comes back to be felt again.

How does this temporary control over the mind and the effectors (muscles) happen?

The behaviour of a human can be represented as a complex of greedy algorithms searching for states - local maxima: the biggest possible pleasure and the smallest possible displeasure.

Any chosen action is reasonable - to the virtual greedy algorithm that has come to rule over the rest in the given situation. (...)
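Such a "virtual greedy algorithm" can be sketched as plain hill-climbing toward a local maximum of pleasure; the pleasure landscape and neighbour function below are invented for illustration:

```python
# Greedy hill-climbing: from the current state, repeatedly move to the
# neighbouring state with more pleasure; stop at a local maximum.
# The pleasure landscape and neighbour function are invented.

def greedy_local_max(pleasure, state, neighbours):
    while True:
        best = max(neighbours(state), key=pleasure)
        if pleasure(best) <= pleasure(state):
            return state  # no neighbour feels better: a local maximum
        state = best

pleasure = lambda x: -(x - 3) ** 2   # a single peak of pleasure at x = 3
neighbours = lambda x: [x - 1, x + 1]
peak = greedy_local_max(pleasure, 0, neighbours)  # climbs 0 -> 1 -> 2 -> 3
```

The algorithm never looks beyond the immediate neighbours, which is exactly why it can get stuck on a local peak - the "first bite rules us for a moment" behaviour described next.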

E.g. if one starts eating some food and it's tasty, he doesn't spit it out after the first bite in order to search for something tastier, even if he knows such food exists nearby - in the fridge; the first bite may rule us for a moment and inhibit the urge for another piece of tasty food.

However, what if the mind includes "the fear of caries" in the evaluation of the pleasure? The mixture in the mind is so complex that what exactly will "rule" depends on specific memories and specific stimuli from the recent past, and it may appear random.

If caries is included in the evaluation, the result may turn negative: the chocolate is not the maximum pleasure and shouldn't be eaten. Different Control Units work in parallel, all fighting for control over the effectors - and if one that carries this fear wins, it may stop the eating operation and switch to "brush my teeth immediately".

Before the caries consideration, the greedy algorithm computed the biggest cumulative pleasure/reward over the next 1 second. However, the caries and the pain at the dentist forced it to consider feelings far ahead - ones assumed to be caused by the teeth and the chocolate. This assumption is important: something else may actually cause them, but the person takes this as the reason/cause!

The dental pain is a much bigger punishment than feeling neither pain nor pleasure (by not eating the chocolate), so it is avoided.

Turning back to the donut: this "random" link/memory/recall is not random at all!

The apparent trigger that recalled the memory is the sight of the donuts and their smell. Add the circumstances: the banker was walking alone, thinking about something; not long ago he had met his grandparents in his village; when he was young he loved donuts...

All these details made him want to feel that pleasure again, right then.

Let's analyze the donut once more:

Konstantin: People do absurd things which nevertheless seem full of deep meaning. Programmers sing - out of tune... A banker who owns millions and dines in luxury restaurants one day passes by an old lady selling donuts; he reaches into a dirty bag and buys a donut for 30 cents. No doubt the donut was made by some poor snotty baker, but... for the banker this is the best food and the best thing in the world!

The banker may possess millions, but while he's walking past the old lady on the street, those millions are worthless. When one walks on the street, one follows the stimuli around: reading signs, watching the cars, the traffic lights, the passers-by. There isn't a luxury restaurant on every corner, and you can't buy a Ferrari or an airplane right there.

The banker's action only seems "meaningless" - i.e. inappropriate, impossible to explain - because it was assumed that if an agent is a banker, then he should do this-and-that and never do that-and-this. This is another example of artificial self-pruning of the search for reasons/purposes, without explaining why, and with no explicit cause besides prejudice.

Every human being - indeed, every humanoid robot - can put a hand into a bag, grab the donut, pay, and then inform the others that "this is the greatest thing I've ever done in my life".

There is always a possible reason to do it, if it is the best/most rewarding action the agent has found in the current situation/planning period.


CONTINUES with part 4/4...

More keywords: Universal AI, Artificial General Intelligence (AGI), Behaviorism, Psychology, Control Unit

Monday, February 8, 2010

Causes and reasons for human actions. Searching for causes. Whether higher or lower levels control. Control Units. Reinforcement learning.

Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence.

Part 2 of 4 - Comment #1 and a part of #2 of:

Part 1 (also in Bulgarian): http://artificial-mind.blogspot.com/2010/01/semantic-analysis-of-sentence.html

One of the milestones of my AGI research. I wrote this particular article and the comments in Bulgarian as a 19-year-old freshman in Computer Science at Plovdiv University.

By Todor Arnaudov | 13 March 2004 @ 21:49 EET | 340 reads |
First published at bgit.net and the e-zine “Sacred Computer” in Bulgarian


Comment #1 by Konstantin Spirov | 15 March 2004 @ 20:25 | EET


(…) I'm a classical programmer and I haven't really dealt with AI. However, I reflected on how I would define "meaning".

To me, the thirst and urge to find a meaning does not prune contradictions; on the contrary, it searches for the cause, for the prime mover, the initial force. This is not related to contradictions.

For example, a tawdry clock with a thermometer is not incompatible or contradictory - it can hang on the wall, and neither the clock nor the thermometer disturbs the other. Yet to me this object is pointless, meaningless: you can't give me any reason to put it on the wall.

On the other hand, the opposite phenomenon happens every day. People do absurd things which nevertheless seem full of deep meaning. Programmers sing - out of tune... A banker who owns millions and dines in luxury restaurants one day passes by an old lady selling donuts; he reaches into a dirty bag and buys a donut for 30 cents. No doubt the donut was made by some poor snotty baker, but... for the banker this is the best food and the best thing in the world!


There we are! For him this is an act filled with deep meaning - but how would you persuade a computer program of that? Especially if the program is busy counting the number of viruses and bacteria that entered the banker's organism at that very moment. How would the banker explain it - "I felt a thrill, I remembered a time when I was a child..."? From the viewpoint of the computer this is nonsense, a mere random association - especially if from this little moment a serious decision about his life and career eventually grows; how would you explain to a computer what donuts and money have in common?

As we know, AI has many directions. Most researchers belong to the "weak" one: not trying to model intelligence itself, but just aiming to make the behavior of computers appear human, to make the life of users easier - nothing more. These researchers reason like this: the problem is complex and often vague, but we know tricks that help us cope with it very well; we just have to spend some time and be more …. There are also researchers of the Strong direction, like the respectable Marvin Minsky or the clown Prof. Kevin Warwick (sorry if you like him), who aim at goals much more interesting for the media. Some of them probably do it because of funding problems; others really believe, because it sounds more heroic.

I personally support Weak AI, and I think the questions posed in this article describe the reasons very precisely: you can model a non-contradictory system, learning, even something that looks like freedom (at least external unpredictability) - but I cannot imagine how the thirst for meaning could ever be modeled.

Scientists can invent all kinds of formal systems to analyze and replay words, but we cannot give them a meaning ("да ги оглосим" - an archaic Bulgarian word). Meaning is a concept deeper than us; it was discussed by the ancient Greek philosophers, and the whole of human civilization, in all eras, has dealt with it. (…) In the Bible "слово" (logos) also means meaning, cause... Logos - the First Cause, which can't be understood or explained. (The rest is about a real cat that went wild, and the use of "to drink" in a metaphorical sense - "to drink a stone with a gaze":)

P.S. I liked the one about "the cat and the uphill". It occurred to me that I can give an un-contrived explanation for it. A computer can't.

So I remembered that, to me, "the cat drank the stone and flew under the uphill" is a perfectly meaningful sentence. Anyone who has a tomcat knows that they are mysterious creatures with strange abilities. Moreover, I can prove that I once saw the cat drink the stone and fly under the uphill. It was last summer, near Varna. My parents-in-law have a little villa 30 kilometers from the city, in a droughty area overlooking Lake Varna. Complete wilderness. The villa has cardboard-thin walls and consists of two rooms.

The participant in the action is not "the cat" but one concrete cat - the tomcat Marti. Any lady would say he is a sweet fluffball, http://polly-and-kosio.net/_predi/pages/bulkata_s_mama_i_tatko_jpg.htm, and yet, when he gets into his own element, he turns into a happy, wild bloodsucker. Near the villa there is a ravine - whenever Marti comes to Varna, the Predator awakens in him. He immediately escapes into the ravine, and for a whole week nothing is heard from there but the howling of wolves, the barking of dogs and the cries of birds. Just when his "parents" (my in-laws) have lost all hope of ever seeing him again, he returns - proud, with the air of a victor, a veteran who has survived his own incomprehensible war.

So the sentence "the cat drank the stone and flew under the uphill" describes very precisely what happened the last time, just before he disappeared into the ravine. Behind the stone stood Ivancho, my son, who had just learned to walk. Ever since the "intruder" arrived, Marti hadn't been getting enough attention. Before fleeing to the hill and then flying under the uphill (toward the ravine), Marti one last time drank, with an envious gaze, the stone behind which Ivancho was hiding.

Regards, Kosio

#########

Comment # 2 by Todor Arnaudov | 18 March 2004 @ 21:58 EET | 0

Control Units, Causes, Goals, Achieving goals of a Control Unit == Pleasure

“Reasonable behaviour” – a search for local maxima of alternating and changing functions of expected pleasure, and much more


[In brief, the concept of a "Control Unit" or CU means something like a causal force; it is more complex than that, but the explanation is in the bigger theory "Universe and Mind", to be translated.]

Thanks for the opinion and for the opportunity to post some more reflections on the topic... (...)

I agree that “meaning” has other meanings in different contexts and circumstances, e.g. “a purpose”, “a goal”. “There is no purpose” means that I have no reason to do it; there is nothing that I want to achieve, linked to this “item”. As for “reason” or “cause” – you cannot find a cause that would make you do this particular thing.

However, the first, primary causes are also something with many features. E.g.:

I feel thirst; I want to drink some water. My goal becomes to satisfy my thirst, so I start to search for means to achieve this goal in the closest possible spatio-temporal area around me. I find that the closest source is the sink, which is a few seconds away. I move my chair a bit, get up, walk, open the door, pass through the corridor, open another door, turn around, take a cup, put the cup under the tap, grab the cold-water tap with my right hand, turn it; water flows; the cup fills up; I turn the tap back, bend my arm back, prepare my mouth to drink, tilt the cup, pour its content into my mouth, swallow...

Done! The thirst is satisfied...

Machine: Why did he drink some water?

Human: Indeed, why did I?
Impatient one: Because he was thirsty... It's obvious!

Machine: I don't think so. Why shouldn't the cause be that one or another nuclear world power did NOT unleash Hell just a moment before, which would have stopped him from drinking? Or that there WAS NOT an earthquake, etc.? [Right, these examples are also pruned because of Occam's Razor – 2010 addition – but the point is that there are zillions of possible reasons and causes, and we prune them because we search for simple explanations.] There are simpler meaningful possibilities – if there had been a bottle of juice or another soft drink on the table next to him, he might not have drunk water, but this beverage instead. The situation's definition did not state that there was no such bottle. This is an assumption; you don't even consider that it is possible.


So it is possible that there was a bottle of soft drink, a Coke, and that at this very moment he realized that such soft drinks cause bad teeth and recalled a visit to his dentist. This is not mentioned in the text, but it is not denied either.

And why not say that he drank water because:
- he is a human? Or because...
- he is a living being? Or because...
- it was hot? Or because...
- there was a schedule of the water supply, and at this very moment there was water in the pipes? Or because...
- not him, but his throat was dried out then?

Or because the day before he forgot to fill the bottle that he keeps next to the computer. And the reason he didn't fill the bottle was that the previous night he was too absorbed in commenting in an Internet forum. The reason he was so engrossed in that forum was that...

And so on... You poor humans – there is an endless number of possible reasons, not just the simple ones that you, short-sighted humans, see next to your nose.

Impatient one 2: Shut up, you stupid piece of metal. What the heck do you know? You're a machine; machines can't think... Everything in you is formalized, so you're stupid. And there was a saying... the more you know, the less you know.... Errr..

Impatient one 3: He drank water because he was thirsty. It's so simple!

Human: It seems so, because when one searches for reasons, for causes, one limits the space of the search in the way that fits one's own desires in the particular case. If one has read “He wanted to drink some water”, the first and easiest plausible explanation is the one written in direct words. Immediately, while one creates the virtual world of this situation, one sets up precomputed, biased causes, based on one's initial impression. Then, when one appears to search for the causes, one finds them immediately – nice and easy, they are right there at the root of the search tree.
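This biased, early-stopping search for causes can be caricatured in code. A minimal sketch (the candidate causes and their "plausibility" scores are invented for illustration, not part of the original dialogue): the candidates are examined in the order they "appear in the mind", and the search stops at the first one that crosses a low threshold, never touching the endless deeper candidates.

```python
# Toy model of the biased search for causes: the first "plausible enough"
# explanation wins, and the endless deeper candidates are never examined.
# All causes and plausibility scores here are invented for illustration.

def find_cause(causes, threshold=0.5):
    """Return the first cause whose plausibility crosses the threshold."""
    for cause, plausibility in causes:
        if plausibility >= threshold:
            return cause          # search concluded; deeper causes ignored
    return "no cause found"

# Candidates ordered as they "appear in the mind": the direct wording first,
# i.e. at the root of the search tree.
candidate_causes = [
    ("he was thirsty", 0.9),
    ("a bottle of juice was absent", 0.4),
    ("his girlfriend looked at strong men", 0.1),
    ("no nuclear power unleashed Hell", 0.01),
]

print(find_cause(candidate_causes))   # -> he was thirsty
```

The point of the sketch is only that "usually 'enough' is really a small amount": lowering the threshold below 0.4 would make the second candidate reachable, but the first plausible one still always wins.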

Impatient one 2: Did you believe this machine? You shouldn't! Never be persuaded by a computer, no matter how smartly it seems to argue. No matter how it appears, it cannot think, because it doesn't have a soul. I don't know what exactly a soul is, but... Blah-blah...

Human: But you cannot drink water if there is no sink and no tap. The reason and cause he drank water was both his desire and the existence of a means, a source, to achieve his goal!

Machine: Good. And why did he bend his arm before he drank?

Impatient one: Why... In order to drink! You are so dumb!

Machine: But the arm doesn't know what “to drink” is. It is just a lever. He bent his arm because his brain instructed the arm to bend. But this instruction was sent because the human decided to drink without bending his whole body and drinking directly from the water jet. And there could be a reason not to do that – because he has a trauma in his back... Why did he have the trauma? Because a few days ago he attempted to lift too heavy a weight; he did it because he wanted to exercise, because he saw his girlfriend looking too much at strong men's asses. Therefore, if we stop the search for a cause right here, then the reason, the cause that the man bent his arm was his girlfriend and her looks at the strong men's asses. However, why not say that those strong men are the reason? The woman wouldn't look at them if they didn't exist. Or their muscles? The mere existence of muscles. Or these specific circumstances: at a specific moment they met strong men in the park, the woman looked at their asses, the man was jealous, he tried to lift heavy weights in order to exercise, this caused a trauma to his back, later he wanted to drink some water, and he used to bend his back and drink without a cup, but this time he couldn't, so that's why he bent his arm...

The conclusion is that there are many, many possible reasons and causes that are actually true at the same time, because the reason and cause of every event could be assumed to be everything that has ever happened, and it depends on the moment we decide to stop and simplify. In the total, universal cause there is no meaning – there is no specific reason/purpose/meaning/cause. Intelligent beings make sense of things and choose causes/reasons/purposes/meanings based on their knowledge (and aims; knowledge = their biases/structure/configuration/state/development...).

(Intelligent beings fit reality to their virtual reality – they fit the laws of their virtual worlds to the laws they assume to be the laws of the real world; comment from 2010.)

Impatient one: The question was what the brain instructed the arm to do...

Patient one: Why “what the brain instructed”? The cause the arm bent was that the muscles contracted, then pulled the bones, which support the soft tissue of the arm and the hand, which holds the cup.

Machine: Humans call “reasons/causes/purposes” those things that they themselves, in each particular case, have accepted to call “causes”. In this specific case – the first items that appear in the mind. The first items that appear in a mind are the thoughts and feelings linked to the particular event it recalls. And if a plausible enough cause/reason/purpose is found (usually “enough” is really a small amount), the search is concluded and the searcher doesn't ask for more.

Patient: I see, but I'm tired already...

Konstantin: (…) There we are! For him, this is an act filled with deep meaning, but how would you persuade a computer program? Especially if the program is counting the number of viruses and bacteria that have entered the banker's organism at that very moment.

The persuasion depends on both sides. If nobody can persuade you to do something, it means only that – not that a reason/cause/purpose to do what they want you to do does not exist at all. If a man wants to do something, the most frequent reason he finds is... because he wants to! Usually one doesn't know why he wants exactly this, or, if he knows a little, he "proves" it with explanations like "I want it because I like doing it!", "I enjoy it" and so on. If, for some reason, one has to explain it in a more persuasive way, one usually searches for a plausible explanation of why someone would want to do it and why he would do it. If one doesn't want to do something, he says "I don't like doing it" – and he can always find a reason why he doesn't want to do it.


If this imaginary program is intelligent, it could easily find many explanations.

Imaginary_AI: Only a dumb program would claim that the act of the banker was meaningless. This banker is the richest man in Bulgaria, and according to statistics, a few months later he had considerably enlarged his wealth. That means he reached a higher local maximum of his wealth (see below). Therefore, according to the behavioral model of his virtual control unit (see below) and the statistics, he selected reasonable/meaningful actions and took good decisions in his spatio-eventual-temporal (event-based) region. His actions and decisions led him to his goal, which can easily be inferred as "possessing more money".

Imaginary_AI: There is no reason not to accept that buying the donut was part of his strategy to reach the general goal of "being wealthier" (for example, it made him feel good – you give yourself such explanations), because each and every action and event happening to a person is linked and related to the way he thinks and reasons and to his subsequent actions and decisions. All actions done by the desire of the virtual control unit itself, intentionally and not forced, are meaningful and reasonable to itself. That means they are target actions, goals. Usually such target actions are caused by specific desires, initiated by a search for local maxima or high plateaus [of a reward]. My living friend will explain this below. [Reinforcement learning.]

Konstantin: To me, the thirst and urge for finding a meaning does not prune contradictions; on the contrary – it searches for the cause, the prime mover, the initial force. This is not related to contradictions. For example, a tawdry clock with a thermometer is not incompatible or contradictory – it can hang on the wall, and neither the clock nor the thermometer disturbs the other. However, to me this object is pointless, meaningless – you can't tell me any reason to put it on the wall.

It is possible that no one can tell you a reason to put it on the wall, because you think that it is tawdry, and apparently this is "bad" and undesirable to you. Indeed, contradictions have to be searched for and checked in the whole memory – of the evaluating unit (the agent, the human) together with the environment. [All history and possible relations.]

For example, one can find bad memories, related to such objects. [Which one does not realize, but they are fixed in the patterns of his mind.]

Konstantin: On the other hand, the opposite phenomenon happens every day. People do absurd things which, however, appear to be full of deep meaning. Programmers sing – out of tune... A banker who owns millions and visits luxury restaurants once passes next to an old lady who is selling donuts; he takes one from a dirty bag and buys it for 30 cents. No doubt this donut was made by a poor, snotty baker, but... for the banker, this is the best food and the best thing in the world!

I don't think that any action or decision of any control unit is actually "absurd".

An evaluator calls it "absurd" when he or it doesn't really know the model of the control unit's behaviour, or when the evaluator assumes that it knows how the evaluated unit/being "should behave" in a given situation.

However, if something unexpected, or thought to be absurd or impossible, has happened, what is apparent is not that it is "absurd", but that the evaluator was WRONG. Either his model is wrong, not precise enough or confused, or it could be precise but lack the data required to make correct, complete and precise predictions.

If any Control Unit does anything, it has a particular meaning/reason/cause/goal behind it, even if it can be vague or unintelligible for an external evaluator.

The meaning/reason/cause is specific; it belongs to a particular working Virtual Control Unit. It is not a generalization, it is not a set of rules written in a textbook. It is a specific model of something that runs somewhere.

The meaning of "meaning" right here is a GOAL. Any action of a Control Unit (CU), done by an instruction given by itself alone (and not forced by an external CU, e.g. moving a hand with a wire), is tautologically a target action for this Control Unit, and it displays the urge of the CU to achieve the "purpose/meaning/reason" of its existence, according to its own understanding of what its goal is at the moment of decision. [Here "its own understanding" also includes what is implied by the specific construction/architecture/way the device/being works.]

See my Teenage Theory of Mind and Universe for more. To be translated and published... http://research.twenkid.com

The purpose – the GOAL – especially for a compound CU, is changeable, and the more complete the information about the exact event and circumstances we have, the more precisely that GOAL can be guessed by an external evaluator.

A human is an individual in the sense of being indivisible – indivisible is what he understands "he is" – but even theoretically he is incapable of knowing or understanding exactly why he does what he does at the maximum possible resolution of control.

I think a human [can be modeled as if he] is a complex of Control Units (virtual computers, simulators), where each of them aims to complete its program – the purpose of its existence, implied by its architecture and operation – with maximum precision.

An indication of reaching a goal of behaviour – finding an optimum of the learning function – is the feeling of pleasure.

When a Control Unit detects that it has reached the goal, it "feels pleasure" and aims to hover around this part of the graph of its [reward] function.
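The idea of a Control Unit climbing toward a local maximum of its reward ("pleasure") function and settling there can be sketched as simple hill climbing. A minimal sketch, assuming an arbitrary bumpy reward function and step size that are illustrations, not part of the original text:

```python
# Minimal hill-climbing sketch: a Control Unit adjusts its state toward a
# local maximum of a reward ("pleasure") function and settles there.
# The reward function and step size are arbitrary illustrations.

def reward(x):
    # A simple function with a single peak at x = 2.0.
    return -(x - 2.0) ** 2 + 4.0

def climb(x, step=0.1, iterations=100):
    for _ in range(iterations):
        # Try a small move in each direction; keep whichever improves reward.
        best = max((x, x - step, x + step), key=reward)
        if best == x:
            break              # local maximum reached: the CU "feels pleasure"
        x = best
    return x

peak = climb(x=0.0)
print(round(peak, 1))   # -> 2.0, the local maximum
```

Once `best == x`, no neighboring state improves the reward, so the unit stops moving, which is the "hovering around this part of the graph" described above.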

Therefore I believe that the human mind can be built as a mixture, a system of multilayer [hierarchical] control units, where each CU at a higher level controls with a lower resolution, i.e. more imprecisely, than the one below it.

For example, the top level of control sends a command with a length of 16 bits, while the description of the precise action to be done requires 128 bits or even... 2^128 bits. The details – the rest of those bits, 112 or 2^128 - 16 – are actually filled in by the "controlled" unit.
(It only seems to be controlled, because the action depends more on its own operation than on the operation of the top unit.) [When the evaluation is done, the evaluator probably usually starts from the top level, attributing "will", the initial cause, to it – comment from 2010.]
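The bit-budget example above can be sketched directly. The 16- and 128-bit figures come from the text; the expansion rule (random detail bits) is an invented placeholder for whatever the lower unit actually does:

```python
import random

# Sketch of the bit-budget example: the top level emits a 16-bit abstract
# command; the lower-level unit expands it to a 128-bit detailed action by
# supplying the remaining 112 bits of detail itself.

COMMAND_BITS = 16
ACTION_BITS = 128

def top_level_command():
    # An abstract instruction, e.g. "bend the arm", fits in 16 bits.
    return random.getrandbits(COMMAND_BITS)

def lower_level_expand(command):
    # The "controlled" unit fills in the missing 112 bits of detail.
    detail = random.getrandbits(ACTION_BITS - COMMAND_BITS)
    return (command << (ACTION_BITS - COMMAND_BITS)) | detail

cmd = top_level_command()
action = lower_level_expand(cmd)
print(action.bit_length() <= ACTION_BITS)   # -> True
```

Measured in bits, most of the action's description originates in the lower unit, which is the sense in which "the action depends more on its own operation than on the operation of the top unit".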

(See "Abstract theory of the exceptions of the rules in computers", to be translated and published... http://research.twenkid.com )

A more specific example:

They say that we can consciously control the movements of our fingers. Therefore our mind, our consciousness, can cause the finger to move...

Really?

We are free in the sense that when we think "I'll bend my finger right now!", the miracle happens – exactly that finger bends. It seems that it moves because of our free will. (This can also be interpreted as a coincidence, a match, and not real control (a causal relation), in the terms of other articles from my theory of that time – comment from 2010.)

This is power and control. However, in order for the finger to bend in reality, and not just for one to notice it in his mind, an enormous amount of information needs to be sent somewhere.

Not just a selection of a finger (say 20 or 30 bits) and a definition of the momentum – how strongly to bend the finger and so on; that is just a virtual definition of a finger in our minds!

In reality, the information that needs to be "entered" in order to execute that simple action includes the exact description of the precise movements and changes of every single particle that builds the finger, at the maximum possible resolution of the Universe.

Every single particle has a particular acceleration and is in a particular place. The mind doesn't possess all that information, and it can't, because it doesn't control in the strong sense of the word.

"Control" in its strong sense means with the highest possible resolution of control. [In the given environment/world/virtual world].

So, measured as an amount of information, the cause of the movement of the finger is contained more in the finger itself than in the mind – the apparent control unit – because the description of the finger and the muscles acting on it is much longer than the description of the simple abstract instruction that we can realize and control [consciously].

Each CU assumes that it is the cause of the events that happen, that it is "free" and does what it wants, because each CU is similar to the only really free Control Unit – the whole Universe, which includes all the details together. The whole Universe controls in the strong sense: what it "wants" happens, because it defines what is possible or not and executes what is supposed to happen.

However, not every CU is complex enough to declare "I control". The human mind is complex enough to do it, but actually the body is what controls the mind, not the reverse. The consciousness can embrace only a part of the causes of its own existence, and no matter how hard it searches for the deep, ultimate causes of its own actions and decisions, it cannot reach them.

This is the free will, the freedom. The Control Unit (a human mind) cannot find the causes and reasons for its behaviour in the way [at the precision, in the domain...] it assumes it should find them if they existed, and that is how the control unit "proves" that its own behaviour is free and unpredictable, not only for an external evaluator, but universally.

Also, this is a convenient conclusion, since the CU intentionally aims not to find proofs of predictability, because one of the major goals of every CU is to feel like a MASTER – no matter how simple or complex (built of many simpler ones) the CU is. A CU aims to feel like a MASTER and not to put this in doubt. CUs are similar to the Universe and aim to be like it.

TO BE CONTINUED.... with part 3/4

http://research.twenkid.com
http://artificial-mind.blogspot.com
http://eim.hit.bg/razum (Bulgarian)

Other keywords: Universal AI, Twenkid Research