Monday, February 20, 2012


Philosophical and Interdisciplinary Discussion on General Intelligence, AGI and Superintelligence Safety and Human Morals | Cognitive Origins of the Concepts of the Human Soul and its Immortality | Free Will and How it Originates Cognitively | Animate Beings and the Soul, and the Cognitive Reason for the Belief that "Thinking Machines can't have a Soul and Consciousness" | Technology Making us more Humane | The Egoism of Humanity | And more

By Todor "Twenkid/Tosh" Arnaudov
http://research.twenkid.com/wp/

For the followers of the blog: this is not the paper announced in the previous post.


The expression and sharing of the following reflections and speculations was inspired by a posting for a position at the Faculty of Philosophy, Oxford University, published on the AGI List (thanks to Sean O'Heigeartaigh).

http://www.fhi.ox.ac.uk/get_involved/future_tech_vacancies/futuretech

See the PDF document; the research foci are on p. 2, section 5:

http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0014/25034/Futuretech_Tamas_RF_230112.pdf 

5. This is a research position focused on topics related to the long-term future of machine intelligence and AI. Relevant research foci include: 
- Studying paths towards strong AI: Monitor the current state of progress in the field; identify milestones, roadmaps, etc. 
- Machine intelligence: Analyze fundamental concepts—e.g. how to define and measure general intelligence in artificial systems; how to distinguish different kinds of goal-seeking systems.
- Self-modifying systems: What can be proved about the performance and capabilities of different kinds of recursively self-modifying programs? Can a framework be developed in which demonstrably safe, recursively self-improving AIs could be constructed, with stable and human-friendly goal systems? 
- The role of big data: What impacts and applications will arise from the increasing availability of enormous data sets? What are the fundamental tradeoffs between processing power and data? Is there a minimum amount of data that an arbitrarily powerful intelligence would need in order to effectively deal with various task domains? Can one quantify how much information the human brain contains at birth to enable it to develop general intelligence? Can one quantify how “difficult” it was for such a system to evolve? 
- Philosophy of computer science and of AI: What does it mean to implement a computation? Would a “Boltzmann Brain” or a “Swampman” be conscious? 
- The control problem: How could one ensure that an artificial superintelligence would be safe and beneficial? 
- Can one analyze classes of possible utility functions and construct general statements about the behavior of expected utility-maximizing superintelligence based upon utility functions of a particular class?

Sounds familiar - I have worked on and studied the topics listed in section 5 of the PDF since the "teenage theory" era. My ultimate aim is to understand general intelligence and eventually build a developing AGI system; thus my intent, when working on related philosophical issues, is to put the speculations and explanations into a coherent computational/theoretical form, aligned with a theory of general intelligence and with the general/cybernetic/systems-theory trends in the evolution of the Universe as I see them (a sort of cybernetic metaphysics).

I also aim to match my theoretical speculations and insights against all accessible kinds of neuroscientific, evolutionary, cognitive-science, behavioral etc. evidence I'm aware of at the time; often I discover such evidence later. For example, ~10 years ago, while not as familiar with neuroscience as I am now, appropriate reflection and a basic understanding of developmental psychology allowed me to induce the existence and functional distinction of the hippocampus and the neocortex using only behavioral evidence. See: http://artificial-mind.blogspot.com/2010/06/teenage-theory-of-mind-and-universe.html
Teenage Theory of Mind and Universe (Excerpts, Part 1) - Theoretical Induction of Neocortex and Hippocampus using Consciousness; Compression 

Regarding the philosophical problems, some of my oldest works include analyses of questions such as:
-- What is a "soul", really? Why do people believe they have such a thing, and why do they intuitively tend to believe that higher animals have one while machines shouldn't?
-- Why does a mind believe the soul is immortal and should exist forever?
-- How is death actually perceived, from a thinking machine's and a human's perspective, and what are humans really afraid of?
-- What do humans actually (intuitively) mean by "free will", and why do they insist that they have it and machines would not?
-- What do the concepts of hell and heaven illustrate about the operation of the human mind? Why do so many religions have such concepts, how does a cognitive system invent them, and why are they so powerful?
-- Discussions about consciousness and what it is from different perspectives.
-- General metaphysical principles of the Universe and their trends as displayed in society, computers and intelligence.
-- Core human drives, such as egoism vs. altruism, and why is love socially/pragmatically praised?
-- What is creativity, what is it to be original, and why do humans believe computers can't be creative?
-- What is actually implied by "rational"? Can an intentional agent (including a human) really be "irrational", or is it rather that the model of the agent used by the evaluator of rationality is too coarse-grained and wrong?


Regarding those sample directions in section 5, I'll use the opportunity to give some contrary statements from the point of view of the thinking machines - ones that would be smart enough to unveil our biases and our self-praising beliefs in humans' "undisputed" moral superiority.


Oxford: Machine intelligence: Analyze fundamental concepts—e.g. how to define and measure general intelligence in artificial systems; how to distinguish different kinds of goal-seeking systems.


  • 1. Hutter and Legg's Definition of Machine Intelligence, and the Educational Test

There is serious work already done on this - see for example M. Hutter and Shane Legg's paper. Below are slides I've prepared from it for my students, with some additional notes of mine:
http://research.twenkid.com/agi_english/Machine_Intelligence_Hutter_Legg_Eng_MTR_Twenkid_Research.pdf
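For reference, the core of Legg and Hutter's definition ("Universal Intelligence: A Definition of Machine Intelligence", 2007) fits in one formula - the intelligence of an agent $\pi$ is its expected performance over all computable environments, weighted by their simplicity (simpler environments weigh more):

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $E$ is the set of computable (reward-summable) environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected total reward agent $\pi$ achieves in $\mu$.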

  • 2. Maximum degree of value-unloadedness of the system's raw input and output, and maximum resolution of perception and causality/control, compared to the maximum possible resolution in the environment where the system exists - that's one definition of mine of the generality of artificial intelligence

Another simple/core measure of "generality" in principle, which I tried to discuss on the AGI List in the autumn of 2011, is the generality of the raw sensory input from which regularities are induced - that is, the value-unloadedness of the initial inputs and outputs, such as sensory matrices similar to the human auditory, visual and tactile input.

In such representations, the initial cognitive data mean just a sequence of numerical values and their coordinates in space and time - within the matrix, within the records of past states of the sensory matrices, and relative to some internal parameters. That's the most general (value-free) input, to which value - meaning, generalized and specialized - is added further on.

The initial output in humans, and in a maximally general intelligence, is just as general -- muscle actions -- which are translation, rotation, propulsion. These operations alter the coordinates of the parts of the system relative to the environment. That's true also for the vocal tract and speech: the surfaces of the vocal cords vibrate - alter their coordinates at a high frequency, compared to muscles - and the motions and coordinate changes of the tongue, lips, jaw and larynx modulate the sound. The environment - "the real world" - is just the richest and lowest-level sensory input, the one with the highest possible resolution, the most detail, the computationally hardest to model (predict).
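As a minimal illustrative sketch of that representation (my own toy formalization for this post - the class names and structure are invented, not taken from any library or published system): the value-free raw input is just numerical values indexed by coordinates in space and time, and the value-free raw output is just a coordinate change of a body part.

```python
import numpy as np
from dataclasses import dataclass

# Toy sketch of "value-unloaded" raw input/output (invented for illustration):
# input = a matrix of bare numerical values indexed by space and time;
# output = a bare coordinate change (translation/rotation) of a body part.

@dataclass
class SensoryRecord:
    t: int                # discrete time step
    values: np.ndarray    # e.g. a 2D "retina": bare intensities, no labels

@dataclass
class MotorCommand:
    t: int
    part_id: int          # which body part (joint, vocal-cord surface...) moves
    delta: np.ndarray     # change of its coordinates relative to the environment

history: list[SensoryRecord] = []   # records of past sensory-matrix states

def sense(t: int, frame: np.ndarray) -> SensoryRecord:
    """Store a raw frame; any meaning/value is induced later, from regularities."""
    rec = SensoryRecord(t, frame)
    history.append(rec)
    return rec
```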
  • The more initial value-loadedness, the less generality of the intelligence derivable from that point on
In this regard, NLP (Natural Language Processing), on the other hand, is a far less general intelligence, because it starts with very abstract concepts whose derivation from lower-generality concepts is lost and unrecoverable from the words themselves, without using lower-level inputs and concepts - less abstract inputs.

That's a common problem in narrow AI and NLP - to move forward, the working concepts of the theories need to start from a lower level of generalization; otherwise they start and finish as combinatorial equations with no grounding. See the series of critical articles: What's Wrong with Natural Language Processing?

Oxford:   Self-modifying systems: What can be proved about the performance and capabilities of different kinds of recursively self-modifying programs? Can a framework be developed in which demonstrably safe, recursively self-improving AIs could be constructed, with stable and human-friendly goal systems?
Oxford:   The control problem: How could one ensure that an artificial superintelligence would be safe and beneficial?


It seems obvious that any significant invention can be used either for good or for evil. But while thinking machines *could* only hypothetically turn into terminators, there is one of nature's "inventions" that has always been a terminator and has always been an unstoppable killer: HUMANS.
  • Humans are the archetype of James Cameron's "The Terminator"
When discussing the dangers of AGI with my students I've asked them to reflect on:

- How could one ensure the safety of humans and human intelligence? How could one control humans, and have we succeeded, over the history of humanity, in preventing wars or genocides?

If anything could keep humans from killing and hurting each other, it would be either:
-- a utopian society (a totalitarian one, for example...)
-- machines which are stronger and faster than us, monitoring and able to react fast enough in case of need
-- some sort of cyborg-like implants or environmental means of prevention which block the violator's brain or muscles, etc.
-- or, why not, altering our "human" nature physically

In the current state of affairs, though, the human race should admit that all homicides are committed by humans and that every person is a potential killer and criminal. History has proven for millennia that humans, or a lot of them, develop as greedy and violent - exploiters, killers - and may turn insane and aggressive for no apparent reason, and the major historical events are wars, exploitation, slavery and genocides.
  • Two Camps/Enemies Fighting
All those acts are ultimately of humans split into two camps, applying force and aiming to conquer, rob or destroy others - it's not about humans vs. non-humans, one ethnic group against another, or one race against another*. It's not because of their "genetic difference" (see below for a remark); it's all about "mine" vs. "yours", no matter what one or both of the sides want to take or keep away from the enemy. It comes from individuality, and then from groups with a particular identity, which can be based on arbitrary values.

In this regard, there could be machines on either side**, and there could be thinking (or not-so-smart) machines fighting against other machines (to protect particular people and themselves), or against aggressive humans in order to protect civilians or other machines, etc.

There might even be different "taxa" or "states" of machines fighting each other, etc. But in my opinion a superintelligence, if autonomous and not spoiled, is supposed to be wiser than humans: it might be created without - or would itself wipe out or better control - the counterparts of the primitive human brain/behavioral functions which in humans drive emotions such as mindless anger, rage, fear, lust and addiction.

Such emotions lead to wars and violence. Wise men are not warriors, but unfortunately the ones who took over governments throughout history were the ones who had the force, aggression, greed, hatred, cruelty and power to do it. Wisdom is powerless against physical force.
  • "Neurobiological philosophy"
Indeed, familiarity with the comparative, computational, evolutionary etc. varieties of neuroscience happens to lay many philosophical issues down onto evolutionary-biological, physiological, behavioral etc. grounds.

* Racism and theories about the "inferiority" of races or ethnic groups or nations or whatever groups are at bottom just made-up formal excuses for applying force and/or measures in order to conquer, exploit, eliminate etc. (rage, greed, anger) - the stronger one wins. Many human acts are driven by low urges, violent ones of course among them. However, people also have higher cognitive functions which run in parallel, and these need and produce (invent) systematic reasons to explain the behavior as a whole - behavior which in its gross directions is largely driven by the primitive urges anyway. That's supposed to be related to the phenomena I discussed recently in the following publication:
Rationalization and Confusions Caused by High Level Generalizations and the Feedforward-Feedback Imbalance in Brain and Generalization Hierarchies
http://artificial-mind.blogspot.com/2011/10/rationalization-and-confusions-caused.html

** War machines don't require very powerful (general) intelligence to be effective - recognition of enemies and allies, aiming, navigation and transport are not that complex if it's just about destruction and defense, and if the agent is controlling a tank, for example. I suggest Hugo de Garis's book The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines - which, though, I haven't read myself and can't comment on.

Back to the topic:

Oxford:  The control problem: How could one ensure that an artificial superintelligence would be safe and beneficial?

Another example I've given to my students:
  • What if your fellow soldier was a machine, and your enemy - a human?
- Imagine, in a state of war or in a gunfight, that your fellow soldier is a humanoid robot, and there is a group of enemies - aggressive humans who want to kill you. Should you defend "humanity" and let your "brothers" kill you, or would you fight back together with the robot, which is on your side?

In fact, soldiers "love" and create emotional bonds even with their guns, tanks and aircraft - what about intelligent and/or humanoid robots that look and act like them and protect them?

Another confusion, regarding machines as intrinsically "antagonistic", "inferior" and "evil" to the human race, is related to the fact that society now takes "human rights" for granted and obvious, and sometimes technological society is blamed for alienation and violence (weapons of mass destruction, world wars). This is illusion and ignorance; humanity did not obey human rights until very recently - recall again the millennia of wars or of plain exploitation: slave-masters, serfs, workers in terrible conditions, mindless wars and mindless genocides.
  • Don't take human rights for granted - they were not always here. Technology made them possible.
In fact, the generalization of an abstract concept of humanity as a whole, and the application of those rights de facto and not only in abstract philosophical works, are recent artifacts. In my opinion they actually became possible because of the advent of advanced technologies of all kinds, especially in medicine, transportation, information and telecommunications, where IT and communications are crucial for the development of all sciences and technologies, and overall progress is mutually enhanced between different fields.


These technologies allowed massive and fast interpersonal and intercultural communication, worldwide economic and cultural collaboration, a higher standard of living, and higher levels of education and awareness of the world. These developments allowed more "friendliness" between different societies at all scales.


This is unlike, for example, the authoritarian one-directional communication/propaganda of the nationalist epochs some 100-200 years ago and of the 20th-century totalitarian regimes (the early advent of radio telecommunication, and the specific antagonistic political situation). It is also unlike the very narrow, localized and heavily culturally biased image of the world and of other cultures in the epochs without fast transportation, without a steady food supply, lacking health care, education, knowledge and telecommunication, when societies were generally very fragmented, isolated, exploiting each other and separated into violently antagonistic camps at all scales.

This topic can be elaborated in detail, but in general I believe that progress, and IT, communication, transportation, medicine etc. technologies, allowed "human rights" to be applied.
  • Machines are making us "humans" and "humane" as we see the terms today
In my opinion, in the long run it is the machines (the higher technologies) that are making us into the "humans" we are now. Technology is where intelligence crystallizes, and intelligence is fighting our beast-self.

A religious objection: Christianity etc. taught people to be good, to love their enemies and all people, etc. Technology makes us bad and greedy ("consumers"); it's a product of evil and the devil, etc...

Unfortunately, the beautiful suggestions to love all people etc. were not, and could not be, applied totally; and if Christian morals were applied literally today, they wouldn't pass - take fornication, not to mention the darkest times of all the churches.

There have always been wise people who loved all people and stood against violence and tyranny, but in reality - at large scale - that was a utopia. An enormous number of wars and outrageous acts of violence were politically justified with words about faith, "morals", sacred goals, "god's commandments" etc., where in my opinion the root of it all was as simple as:
  • The ones who don't support your vision and direction (the same as your party's, your society's, your nation's, your own), or refuse to obey your laws no matter what they are, don't subordinate themselves, don't agree, etc. -- they can be tortured and killed without any mercy, because you have the power to do it, and according to... the power is from... god.

    As stated above, human violence is not about humans vs. non-humans, or one race against another, or one religion against another. It's just about me vs. you: "I am more powerful and I am right, thus you should obey, or you will be destroyed." In one word: egoism and brutal, elementary animal instincts. (See below also.)

Yet another opposite POV on the same question from Oxford's list of research foci:
Oxford: The control problem: How could one ensure that an artificial superintelligence would be safe and beneficial?

People take for granted that AGI should be safe and beneficial from their current point of view, and that the world should stay static as it is now. However, social and moral values change; humans change too, and their values with them, even without radical physical changes such as being modified into new biological species of transhumans, cyborgs or whatever...
  • Safe and beneficial for whom?
Evolution is usually ignored or forgotten - humans feel themselves "the undisputed masters", "the top of nature". Some may claim that there is scientific etc. evidence for that. To me it's a made-up excuse.
  • Humans believe they are the top of the Universe, because they are the top egoists in the Universe
The reason we think of ourselves as the masters is, in my opinion, much simpler - our egoism: the individual one, and the egoism of society and of the human race as a "super-ego". Anthropocentrism is an expression of human egoism, and in anthropocentrism every single individual identifies as a representative of this idealized model.

Forced altruism is an extended egoism as well -- society acts as an "individual". It has its specifics - values, identities - and it forces its members to contribute to the preservation of the "body" and "values" of society as it sees them at the moment.

If anybody opposes this, he's punished - everyone should be an altruist, meaning serve the needs of society: yet another ego (a "super-ego"), satisfying its needs at the expense of exploiting the smaller individual egos.

In order to change the values of the super-ego to which a small ego belongs, an aggregate of small egos has to collaborate and tune to the same wavelength to gather enough force. However, that would diminish their identities and turn them into another "super-ego".


This process goes on in living organisms, from cells to organisms and ecosystems, and in religious groups and political/state governments. The highest level of causality/control aims at keeping itself as it believes is "correct/stable/right" (its ego), while the lower-level components aim at serving their smaller "egos", but are subordinated by "force" and by the formation of local, smaller "super-egos".

  • Humans like to exploit and enslave
For example, the classical Asimov robots in most of the stories, and the laws of robotics, are an example of humans wanting personal slaves and servants to obey their orders; the same goes, to an even bigger extent, for Karel Čapek's play which coined the term "robot".

I guess that to society back then this view was more acceptable than it is now. Some would say "of course": they are machines, they are built and not born, "they don't have a soul" or personality and individuality (see below on this "spiritual" topic), etc.; therefore they should be slaves - that's fair, according to humans.
On the other hand:

  • Is it fair or moral to enslave a being if it outsmarts you and is behaviorally, cognitively and physically more sophisticated  than yourself?
A good example of how this feels is the original version of the movie "The Planet of the Apes" (1968), which is based on the original novel. http://www.imdb.com/title/tt0063442/

What if the gorillas' or chimps' predecessors had been smart enough to recognize the rise of the Homo lineage, had managed to kill it, and had forbidden "illegal genetic mutations" for the sake of gorilla welfare - because they had the power and will to do it, and because they had foreseen that "this next step in evolution can't be controlled and proven to be safe and beneficial" for them?
The apes would have been morally right for their society, because the future Homo species was not beneficial for them - humans killed apes and restricted their habitats. The human race has killed billions of living beings for its own benefit, or because we considered the other species "pests". Animal rights are a recent application too, and they don't stop us from killing animals - it's just more "humane killing"; and in fact a big part of the identity we consider "human" is rather animal and evolutionarily very ancient.

  • If the gorillas' and chimps' predecessors had measured the risks of human evolution, they would have killed our lineage millions of years ago - and would have been morally right for their society...
Indeed:
- Who gave us the right to kill other species?
The answer is straightforward: we did - it's the law of the jungle. The physical power is ours, and there are no other beings intelligent or powerful enough to contradict or oppose us.

We defined morals to fit us, so this is moral: we consider ourselves "higher", or hold that living is "sacred" (the living is us - that's why it's sacred). We're "smarter", we're "more fit" - our measures and classifications are the ones that match the principles of our social hierarchies: if you're on top, you deserve to have the ones below obey and subordinate themselves.

Humans simply feel themselves the masters, and they have the power and the need to do whatever they want - i.e., whatever "their morals suggest to them".

It's interesting to extend the problem of robots and artificial general intelligences being treated as not adequate persons - justified by their nature of being "built", not born... What about human children?

  • Aren't children and humans built, too? Do they deserve equal rights to... older humans???
Children are intelligent beings and eventually become citizens with full rights, but everybody was initially "built" by somebody else too, like the machines - though not in a factory, but by biological "robots": the RNA-DNA protein-building processes.

Everyone owes her existence "legally" to somebody else... In fact, to many others - I don't mean only family predecessors, but also the society which provided a secure environment for them to live in.

  • Should the mother or the parents (and society) then enslave their children?
In fact, parents and society figuratively do - children depend on their parents' good will; they are deprived of many choices, of money and of the right to earn money and goods, and they lack many civil rights for 16-18 years or more; they are obviously considered "inferior" by adult society and by their parents. This is partially justified by their real initial incapability to survive on their own, but the latter is mostly because they live in a world of generally stronger and smarter beings with which they would have to compete - they are protected by their parents and society at the expense of not having particular rights. Even though gifted children are functionally and mentally superior to many young men and adults, they cannot climb the hierarchy until they serve their "duty time" up to the age fixed by the adults in the laws, which are considered "right".

Indeed, the inferior position of children relative to their parents and to older people is perhaps responsible for the following visual psychological phenomenon: "Why shorter Stature and Lower camera angle are Unconsciously associated with "Inferiority"? Memories from childhood (Nature or Nurture)" http://artificial-mind.blogspot.com/2011/08/why-shorter-stature-and-lower-camera.html


Regarding some "spiritual" issues about AGI/thinking machines

  • Another "right" by which humans put thinking machines in an inferior position is the concept of the "soul" as a sacred status symbol, and of self-awareness and consciousness as some supernatural, magical characteristics of humans - instead of comparing and studying them as cognitive properties/capabilities

Let me present, for example, what "soul" actually means for a mind according to my AGI theory and model of the mind; I have had insights and discussions in this direction since my earliest major works, about 10 years ago.

From a computational perspective, the initial and essential meaning of the concept of a "soul" is just a generalized model of the sensory inputs associated with the initial perceptions of human beings.

The template for this model, and for "animate beings" in general, is constructed from the perceived dynamics of the sensory inputs generated by and associated with the interaction with the particular mother/caregiver, with one's own body and visceral senses, "happiness" level, etc. Initially the baby doesn't distinguish itself and its feelings as individual; they are closely related to the caregiver, so the observed generalized model of the caregiver is part of the self. Later on this template is associated and linked with a wider range of people.

The model is further extended to other agents and animals, because their bodies, faces, motions, sounds, behavior, interactivity etc. match or are similar to the initial template - perhaps also because they make one feel (or recall feeling) particular sensations associated with the first "template". This process is cyclically reinforced.

It probably starts from the eyes/eye contact and the lips -- the eyes are the clearest (most self-contained), most dynamic and simplest early visual sensory pattern. See: Learned or Innate? Nature or Nurture? Speculations of how a mind can grasp on its own: animate/inanimate objects, face recognition, language...
[Neural correlates will be discussed in another upcoming work]

This abstract model of a human/animate being is used by the mind to predict its future inputs (perceptible future transformations); therefore it is not derived from, and is not supposed to cover, all the minute physical details (molecules, chemical reactions and full-precision physical laws).

The mind has the computational capabilities to model, at a low level, only approximate aspects acquired through sensory experience - e.g. motion pictures and records of all kinds of sensory experiences and emotional states: one's own, and the implied ones of others (guessed by associating/matching them to one's own).

  • Nobody really dies while there are people who remember her, because the mental models of the physical bodies don't die.
Here comes the explanation of my theory of the cognitive origin of the belief in an eternal soul:

When somebody else dies, in order for this event to be perceived and evaluated as "death", there must be another living (running, functioning) mind. In that mind the model of the "human"/"animate being", with the specifics of the dead person's soul, is still online; the model is not deleted, and the mind can still imagine it - recall and predict/replay/generate plausible sequences of future "motion" or sensory transformations, just as it could before the person died.

The mind/self obviously cannot imagine its own death with its own resources either, because it cannot operate while not operating. That probably explains why, when reflecting on being dead, the mind often imagines itself not as "not functioning", but just as the known self-model - only imagined as not embodied, not feeling particular emotions, being in another place, etc. The general properties of the "self" are still kept, such as reflection, memories, imagination.

In fact, for the mind there is no perceptual difference between somebody going away (and never coming back) and somebody dying without the mind receiving low-level or clear signs of it - in both cases the deceased agent is simply not interacting anymore; there are no new low-level inputs associated with/recognized as coming from the person who was previously known as the living person X.Y.
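A minimal toy sketch of this point (my own illustration - the class and data are invented for the example): a learned model of an agent can be "run" to generate plausible behavior regardless of whether fresh inputs from that agent ever arrive again, so from the inside, departure and death look alike.

```python
import random

# Toy illustration (invented for this post): the mind keeps a learned model
# per known agent; the model generates plausible future behavior whether or
# not new observations of that agent ever arrive again.

class AgentModel:
    def __init__(self, name: str, observed_behaviors: list[str]):
        self.name = name
        self.behaviors = list(observed_behaviors)  # remembered low-level records

    def imagine(self, steps: int = 3) -> list[str]:
        # Replay/generate a plausible sequence, as the mind does in recall.
        return [random.choice(self.behaviors) for _ in range(steps)]

models = {"X.Y.": AgentModel("X.Y.", ["waves", "speaks", "smiles"])}

# Whether X.Y. has emigrated or has died, no new inputs arrive; yet the model
# is still "online" and can be run - the person remains imaginable as alive.
print(models["X.Y."].imagine())
```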

The concept of a "soul" is familiar to all kinds of societies, including the most primitive ones, and children use the word too - so it apparently must mean something simple enough, common, and accessible through the raw senses.

Besides, I suspect the imperfection of the mind stated above has to do with the origin of Idealism in philosophy.

  • The belief in the eternity of ideas, soul, spirit and so on comes from the impossibility for a mind to imagine itself not operating, using its own resources. A mind can perceive and imagine only the death - the cessation of operation - of other agents, and while it does so, it evaluates it with a running mind. The "running-ness" of the evaluating mind confuses the model of death.

    ...
  • Another meaning implied by the word "soul/spirit" is of course close to consciousness and "qualia" - but the lack of qualia can be neither proven nor disproven.

Humans may claim that they have a soul but a machine has not, because it's "just electronics", "just 1s and 0s", "a bunch of metal and semiconductors" etc.; yet people usually don't realize that a thinking machine that is intelligent enough may claim the same about humans:

"The Machine: Yeah, and your human emotions are quantitative, qualitative, spatial and temporal correlation of chemical substances - proteins, hormones, nucleinic acids etc. I can explain you all the details, but I'm afraid it won't be of any use for you, because your poor human brains won't be capable to cope with that complexity."*

Regarding animals and souls - humans tend to grant a soul to animals that look very similar to us and behave like us (we see matches to the initial human model - to that "soul" template), where "us" means each of us individually. That's one reason why people normally feel less remorse about killing an insect or a plant than about killing a dog -- insects are just too different visually and behaviorally, and plants don't react to pain.

Empathy is driven by fooling the mind as if the person/animal perceived as suffering were you: the perception matches the sensory experience one has of one's own suffering, and the mind assumes the experiences of the other agent should be similar.

As already suggested, the initial perception of "animate beings", and the generalization of a soul as a model of a human, initially comes from the actions and observations of our own body and senses, and from the perceptions of the behavior of our parents and of the people who care for us - and it's strongly correlated with *us* as well.

Initially, babies don't recognize others as others; they associate their entire experience, all their actions and expectations, with their senses (visual, tactile, visceral, happy/sad - the latter correlated with the levels of oxytocin, serotonin, dopamine, adrenaline and other chemicals). The virtual generalized model of human beings gets imprinted and bound to our basic cognition and to our basic feelings and sense of ourselves.

Indeed, the process of bonding and empathy in mammals is strongly driven and reinforced by the release and the effects of the neurotransmitter oxytocin; it is released when giving birth, during sex, and when interacting with animate beings (whether pets or humans). For example, that's why caring for a pet is relaxing, and why violence against animals may be an indication of future sociopathic behavior - some subjects have "faults" in the reception of oxytocin, or fail to produce it, and feel no remorse and no empathy.

There's another hormone - cortisol - which is produced under prolonged stress, such as fear. Fear causes an initial release of adrenaline, and if the stress is not overcome, cortisol is released. Cortisol breaks down tissues in order to produce glucose, and it degrades particular brain structures too (the hippocampus, involved in declarative memories). Regarding empathy in particular, cortisol is an oxytocin antagonist: in a state of sustained fear, oxytocin is swept out, and mammals get aggressive.
The desire for a hug in a state of stress is also related - hugs release oxytocin. It's not surprising that the usual profile of a serial killer is a person with an awful childhood, which might have damaged his oxytocin-related neurochemistry and his template of other humans and animate beings.
* The thinking machine's statement on the human soul and emotions is a translation from my novel "The Truth" (Истината), first published in late 2002. It's available online, but only in Bulgarian.

  • Regarding the man-machine relation, the "soul" - and also consciousness as qualia - are a sort of "status symbol" for those who "hate machines" but have no better reason, or can't understand well why. (Such as Mr. Searle - IMHO some of his claims are self-humiliating and ridiculous.)

Humans, and particular individuals, are taught to see themselves - and praise themselves - as "kings of nature", and they're afraid of being "dethroned" as "the most special ones". When threatened, they search for proofs, such as accusing machines of "just doing what they are programmed/told to do", while "humans have free will, they can do what they want" - so AGI is reduced to a simple machine.

  • The opposition "doing what they are programmed to do" vs. "free will" is confused in many ways. For example:

A) It implies that humans have a soul or consciousness because they *don't always know or cannot always predict why they do what they're doing, with the precision they believe they would have had if they didn't have free will*.

However, the lack of appropriate knowledge is not free will, but less sentience, or even randomness. (I guess you're familiar with the paradox: if one has completely free will, causally unrelated to the past and/or the environment, that means her will is random - and she's not supposed to get credit for her random choices.)

B) At the same time, humans justify and define their free will with examples such as "if I want to do something, I can do it".

They have "choice", and there's "intention" - a match between the desired and the caused. See the first pages of "Universe and Mind 4" (2004) for a discussion of the resolution of causality/control and of how low it is in such cases.
While machines:

C) just do what they are programmed to do, exactly; they have no choice, no will, no soul, etc. In fact, even the simplest algorithm has a "choice" if it contains a single conditional operation.
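A trivial snippet to make the point concrete (illustrative only):

```python
# A single conditional already gives an algorithm a "choice" - its behavior
# branches depending on what it perceives.
def react(perceived_value: float) -> str:
    if perceived_value > 0.5:   # the one conditional operation
        return "approach"
    return "avoid"

print(react(0.9))  # -> approach
print(react(0.1))  # -> avoid
```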

Generally, I suspect this "hatred" comes from the perception of machines as "inanimate objects" and "not humans" - "not like me, and I am the master". The attempts to justify it are rationalizations - see the already mentioned article: Rationalization and Confusions Caused by High Level Generalizations and the Feedforward-Feedback Imbalance in Brain and Generalization Hierarchies.

Point (B) is an illustration of the observation of a *match* between intentions and future sensory input - intentions are predictions, expectations of future sensory inputs, including the proprioceptive ones, which are related to motor outputs, muscular actions. (A toy numerical sketch follows after the list below.)

- It is also noticed that the match between the predicted and the observed has a precision assumed to be high enough to accept it as a display of free will (B).
- Yet it is noticed that sometimes the precision is not high enough for one's own actions (you may want something, but it may not happen), and especially when expecting and evaluating other agents' actions - one can't predict others' behavior that precisely. However, others are recognized as "similar, like me" (see the "soul" template above), and from their behavior/their model, implications are induced and reapplied to the model of the self.
- Besides, the "machine hater" applies his vague and inarticulate knowledge about computers and inanimate objects to the thinking machines: Computers - you type in, it executes your commands. Programming - you type orders, it computes. Algorithm - it executes exactly as programmed. It's constructed of 1s and 0s. It works without mistakes, like a machine (too high a precision), etc.
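Here is a minimal numerical sketch of that matching idea (my own toy illustration - the precision score, threshold and trajectories are all invented for the example): "will" is read off wherever the match between the predicted and the observed is precise enough.

```python
import numpy as np

# Toy sketch (invented numbers and threshold): attribute "will" where the
# match between intended (predicted) and observed trajectories is precise.
def match_precision(intended: np.ndarray, observed: np.ndarray) -> float:
    # Turn mean squared error into a 0..1 "precision" score.
    return float(1.0 / (1.0 + np.mean((intended - observed) ** 2)))

THRESHOLD = 0.9  # assumed acceptance level, arbitrary for the example

own_intended   = np.array([0.0, 0.1, 0.2, 0.3])    # my planned hand trajectory
own_observed   = np.array([0.0, 0.1, 0.2, 0.31])   # happened nearly as intended
other_guess    = np.array([0.0, 0.2, 0.1, 0.5])    # my prediction of your move
other_observed = np.array([0.5, 1.4, -1.0, 0.9])   # your actual move

cases = {"self": (own_intended, own_observed),
         "other": (other_guess, other_observed)}
for label, (predicted, observed) in cases.items():
    prec = match_precision(predicted, observed)
    verdict = ("precise match -> perceived as willed/intended" if prec >= THRESHOLD
               else "imprecise -> unpredictable 'glitch', ascribed to 'free will'")
    print(f"{label}: precision={prec:.3f} -> {verdict}")
```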

The latter confusion is not surprising - the renowned mathematician Lady Ada Lovelace, daughter of Lord Byron, made it too, but Alan Turing gave her a nice answer some 100 years later. See:
T. Arnaudov - Faults in Turing Test and Lovelace Test. Introduction  of Educational Test. In Todor Arnaudov's Researches Blog, 17/11/2007
http://artificial-mind.blogspot.com/2007/11/faults-in-turing-test-and-lovelace-test.html

For (A), the precision is decided by a comparison between the precision of self-predicted actions (with proprioceptive feedback, recognition of parts of the self, etc.) and the acceptably lower precision of predicting the behavior of other agents, without proprioceptive feedback (other people's future body trajectories and utterances are harder to predict than our own body's; sometimes there are unpredictable "glitches" caused by their "free will" - meaning an unpredictable component of the model of their "soul", in the computational sense defined above).

Going down to the neural level, "will" is the execution of muscular actions initiated by sequences of neuronal activity in the motor cortices of the brain, which are at the roots of the "intentions" in the neocortex - and perhaps the initial source of sample data for this kind of multi- and inter-modal match.
Will, as a match between intentions and future perceptions, is encoded in word semantics as well - apparently in English in the very particle for the future tense, "will". In Bulgarian, for example, the particle for the future tense of the auxiliary verb съм/sum (to be) - "ще" (shte, "will") - sounds close to "shta" ("ща"), a word meaning "want" that is used in negating sentences:

- Не ща! - I don't want (it/you/something/this)!
- Щеш - не щеш, ще трябва да го направиш. - Whether you want it or not, you will have to do it!

Also, the explanation of one's own behavior, when other reasons are unknown, is usually reduced to "I did it because I wanted to!"

The "wanted" outcome is what's in the "intentions", and it is matched against what really happened.
I believe the match between "intentions" (the expected/predicted) and reality is a fundamental metaphysical concept of the Universe and of the mind. Check the "teenage era" works - the Control/Causality units.

-- To be continued -- 

(C) Todor "Tosh/Twenkid" Arnaudov
Twenkid Research - http://research.twenkid.com/wp/

Last edited: 19/2/2012

Other tags: Transhumanism, Трансхуманизъм, cosmism, космизъм, следващо, еволюционно стъпало, еволюция, мислещи, машини, thinking machines, artificial mind, technological singularity, сингулярност, пречупване, ера, трансхуманисти, transhumanists
