By Todor "Twenkid/Tosh" Arnaudov
* For the followers of the blog - that's not the paper announced in the previous post.
The expression and sharing of the following reflections and speculations was inspired by a posting for a position offered at the Faculty of Philosophy, Oxford University, published on the AGI List (thanks to Sean O'Heigeartaigh).
See the pdf document, the research foci on p. 2, section 5:
5. This is a research position focused on topics related to the long-term future of machine intelligence and AI. Relevant research foci include:
- Studying paths towards strong AI: Monitor the current state of progress in the field; identify milestones, roadmaps, etc.
- Machine intelligence: Analyze fundamental concepts—e.g. how to define and measure general intelligence in artificial systems; how to distinguish different kinds of goal-seeking systems.
- Self-modifying systems: What can be proved about the performance and capabilities of different kinds of recursively self-modifying programs? Can a framework be developed in which demonstrably safe, recursively self-improving AIs could be constructed, with stable and human-friendly goal systems?
- The role of big data: What impacts and applications will arise from the increasing availability of enormous data sets? What are the fundamental tradeoffs between processing power and data? Is there a minimum amount of data that an arbitrarily powerful intelligence would need in order to effectively deal with various task domains? Can one quantify how much information the human brain contains at birth to enable it to develop general intelligence? Can one quantify how “difficult” it was for such a system to evolve?
- Philosophy of computer science and of AI: What does it mean to implement a computation? Would a “Boltzmann Brain” or a “Swampman” be conscious?
- The control problem: How could one ensure that an artificial superintelligence would be safe and beneficial?
- Can one analyze classes of possible utility functions and construct general statements about the behavior of expected utility-maximizing superintelligence based upon utility functions of a particular class?
I also aim to match my theoretical speculations and insights with all accessible kinds of neuroscientific, evolutionary, cognitive-science, behavioral etc. evidence I'm aware of at the time. Often I discover such evidence later. For example, ~10 years ago, while not being as familiar with neuroscience as I am now, appropriate reflections and a basic understanding of developmental psychology allowed me to induce the existence and functional distinction of the hippocampus and the neocortex using only behavioral evidence. See: http://artificial-mind.
Teenage Theory of Mind and Universe (Excerpts, Part 1) - Theoretical Induction of Neocortex and Hippocampus using Consciousness; Compression
Regarding the philosophical problems, some of my oldest works include analyses of questions such as:
-- Why does a mind believe the soul is immortal and should exist forever?
-- How is death actually perceived from a thinking machine's and a human's perspective, and what are humans really afraid of?
-- What do humans actually (intuitively) mean by "free will", and why do humans insist they have it while machines would not?
-- What do the concepts of hell and heaven actually illustrate about the operation of the human mind? Why do so many religions have such concepts, how does a cognitive system invent them, and why are they so compelling?
-- Discussions about consciousness and what it is from different perspectives.
Regarding those sample directions in section 5, I'll use the opportunity to give some contrary statements from the point of view of thinking machines that would be smart enough to unveil our biases and our self-praising beliefs in humans' "undisputed" moral superiority.
Oxford: Machine intelligence: Analyze fundamental concepts—e.g. how to define and measure general intelligence in artificial systems; how to distinguish different kinds of goal-seeking systems.
- 1. Hutter and Legg's Definition of Machine Intelligence, and the Educational Test
There is serious work already done, see for example M. Hutter and Shane Legg's paper below. These are slides I've prepared from it for my students, with some additional notes of mine:
- 2. Maximum degree of value-unloadedness of the raw input and output of the system, and maximum resolution of perception and causality/control compared to the maximum possible resolution in the environment where the system exists - that's one definition of mine for the generality of artificial intelligence
Another simple/core measure of "generality" in principle, which I tried to discuss on the AGI List in the autumn of 2011, is the generality of the raw sensory input from which regularities are induced - that is, the value-unloadedness of the initial inputs and outputs, such as sensory matrices similar to human auditory, visual and tactile input.
The meaning of the initial cognitive data in such representations is just a sequence of numerical values and their coordinates in space and time - within the matrix, within the records of past states of the sensory matrices, and relative to some internal parameters. That's the most general (value-free) input, to which value - meaning - is further added, generalized and specialized.
The initial output, in humans and in a maximally general intelligence, is just as general: muscle actions - translation, rotation, propulsion. These operations alter the coordinates of the parts of the system relative to the environment. That's true also for the vocal tract and speech - the surfaces of the vocal cords vibrate, i.e. alter their coordinates at high frequency compared to muscles, and the motions and coordinate changes of the tongue, lips, jaws and larynx modulate the sound. The environment - "the real world" - is just the richest and lowest-level sensory input, the one with the highest possible resolution, the most details, the computationally hardest to model (predict).
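To make the notion of value-free input concrete, here is a minimal sketch (my own illustration with made-up names, not code from my theory): the "sensory matrix" is just numbers at coordinates (t, y, x), and a regularity - here, a rightward "motion" - is induced purely by comparing predictions against the raw values, with no labels or meanings built in.

```python
import random

random.seed(0)
T, H, W = 8, 4, 6  # frames, rows, columns of a toy sensory matrix

# First frame: raw numerical values with no built-in meaning - the only
# structure given is coordinates (t, y, x) and magnitudes.
first = [[random.random() for _ in range(W)] for _ in range(H)]

def roll_right(frame, s):
    """Shift every row horizontally by s columns (wrapping around)."""
    k = -s % W
    return [row[k:] + row[:k] for row in frame]

# Build a sequence where the content drifts right by one column per step.
frames = [first]
for _ in range(1, T):
    frames.append(roll_right(frames[-1], 1))

def abs_error(a, b):
    """Sum of absolute differences between two frames."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def best_shift(prev, curr, max_shift=2):
    """Induce a regularity: the shift that best predicts curr from prev."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: abs_error(roll_right(prev, s), curr))

shifts = [best_shift(frames[t - 1], frames[t]) for t in range(1, T)]
print(shifts)  # [1, 1, 1, 1, 1, 1, 1] - "motion" induced from raw values
```

Nothing in the code knows what "motion" is; the regularity falls out of the coordinates and values alone, which is the sense in which such input is value-unloaded.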
- The more initial value-loadedness, the less generality of the intelligence derivable from that point on
That's a common problem in narrow AI and NLP - to move forward, the working concepts of the theories need to start from a lower level of generalization, otherwise they start and finish as combinatorial equations with no grounding. See the series of critical articles: What's Wrong with Natural Language Processing?
Oxford: Self-modifying systems: What can be proved about the performance and capabilities of different kinds of recursively self-modifying programs? Can a framework be developed in which demonstrably safe, recursively self-improving AIs could be constructed, with stable and human-friendly goal systems?
Oxford: The control problem: How could one ensure that an artificial superintelligence would be safe and beneficial?
It seems obvious that any significant invention can be used either for good or for evil, but while thinking machines *could* only hypothetically turn into terminators, there is one of nature's "inventions" that has always been a terminator and has always been an unstoppable killer: HUMANS.
- Humans are the archetype of James Cameron's "The Terminator"
If anything could keep humans from killing and hurting each other, it would be one of the following:
-- a utopian society (a totalitarian one, for example...)
-- machines which are stronger and faster than us, able to monitor and to react quickly enough when needed
-- some sort of cyborg-like implants or environmental means of prevention which block a violator's brain or muscles etc...
-- or why not altering our "human" nature physically
In the current state of affairs, though, the human race should admit that all homicides are executed by humans and every person is a potential killer and criminal. History has proven for millennia that humans, or a lot of them, develop into greedy, violent exploiters and killers, may turn insane and aggressive for no apparent reason etc., and the major historical events are wars, exploitation, slavery and genocides.
- Two Camps/Enemies Fighting
Such emotions lead to wars and violence. Wise men are not warriors, but unfortunately the ones who have seized power throughout history have been the ones who had the force, aggression, greed, hatred, cruelty and power to do it. Wisdom is powerless against physical force.
- "Neurobiological philosophy"
* Racism and theories about the "inferiority" of races or ethnic groups or nations or whatever groups are at bottom just made-up formal excuses for applying force and/or measures in order to conquer, exploit, eliminate etc. (rage, greed, anger) - the stronger one wins. Many human acts are driven by low urges, and violent ones are of course among them. However, people also have higher cognitive functions which run in parallel, and these need and produce (invent) systematic reasons to explain the behavior as a whole - behavior which, in its gross directions, is largely driven by the primitive urges anyway. That's supposed to be related to the phenomena I've discussed recently in the following publication:
Rationalization and Confusions Caused by High Level Generalizations and the Feedforward-Feedback Imbalance in Brain and Generalization Hierarchies
** War machines don't require very powerful (general) intelligence to be effective - recognition of enemies and allies, aiming, navigation and transport are not that complex if it's just about destruction and defense, and if the agent is controlling a tank, for example. I suggest Hugo de Garis's book The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines - which I haven't read myself, though, and can't comment on.
Oxford: The control problem: How could one ensure that an artificial superintelligence would be safe and beneficial?
Another example I've given to my students:
- What if your fellow soldier was a machine, and your enemy - a human?
In fact soldiers "love" and create emotional bonds even with their guns, tanks and aircraft - what about intelligent and/or humanoid robots that look and act like them and protect them?
- Don't take Human rights for granted - they were not always here. Technology made them possible.
Technologies have allowed massive and fast inter-personal and inter-cultural communication, world-wide economic and cultural collaboration, a higher living standard, and higher levels of education and awareness about the world. These developments allowed more "friendliness" between different societies at all scales.
This is unlike, for example, the authoritarian one-directional communication/propaganda of the nationalist epochs some 100-200 years ago and of the 20th-century totalitarian regimes (the early advent of radio telecommunication, and the specific antagonistic political situation). It is also unlike the very narrow, localized and culturally heavily biased image of the world and of other cultures in the epochs without fast transportation, without a steady food supply, and lacking health care, education, knowledge and telecommunication, when societies were generally very fragmented, isolated, exploiting each other and separated into violently antagonistic camps at all scales.
- Machines are making us "humans" and "humane" as we see the terms today
Religious objection: Christianity etc. taught people to be good, to love their enemies and all people etc.. Technology makes us bad, greedy ("consumers"), it's a product of evil and the devil etc...
Unfortunately the beautiful suggestions to love all people etc. were not and could not be applied totally, and if Christian morals were literally applied today they wouldn't pass - take fornication, for instance, not to mention the darkest times of all churches.
There have always been wise people who have loved all people and were against violence and tyranny, but in reality - at a big scale - that was a utopia. Enormous numbers of wars and outrageous acts of violence were politically justified with words about faith, "morals", sacred goals, "god's commandments" etc., where in my opinion the root of it all was as simple as:
- The ones who don't support your vision and direction (the same as your party, your society, your nation, yourself), or refuse to obey your laws no matter what they are, don't subordinate themselves, don't agree etc. -- they can be tortured and killed without any mercy, because you have the power to do it, and because... the power is from... god.
As stated above, human violence is not about humans vs. non-humans, or one race against another, or one religion against another. It's just about me vs. you: "I am more powerful and I am right, thus you should obey, or you will be destroyed." In a word: egoism and brutal elementary animal instincts. (See below also.)
Yet another opposite POV on the same question asked in Oxford's list of research foci:
People take for granted that AGI should be safe and beneficial from their current point of view, and that the world should stay static as it is now. However, social and moral values change, and humans change and their values too - even without radical physical changes, such as getting modified into new biological species of transhumans, cyborgs or whatever...
- Safe and beneficial for whom?
- Humans believe they are the top of the Universe, because they are the top egoists in Universe
If anybody opposes, he's punished - everyone should be an altruist, meaning they should serve the needs of the society - yet another ego (a "super-ego"), satisfying its needs at the expense of exploiting the smaller individual egos.
In order to change the values of the super-ego to which a small ego belongs, an aggregate of small egos should collaborate and tune to the same waves to collect enough force. However, that would diminish their identities and turn them into another "super-ego".
This process goes on in living organisms, from cells to organisms and ecosystems, and in religious groups and political/state governments. The highest level of causality-control aims at keeping itself as it believes is "correct/stable/right" (its ego), while the lower-level components aim at serving their smaller "egos", but are subordinated by "force" and by the formation of local smaller "super-egos".
- Humans like to exploit and enslave
I guess that to society back then this view was more acceptable than it is now. Some would say "of course" - they are machines; they are built, not born; "they don't have a soul" or personality and individuality (see below on this "spiritual" topic) etc., therefore they should be slaves - that's fair, according to humans.
- Is it fair or moral to enslave a being if it outsmarts you and is behaviorally, cognitively and physically more sophisticated than yourself?
What if gorillas' or chimps' predecessors had been smart enough to recognize the rise of the Homo lineage, had managed to kill it off, and had forbidden "illegal genetic mutations" for the sake of gorilla welfare - because they had the power and will to do it, and because they had foreseen that "this next step in evolution can't be controlled and proven to be safe and beneficial" for them?
- If gorillas' and chimps' predecessors had measured the risks of human evolution, they would have killed our lineage millions of years ago - and would have been morally right for their society...
We defined morals to fit us, so this is moral - we consider ourselves "higher", or life as "sacred" (life is us; that's why it's sacred). We're "smarter", we're "more fit" - our measures and classifications are ones that match the principles of our social hierarchies: if you're on top, you deserve to have the ones below obey and subordinate themselves.
- Aren't children and humans built, too? Do they deserve equal rights to... older humans???
Everyone owes her existence "legally" to somebody else... In fact, to many others - I don't mean only family predecessors, but also the society which provided a secure environment for them to live in.
- Should then mother or parents (and society) enslave their children?
Indeed, the inferior position of children relative to their parents and to older people is perhaps responsible for the following visual psychological phenomenon: "Why shorter Stature and Lower camera angle are Unconsciously associated with "Inferiority"? Memories from childhood (Nature or Nurture)" http://artificial-mind.
- Another "right" for humans to put thinking machines at inferior position is the concept of "soul" as a sacred status-symbol, and the self-awareness and consciousness as some supernatural magical characteristics of humans, instead of comparing and studying them as cognitive properties/capabilities
The initial and essential meaning of the concept of "soul" from a computational perspective is just a generalized model of the sensory inputs associated with the initial perceptions of/associated with human beings.
The model is further extended to other agents and animals, because their bodies, faces, motions, sounds, behavior, interactivity etc. match or are similar to the initial template - perhaps also because they make one feel (or recall feeling) particular sensations associated with those associated with the first "template". This process is cyclically reinforced.
It probably starts from the eyes/eye contact and lips -- eyes are the most clear (self-contained), dynamic and simple early visual sensory pattern. See: Learned or Innate? Nature or Nurture? Speculations of how a mind can grasp on its own: animate/inanimate objects, face recognition, language...
- Nobody really dies while there are people who remember her, because the mental models of their physical bodies don't.
When somebody else dies, in order for this event to be perceived and evaluated as "death", there must be another living (running, functioning) mind. The models of "human" and "animate being" and the specifics regarding the dead man's soul are still online; the model is not deleted, and the mind can still imagine it - recall and predict/replay/generate plausible sequences of future "motion" or sensory transformations, as it could before the person died.
The mind/self obviously cannot imagine its own death with its own resources either, because it cannot operate if it's not operating. That probably explains why, when reflecting on being dead, the mind often imagines itself not as "not functioning", but just as its known self-model - imagined as not embodied, not feeling particular emotions, in another place etc. The general properties of the "self" are still kept, such as reflection, memories, imagination.
In fact, for a mind there's no perceptual difference between somebody going away (and never coming back) and somebody dying without the mind getting low-level or clear signs of it - in both cases the deceased agent is just not interacting anymore; there are no new low-level inputs to the mind associated/recognized as coming from the person previously recognized as the living person X.Y.
- The belief in the eternity of ideas, soul, spirit and so on comes from the impossibility for a mind to imagine itself not operating, using its own resources. A mind can perceive and imagine only the death - the cessation of operation - of other agents, and while it does so, it evaluates it with a running mind. The "running"-ness of the evaluating mind confuses the model of death.
- Another meaning implied by the word "soul/spirit" is of course close to consciousness and "qualia", but the lack of qualia can't be proven or disproven.
Humans may claim they have a soul, but a machine has not, because it's "just electronics", "just 1s and 0s", "a bunch of metal and semiconductors" etc., but people usually don't realize that a thinking machine that is intelligent enough may claim the same for humans:
"The Machine: Yeah, and your human emotions are quantitative, qualitative, spatial and temporal correlation of chemical substances - proteins, hormones, nucleinic acids etc. I can explain you all the details, but I'm afraid it won't be of any use for you, because your poor human brains won't be capable to cope with that complexity."*
As already suggested, the initial perception of "animated beings", and the generalization of a soul as a model of a human, initially comes from the actions and observations of our own body and senses, and from the perceptions of the behavior of our parents and the people who care for us, and it's strongly correlated with *us* as well.
Initially, babies don't recognize others as others; they associate their entire experience, all their actions and expectations, with their senses (visual, tactile, visceral, happy/sad - the latter correlated with the levels of oxytocin, serotonin, dopamine, adrenaline and other chemicals). The virtual generalized model of human beings gets imprinted and bound to our basic cognition and to our basic feelings and sense of ourselves.
- In regard to the man-machine relation, the "soul", and also consciousness as qualia, are a sort of "status symbol" for the ones who "hate machines" but don't have a better reason or can't understand well why. (Such as Mr. Searle; IMHO some of his claims are self-humiliating and ridiculous.)
- The "doing what they are programmed to do" vs "free will" is confused in many ways. For example:
A) It implies that humans have a soul or consciousness because they *don't always know or can't predict why they do what they're doing, with the precision they believe they would have if they didn't have free will*.
B) At the same time, humans justify and define their free will with examples such as "if I want to do something, I can do it".
Generally I suspect this "hatred" comes from the perception of machines as "inanimate objects" and "not humans" - "not like me, and I am the master". The attempts to justify it are rationalizations - see the already mentioned article: Rationalization and Confusions Caused by High Level Generalizations and the Feedforward-Feedback Imbalance in Brain and Generalization Hierarchies.
- Yet it is noticed that sometimes the precision is not high enough even for one's own actions (sometimes you may want something, but it may not happen), and especially when expecting and evaluating other agents' actions - one can't predict others' behavior that precisely. However, others are recognized as "similar, like me" - see the "soul" template above - and from their behavior/their model, implications are induced and reapplied to the model of the self.
- Besides, the "machine hater" applies his vague and inarticulate knowledge about computers and inanimate objects to thinking machines: Computers - you type in, it executes your commands. Programming - you type orders, it computes. Algorithm - it executes exactly as programmed. It's constructed of 1s and 0s. It works without mistakes, like a machine (too high a precision), etc.
The latter confusion is not surprising; the famous mathematician Lady Ada Lovelace, daughter of Lord Byron, made it too, but Alan Turing gave her a nice answer about 100 years later. See...
T. Arnaudov - Faults in Turing Test and Lovelace Test. Introduction of Educational Test. In Todor Arnaudov's Researches Blog, 17/11/2007
For (A), the precision is decided by a comparison between the precision of self-predicted actions (with the presence of proprioceptive feedback, the recognition of parts of the self etc.) and the acceptably lower precision of the prediction of other agents' behavior, with its lack of proprioceptive feedback (other people's bodies' future trajectories and utterances are harder to predict than our own body's; sometimes there are unpredictable "glitches", caused by their "free will" - meaning the unpredictable component of the model of their "soul", in the computational sense defined above).
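The asymmetry above can be sketched as a toy model (my illustration, with assumed names and noise levels, not a formalism from the text): the "self" channel has proprioceptive feedback, modeled as low observation noise, while the "other" channel lacks it, modeled as high noise; the larger unexplained residual for others is what gets labeled as their "free will".

```python
import random

random.seed(1)

def prediction_error(noise, steps=1000):
    """Mean absolute error of a predictor that always predicts the
    underlying intended action, observed through Gaussian noise."""
    return sum(abs(random.gauss(0.0, noise)) for _ in range(steps)) / steps

# Self-prediction: proprioceptive feedback keeps the residual small.
self_error = prediction_error(noise=0.1)
# Other-prediction: no proprioceptive channel, much larger residual.
other_error = prediction_error(noise=1.0)

# The residual for others exceeds the residual for one's own actions;
# that surplus unpredictability is read as the other agent's "free will".
print(self_error < other_error)  # True
```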
Going down to a lower, neural level, "will" is the execution of muscular actions, initiated by sequences of neuronal activity in the motor cortices of the brain, which are at the roots of the "intentions" in the neocortex, and perhaps the initial source of sample data for this kind of multi- and inter-modal match.
- Не ща! - I don't want (it/you/...something/this/)!
- Щеш - не щеш, ще трябва да го направиш. - It doesn't matter whether you want to do it or not, you will have to!
Also, the explanation of one's own behavior, when other reasons are unknown, is usually reduced to "I did it because I wanted to!"
The "wanted" outcome is what's in the "intentions", and it is matched against what really happened.
I believe the match between "intentions" (expected/predicted) and reality is a fundamental metaphysical concept of the Universe and of the mind. Check the "teenage era" works, the Control/Causality units.
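The intention-reality match can be reduced to a one-line sketch (my reading, with hypothetical names, not the actual Control/Causality formalism): an action feels "willed" when the predicted outcome and the actual outcome agree within tolerance.

```python
def match(intended, actual, tolerance=0.1):
    """A Control/Causality-style evaluation: did reality match
    the intended (predicted) outcome within tolerance?"""
    return abs(intended - actual) <= tolerance

# "I did it because I wanted to" = the predicted outcome occurred.
print(match(intended=1.0, actual=0.95))  # True: the action felt "willed"
print(match(intended=1.0, actual=0.3))   # False: "it didn't work out"
```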
-- To be continued --
(C) Todor "Tosh/Twenkid" Arnaudov
Twenkid Research - http://research.twenkid.com/wp/
Last edited: 19/2/2012
Other tags: Transhumanism, Трансхуманизъм, cosmism, космизъм, следващо, еволюционно стъпало, еволюция, мислещи, машини, thinking machines, artificial mind, technological singularity, сингулярност, пречупване, ера, трансхуманисти, transhumanists