For the handful of people who may care or wonder - I am up and running, just doing it in "stealth" mode.
Wednesday, November 12, 2014
Thursday, August 7, 2014
Abstract evolution - cybernetic, meta, cosmism. Turing machine and the Pure Apriori Conceptions. Emulation of universes and minds. Thinking machine vs Turing Machine - and much more... Continuation of the thread on G+. To be continued...
[Continues, see the previous posts]
Randall Lee Reetz
Randall* (see the note at the end), one could just say: prediction, compression (a shorter description for functionally equivalent items - it goes together with the former), preservation - though one should add "of what", for example of the "conserved cores" - see below. One should also add the increase of the range of prediction: higher precision, higher spatio-temporal range, higher certainty (see the references as well). The capacities for prediction, compression and preservation are encapsulated in spatio-temporal areas which grow both bigger and smaller - expanding resolution at both the micro and the macro scale - and the encapsulated "areas" (sub-universes) aim at getting more and more independent from the rest of the Universe. They create higher forms of physical laws (causality), built of sequences/systems of the lower ones which predict correctly, and they get ever more aware of them and more certain in their execution (that is the improvement of prediction/compression and the increase of the quantity of "real causality" - causality that acts in the lower Universe, external to the subuniverse, with the highest possible resolution of causality and perception in terms of that target, lower-level Universe). Etc.
All of these need elaboration and grounding, in order to be more than simple observations and claims, though, for example - see below.
That subject was even taught/presented in the first AGI university courses ever (known to have existed), 3-4 years ago - of which I was the author.
There is the synopsis, the course program and the opinion of Ben Goertzel: http://artificial-mind.blogspot.com/2011/03/mathematical-theory-of-intelligence.html
There are slides (mostly in Bulgarian): http://research.twenkid.com/agi_2011/
Vladimir Turchin on the Meta-System Transition - a cybernetic view of evolution: http://pespmc1.vub.ac.be/POSBOOK.html
Boris Kazachenko: http://meta-evolution.blogspot.com/
Todor Arnaudov: Theory of Mind and Universe ["teenage" is the time when it was conceived and first published by the author],
start with the introductory definitions about "Universe", "subuniverse".
Short presentation of some principles of the evolution of the Universe and intelligence in a lecture:
More theory and definitions:
Some of the original works:
Discussion on "Entropica" and general intelligence:
Various; e.g. a discussion on ethical issues of transhumanism
I suggest also some recent notes on the Meta-Evolution, regarding the conflict between preservation and progress and the materialized form of these processes in the bodies of male and female human organisms (in Bulgarian, though):
That is: male organisms are a materialization of the "progress" arrow ("evolution"), female ones - of the preservation of the existing; the behavioral differences suggest it strongly, as do the "settings" (the male genome has evolved on top of the basic female one - the Y chromosome). Due to the various levels of "objectivation of the Will" (levels of evolution of matter), it's not "flat".
On the "Definition of Machine Intelligence" by Marcus Hutter and Shane Legg (that's regarding the "bandwidth" problem as well) - slides taught in the AGI courses and the paper:
See also specifically the AIXI, their model for "Universal AI".
Well, the whole so called "dialectical materialism" is about abstract sense of the evolution, or as Engels/Lenin would probably say:
"Higher forms of motion of matter", where the abstract sense of "motion" is "dynamics", "change".
In general, you should define your terms distinctly and clearly, something that you don't, and you should understand and recognize common concepts in other participants or thinkers. Schopenhauer's "The World as Will and Idea" is also a theory about abstract evolution and the production of higher forms of "objectivation" of the Will, where his concept of "Will" includes aspects which are discussed below in the references.
If you were not ignorant/lacking in curiosity, you would have known that the Will is a special concept which refers to higher forms of Causality; "On the fourfold root of the principle..." (1813) is also about that. In my theory humans, or all Causality/Control units, or Virtual Universes at different levels, are all higher forms of causality as well.
The "Pure apriori conceptions" of Kant are Time, Space and Causality - they map to Turing Machine's/Random Access Machines clock generator, memory and instruction set.
One important aspect is, though, that the Pure apriori conceptions are *empty* of content. They are not *empirical* (that's one other reason why there's nothing "scientific" in your claims; I saw you cited Goedel - logic and axiomatic mathematics is not empirical).
Instruction set should be defined precisely, there also must be specific material - configuration of the machine memory, otherwise it's dead.
An empty Turing machine does nothing - it is the SOFTWARE that's interesting and that has enough of material for further analysis.
Here's another of your confusions - you "summarized" my first post so naively and wrongly, as if you were a dumb "keyword recognizer": you noticed "emulation", therefore "Turing machine"...
However, in order to emulate even just one computer with another, one Turing-complete device with another (not a sensory-motor thinking machine, a versatile limitless self-improver, an AGI), you have to have a lot of other things which are much more complex (by any measure - having more structural complexity and depth) and more interesting than Turing-completeness alone:
-- The software of the first
-- An EMULATOR
-- A way to transfer the software of the first to the second
A Turing machine alone, without appropriate software and other means cannot write this emulator.
So I'm talking about the capacity to write the emulator, which requires the capability to investigate and understand the other "Turing machines" - causality forces, sensory-motor data. That is, I'm talking not only about the "execution of instructions", but about understanding.
And yes, you could run AVX2 code on a 6502, but you first have to write the emulator, among the other requirements, such as sufficient memory.
An empty PC or an empty Apple][ cannot do it, and if something does understand the instruction sets and knows how to write the emulator and make the mapping, it's obviously much more than just mindlessly "Turing complete".
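To make the point concrete - that the emulator is itself software which someone must write, not a free consequence of Turing-completeness - here is a toy sketch in Python (the instruction set here is invented for illustration, not real 6502 or AVX2 semantics):

```python
# A toy "guest" instruction set: (op, arg) pairs operating on one accumulator.
# The opcode names are illustrative only -- not real 6502 or AVX2 semantics.
GUEST_PROGRAM = [
    ("LOAD", 2),    # acc = 2
    ("ADD", 3),     # acc += 3
    ("MUL", 4),     # acc *= 4
    ("HALT", None),
]

def emulate(program):
    """The emulator: host-side SOFTWARE that maps guest instructions to host
    operations. Turing-completeness of the host guarantees such a program can
    exist, but someone still has to investigate, understand and write it."""
    acc = 0
    pc = 0  # program counter
    while True:
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        elif op == "HALT":
            return acc
        pc += 1

print(emulate(GUEST_PROGRAM))  # (2 + 3) * 4 = 20
```

Without `emulate` - the written, understood mapping - the host machine's bare Turing-completeness does nothing with `GUEST_PROGRAM`.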
Accordingly, my theory refers not to "Turing machines", but to Virtual Universes, to Causality-Control units, to resolution of causality and resolution of perception; it aims at defining causation in operationalized and quantifiable terms - for instance true/complete causation and virtual/conditional causation.
It talks about universal recognizers and simulators of virtual universes - that's related to what others try to model with Hierarchical HMM or with their AIXI/UAI, algorithmic probability; etc. etc.
(No, "subuniverse" in my terms is not "local minimum" - it has more structure. "Local maximum/minimum", in Calculus sense, is used however as an aim for the Causality-Control units; the different ones have conflictint aims, coordinates of maximum reward, that's why maximum for one is a minimum for another one and it shouldn't be defined without depth. You should refer to the provided works.
See also: http://artificial-mind.blogspot.com/search?q=analysis+of+the+meaning
Part 1 (also in Bulgarian): (This Post) Semantic analysis of a sentence. Reflections about the meaning of the meaning and the Artificial Intelligence
Part 2 (also in Bulgarian): Causes and reasons for human actions. Searching for causes. Whether higher or lower levels control. Control Units. Reinforcement learning.
Part 3 (also in Bulgarian): Motivation is dependent on local and specific stimuli, not general ones. Pleasure and displeasure as goal-state indicators. Reinforcement learning.
Part 4 : Intelligence: search for the biggest cumulative reward for a given period ahead, based on given model of the rewards. Reinforcement learning.
//// Furthermore the published works I cite are quite old and introductory, they need elaboration and more operationalization.)
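For illustration, the definition in Part 4 - search for the biggest cumulative reward for a given period ahead, based on a given model of the rewards - can be sketched in a few lines of Python (the reward model below is a made-up toy; only the maximize-cumulative-reward structure is the point):

```python
from itertools import product

def cumulative_reward(plan, reward_model, state=0, discount=0.9):
    """Total discounted reward of a fixed action plan under a given model."""
    total = 0.0
    for step, action in enumerate(plan):
        reward, state = reward_model(state, action)
        total += (discount ** step) * reward
    return total

def reward_model(state, action):
    """Hypothetical toy model: action 1 'invests' (small cost, better next
    state), action 0 'consumes' (immediate reward proportional to state)."""
    if action == 1:
        return -1.0, state + 2
    return float(state), state

# "Intelligence" in Part 4's sense: pick the plan with the biggest
# cumulative reward over the horizon, given the model of the rewards.
horizon = 4
best = max(product([0, 1], repeat=horizon),
           key=lambda p: cumulative_reward(p, reward_model))
print(best)  # the winning plan invests early, then consumes
```

The exhaustive search over plans is of course only feasible for toy horizons; the referenced works (AIXI etc.) concern what replaces it in general settings.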
Overall again: a thinking machine, an AGI, a Versatile limitless self-improver (VLSI) is much more than a mere Turing Machine - see more elaboration of the difference in the previous posts.
Now a huge difference that arises is the following:
An "empty", "reset" Turing machine does nothing. Void. It's in "HALT" state. Dead.
On the other hand, a mind or an "empty" VLSI is never in "HALT" state.
The human mind can't even imagine being in a "halt" state - that would just be an unconscious one, as if it didn't exist.
If the mind of a Versatile limitless self-improver doesn't have material, it starts to synthesize some and to scan and search for structure in the environment - just like babies in the dark start to move their eyes and head around in search of light/contrast differences - something to catch the attention and to allow further investigation, recognition, memorization, generalization, prediction, acting upon/with, etc.
The Turing machine alone neither searches for anything, nor does it know anything.
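The contrast can be grounded with a minimal Turing-machine interpreter (a standard textbook-style sketch, not code from the referenced works): with an empty rule table - no software - it halts at step zero, having done nothing; only a rule table makes it act.

```python
def run_turing_machine(rules, tape, state="start", steps=100):
    """rules: {(state, symbol): (new_state, write, move)}; move is -1 or +1.
    Returns (final_state, tape_contents, steps_taken)."""
    tape = dict(enumerate(tape))
    head = 0
    for t in range(steps):
        symbol = tape.get(head, "_")   # "_" = blank cell
        if (state, symbol) not in rules:
            return state, tape, t      # no applicable rule: the machine halts
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return state, tape, steps

# An EMPTY machine: no rules, blank tape -> halts at step 0. Void. Dead.
print(run_turing_machine({}, []))      # ('start', {}, 0)

# Only with SOFTWARE (a rule table) does it do anything: flip 0s to 1s.
flip = {("start", "0"): ("start", "1", +1)}
print(run_turing_machine(flip, "000")) # all cells become "1", halts after 3 steps
```

The machine never searches for new rules or new tape material on its own - which is exactly the difference claimed above.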
The same applies for the Universe as a whole - it just does, it's the "blind Will", causal forces being applied.
And that's related to the thought I cited in the first comment:
"Where calculation begins, comprehension ceases"
Which refers also to "Symbol grounding", to the "Chinese room" thought experiment, and to the discussion regarding behavior in "On Intelligence" by Jeff Hawkins.
It refers also to the common lack of understanding of the concept of "grounding" (explained in my posts above) among many people from classical AI/programmers/logicians.
You don't seem to get it, even though I notice that you mentioned "building ontologies" and generalization, which is correct (and "ontology" is yet another philosophical term, in your "scientific" speech; your post about the "what of, what is" is poor/small scale philosophy as well).
A bit more of an expansion:
The capability to map specific experiences/other input correctly to known concepts/generalizations is called the "Faculty of judgement".
It's related also to the so-called "principle of homogeneity and principle of specification" - generalization and specification.
It means that you should generalize what's similar/common, but you should also discriminate accordingly and classify the differences into separate and distinct classes.
If you do not do this correctly, there will be confusion.
That's why you were criticized for the use of "evolution" with no specifications. That term alone was used and initially defined for *biological evolution*, which has its specifics.
Normally, abstract or non-biological evolution has been called "Progress" - that dates from before Darwin, and it's not a new discovery or concept - it was obvious to advanced minds even in the early 19th century (the industrial revolution gave insights).
As of the more distinct elaborations on cybernetic evolution or the abstract evolution or meta-evolution or meta-system transition - see above.
That's "scientific" material - grounded claims, containing evidence, examples, definitions of concepts, references, trying to make predictions - so long as this topic is "science", because as "meta-" and "abstract" suggest - it's also philosophy - speculative and more abstract/general than science, inter-disciplinary, a super science. Just like "metaphysics" or "meta-programming".
Indeed your confused "parable" is sophistry, I missed the "scientific" part in your comments.
For example you share no *evidence* - just undefined, general, dogmatic "claims", usually empty of content (like the Pure a priori conceptions).
BTW, if you try to refer to Ptolemy - he was actually an honest and convincing scientist (empirical), and he obviously used empirical evidence.
At the time of its conception Ptolemy's explanation was pretty persuasive - and no, it was before Christianity. The ancient world did have science and method; even pre-school children, 4-5 years old, are empirical scientists. It's just that their experience (sample data) and bandwidth are limited, so their hypotheses and theories are as credible and complicated as that allows.
See for example Jean Piaget dialogue with a child regarding what makes the wind blow. It's cited in the Developmental Psychology lecture in the course:
Here it is, for completeness:
Piaget: Who makes the wind blow?
Julia: The trees.
P: How do you know?
J: I saw them shaking their branches.
P: How does that make the wind blow?
J (shakes her hands in front of her face): Like that. However they are bigger and there are a lot of trees.
P: Who makes the wind in the oceans?
J: It blows from the Earth. No. From the waves...
Even Piaget didn't get this quite right - he takes "Animism" for the important aspect, because the child sees the trees and nature as animate, as having intentions, etc. - which in fact is not that wrong, for two reasons:
1. There is no evidence/alternative theory at the time,
2. As many abstract thinkers agree, the Progress creates higher forms of causal laws thus human will/intentions and the lower forms of causality are kindred and members of a common class (see "On the fourfold root...")
Thus, to me there's something else that's more interesting: it's that the child does have *EMPIRICAL* facts collected by *EXPERIMENTS*, and then she makes inferences based on these empirical observations, which are correct and consistent.
-- Shaking hands produces wind (experimental settings, experiment, results)
-- The feeling of shaking hands in front of one's face is similar to the feeling when there's wind outside
-- The force of the wind can be quantified/measured, and it depends on the size of the surface being shaken and the speed of shaking
On the next generalization:
---> Motion produces wind
* Branches of the trees with the leaves shake.
* Waves in the ocean move
* Bigger/wider objects cause stronger wind - as the wind in nature is stronger than at home
Therefore the trees and the waves produce the wind.
Respectively, if one looks at the *EMPIRICAL* data available to Ptolemy and the numbers/quantities known and in use by that time, one would see that he was a good scientist and didn't make wild guesses.
As for the process of discovering something beyond that - something of which the other scientists (limited empirical observers with worse generalization capacities) can't be convinced by evidence...
That requires a higher abstraction, a wider scope - going beyond the competing thinkers.
For example, the one who proposed that the stars are probably far away, that they are other suns with other planets and other living beings, was classified as a "philosopher" - that was Giordano Bruno, who died for it, killed by dogmatic idiotic apes.
And as is always the case with the most advanced and best minds - he was interdisciplinary: he was also a mathematician and an astronomer (he extended Copernicus' model), a genius, not a pedantic calculating machine.
Philosophy is the most general/abstract/speculative/meta- view on the rest of knowledge and theory.
Philosophy always appears after science - and your parables are nonsense; don't mistake religious nonsense for philosophy, and don't use it either... :)) That's why you can't provide ANY specific evidence, any name of a philosopher or a work, or a page in a treatise, or any historical fact to prove your nonsense - compare that to the writings of the "philosopher".
To name something "philosophy", it must be a higher generalization over something else; it needs material to generalize on, which is provided by empirical data - and as shown with the example with the child, even a 5-year-old has some form of a method for testing hypotheses; actually even a new-born has.
The only "sciences" without empirical data are ones such as Formal logic; some consider pure mathematics one as well - when it's based on purely axiomatic systems that do not understand/relate/care/know their grounding. Mindless calculation which loses detail and ground - which, for instance, is the reason for the silly "logical paradoxes" (such as Goedel's "mighty" theorem, proving the obvious: nature is not "formal logic". Some "scientists" still believe so, though.)
However "nature never lies" - the Universe is always "correct" and it always "says the truth". If you find a paradox or can't predict/map it correctly/completely - the fault is in your model/theory, not in nature.
* As for the addressing to Randall - the purpose of this and the previous posts is of course the (truth, depth, coverage, understanding, explanation, clarification, grounding, ...) of the subject matter. He's only an "instrument of the Will", "a soldier of fortune", as he himself notices about humans on the path of Progress (humans in the specific biological substrate) - a step in the whole process.
These writings, with some more unpublished expansions and details, and in a better-formatted presentation, would go into a complete work.
I am aware that the specific "soldier of fortune" to whom the message was formally addressed (it's just formally, let me emphasize) does not have enough "RAM, CPU" or attention span to grasp such elaborate studies as the above, so please let him not believe that he's that special.
It is the topic that deserves the attention, and some of the peculiarities and confusions of minds like his are worth being explained.
Tuesday, August 5, 2014
The Super Science of Philosophy and Some Confusions About it - continuation of the discussion on the "Strong Artificial Intelligence" thread at G+
On : ...
See the recent previous blog-posts from which it continues: ...
//By the way, I agree with some of your claims (but they should be elaborated with examples - grounds of reason, something that you do not do, unlike me - that's the purpose of the "many words", to build up images and context.)
So I actually do have "a theory" of intelligence and universe, which are going together, published prior to Hawkins' book, the trendy "Singularity" PR, the term "AGI", the "Deep Learning" popularity.
And one additional reason for people not understanding each other is our ape character - social ranking. That's one reason why few people would bother to check what the other has written - his theory etc., anything above comments of a few lines - unless he displays high social ranking.
One would read Kurzweil, or Hawkins, but not Todor Arnaudov - my theory is in some aspects "kindred" to Hawkins', yet published prior to his book - who cares. There are even worse cases - see below.
"Randall Lee Reetz
There is a big difference between philosophy and science. Philosophy only respects the thoughts we like to think. It's a mirror on self interest.
Randall Lee Reetz
Science is what we have had to invent and work at because of philosophy's obvious blind spots."
Sorry, but the above shows that you don't have a clue about philosophy, especially the rigorous kind. The "mirror of self interest" is exactly the opposite of systematic philosophy - it aims at being as objective as possible; even the term "objective" is used for "detaching from the Will" (in Schopenhauer's terms; that process is related to the Brahmanist and Buddhist notion of "losing yourself") -
the motive is to be as detached from the Will as possible.
The right philosophy aims to be grounded and explained, starting from the most basic and provable (as far as possible) grounds - start with the dissertation "On the fourfold root...", which is exactly about the grounding of truths as matching reality; as I mentioned earlier - 180 years prior to what is known as a seminal paper on that topic in the official "science" of AI. The line of Marx, Engels and Lenin is also about a scientific method of philosophy and a tight correlation of philosophy and science (see especially Engels); unfortunately it's "polluted" with politics/ideology.
Finally, the "we" who first invented and worked at science methodically were namely the philosophers... from the Greek ones to the Renaissance ones. Of course practice and theory, and philosophy and science, do and should go hand in hand - that's another way of talking about "sensory-motor grounding", related to the "Faculty of understanding" (Kant, Schopenhauer), the mapping between higher abstractions, lower abstractions and the lowest-level data that's empty of intrinsic meaning. Good philosophy sits on top of the sciences.
As of the "many words" of mine - that's one of the problems of understanding - incompatibly different bandwidths. It's not only about time, it's about the size of the buffers at the lower levels - working memory capacity, the Facculty of judgment/top-bottom connection within the cognitive hierarchy; differences in the capacity/access to lower level sensory data in various modalities; capacity to imagine/trace the visualization/materialization of the words into images; and of course - simple knowledge when if missing, and also the attention span in time. (See the other comments for more details)
Philosophy is about higher generalization, higher cognitive span - it's steps above science in generality and scope, respectively it's harder to be grasped or held in mind by some scientists/engineers, whose subjects normally require shorter/smaller span - "out of memory"/"lacking grounding data"/"insufficient transitory-buffer-capacity" (see my other comment and the answer after it's published), and it requires to know to what the abstractions refer, so it's supposed that you know the concepts and "mechanics" of the special sciences as well.
The good philosophers are also scientists and engineers and artists in one way or another - you should understand the special sciences/domains and search for the general between them. If you are specialist only in one field (or a few) you can't notice or care about the association to the others, the causal chains between them, that your field is in fact the same as some other fields, how your own field came to existence and why is different, etc. The data to make this inference is missing. Most people suffer from multi-interdisciplinary blindness and multi-modal learning limitations.
That's why many people ask questions whose answers are otherwise obvious - they scan the world with a spotlight in the darkness, instead of having a sun enlightening the whole view at once, and if there's a lack of memory to keep track while scanning, it might be a long journey of trial and error within the darkness until reaching the obvious.
Regarding your claim - there are philosophers like that: the sophists, perhaps some of the subjective idealists, perhaps also some servants of ideological needs, or just "imitators" - they mirror what the audience would like to hear; this is often blah-blah-sophy, not quality philosophy. Kant and Schopenhauer, for instance, were definitive AGI researchers, aiming at understanding intelligence and creativity completely, as much as the means and knowledge of the time allowed - unlike almost all of the official so-called AI researchers for most of that field's history, who were mere programmers or logicians or engineers or mathematicians, or a combination of some of the above, but were missing the grounding "glue". I would name a few who did have a clue about the glue: Vladimir Turchin and Alan Kay.
So what's your knowledge or rather *understanding* of philosophy?
In fact the modern scientific methods were first understood, proposed and formalized by philosophers - start with that fact; they knew better than the "scientists" what one had to do. Many of the typical scientists are rather "pedants", performers of what's prescribed (initially by an authority - the "biological" method of the social ranking of the apes which humans are); most people are like that, and that's why science in Europe was dead for a millennium in the middle ages.
It was dead due to bad philosophy and because - let's call them "scientists" (empiricists) - the more "practical" ones couldn't come up with a way to understand the facts, given the low-resolution, high-generality data they started with, and due to too much obedience to the bullshit of the authorities (and perhaps a more limited amount of working memory, compared to the quality philosophers who then came).
"Angels" and "deamons" ruled the world - it was empirically proven - if you were bad, the sacred forces of Good moved you to the stake and you got burned in order to save your soul! People were bad and sinful, they didn't follow the Commandments, that's why they got ill and died! Who ever needed a better explanation - it was proven empirically.
Then science was revived, not without the impulse of good philosophers, who are not pedants: they broke the existing fake dogmas, started to break the obedience to authority, and started to apply and suggest rigorous methods instead of sophistry and pleasing or serving the religious authorities. Francis Bacon had important works on the inductive method in science; Descartes paved the way to Calculus and started the idealist school of thought, which reached Kant and Schopenhauer - their philosophy is a theory of how to build an AGI, so long as you understand it and have enough working memory to keep their sentences in mind.
Indeed, I would claim that Kant was also an abstract mathematician. The "Turing machine" is some 150 years late compared to Kant's true early definition of a computer - namely Kant's emphasis on the a priori conceptions of time, space and causality. This is the most abstract definition of a programmable computer: it needs a clock generator ("time") - there must be changes, and at the lowest level they should be expressible in 1D (the lowest possible dimension, the simplest hardware); it needs memory (state, "matter") and causal laws for the changes to happen in a predictable manner, rather than having a random number generator* - that's the computer architecture, the instruction set, the most basic "physical laws", upon which higher forms are built.
* in fact random numbers are also not "random" and follow the law of their distribution; the probabilistic laws are also laws
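The Kantian triple can be rendered as the skeleton of a machine in a few lines (an illustrative sketch only, not a formalization of Kant's texts): the clock loop is "time", the state is "space"/matter, and the fixed transition rules are "causality" - the instruction set.

```python
# "Time" = the clock loop; "Space"/matter = the memory state;
# "Causality" = the fixed transition rules, i.e. the instruction set.
# An illustrative sketch only -- not a formalization from Kant's texts.
def run(memory, instruction_set, ticks):
    for _ in range(ticks):            # Time: a bare 1-D succession of changes
        for law in instruction_set:   # Causality: fixed laws applied each tick
            memory = law(memory)      # Space/matter: the state being transformed
    return memory

# One "causal law": each cell becomes itself plus its left neighbour.
def accumulate(memory):
    return [memory[i] + (memory[i - 1] if i > 0 else 0)
            for i in range(len(memory))]

print(run([1, 0, 0, 0], [accumulate], 3))  # [1, 3, 3, 1] - predictable evolution
print(run([1, 0, 0, 0], [], 3))            # empty of content: nothing ever happens
```

With deterministic laws the evolution is fully predictable; with an empty instruction set the conceptions stay "empty of content" and nothing happens - the point made above about the a priori forms.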
Schopenhauer extended this and made it even clearer - and that's all of the above: a model of the Universe, a model of a computer, a model of the simulation of scientific simulations.
Also, just to mention, regarding the talk on evolution here: his otherwise philosophical theory is about Evolution, which in the beginning was exactly that (that is: a more abstract/general/higher science than direct, purely empirical zoology). He discusses the evolution (development, emergence, "generatio aequivoca") of life, of the Universe and of the mind - several decades prior to Darwin, "the scientist".
- He induces his theory from the sciences, strongly refers and supports his claims with biology - zoology and botany - and all other scientific knowledge, available at the time;
- he defines the "anthropic principle" (not using the term);
- the "struggle" between the species, the crude forces of nature (the Will) that care more for the survival of the genus rather than the species and even less for the individual beings - however the individual objectivation of the Will, the individuals, struggling for their own surviving and fighting with the other forms of the Will;
- the fitness of the species to their surroundings and that they are mutually correlated and by the peculiarities of the environment one may induce what kind of animals would live there, and that in different places with similar climate and conditions live similar species, even though they were not exactly the same - because the evolution have lead the Will through similar obstacles and they had to survive in similar conditions);
- respectively that infers the survival of the fittest;
- the humans as higher than the animals only in the extent and level of "the objectivation of the will", but qualitatively the same (something radical for the 19th century, when egoistical humans were taught by the Western religions that animals "didn't have a soul");
- that the apes and other of the smartest animals are just one step below humans, they have similar "Understanding", but don't have "Reason";
- that the first human was probably born from the womb of an ape (i.e. not a human);
- that the first humans should have been black-skinned or dark, not white.
Let's visit also Marx. He is the proto-father of systematic Sociology and of modern Economics; his thought, as with Kant, Schopenhauer and Nietzsche, is also related to Systems sciences and Cybernetics, and together with Engels and Lenin they constantly searched for and displayed correlations between the more abstract and the more specific (sometimes honestly, sometimes ideologically).
Notice also one other detail. Only Kant among the above was an academic, but as Schopenhauer mentions - he was an exception, because he lived in the time of an enlightened monarch, who was a philosopher as well.
All the others were dissidents outside the universities, "non-scientific" according to the adopted values; some of them were viewed as "cranks" - Schopenhauer - or "insane" - Nietzsche.
That's it with the "bad" and "self-interested" philosophy, which is in fact a higher level of science and a "spawner" of other special sciences, because it sees more of the landscape**, which allows it to understand, explain and foresee things that normal scientists notice or "prove" decades or centuries later with their pocket spotlights and their smaller scope of view. The latter are driven more by the "blind will", the evolution that doesn't understand its aims and just "works". If they had a clue and understood the "bad" and "non-objective" philosophy, they might have made the same "new" discoveries much earlier.
** Multi and interdisciplinary researchers are kind of philosophers as well, they are "meta-scientists". Theoretical physics is also close to philosophy, it's speculative and the most general/abstract of its "sisters".
Working memory capacity, traces of thoughts ... - continuation of the thread on G+ "Strong Artificial Intelligence" [Continues]
Todor Arnaudov's answer to Randall Lee Reetz at ...
No theory at all? I don't think so. I'm sorry if some of the words/terms sound "embarrassing" - they are not "politically correct", but they are facts called by their true names. These are just measures of intelligence; emotions are what humans often see as the most important, though, and they lose the rest (not the method, but your "existential ...", if I rephrase you).
There's rather no theory in the short statements - where is it, in 10 words (what's their purpose at all)? It suggests a super short attention span, first of all. These statements don't have internal "physics" and structure, to name one thing, and scarcely refer to anything specific - it's just a term or two or some general claim.
My comment referred to a bunch of specific concepts and works (I could extend it), and there is even inter-sentence structure/comparison - for example the difference between the spans of intelligence, which is one of the reasons for misunderstanding/the impossibility of understanding - Einstein vs Schumacher.
As for the "too many words" - that's one of the reasons why some people don't understand each other: incompatible bandwidth limitations/impatience/grounding/experience/...; another one, implied by the above, is that they can't follow each other's thoughts.
The Turing machine is too simple a model; it's "exhausted" of material and obvious at a glance - more structure is needed, such as:
Randall: "a small intelligence advantage. - ... the question is why?"
The comment gives some answers, which, at the level of short verbal expressions (...), is what one can get.
So it's about:
- Working memory capacity (in the narrow cognitive-science sense that's the 7±2 thing; in a more general sense it is the span of the focus - the amount of data one can process in a chunk/a step/a moment, or can hold for longer for processing without referring to external sources);
- The extent of the connections between the levels of the cognitive hierarchy: the lossiness/how much is lost between the levels; the extent of the asymmetry of bottom-up to top-down; the availability of access to the different levels of abstraction of sensory records from the different sensory modalities (one who doesn't recognize musical tones cannot understand a melody, a chord or a symphony the way a composer who does can);
These connections are related to the hierarchy defined more than 200 years ago: Reason (concepts; most abstract, most general; executive function; serial, small capacity) - Faculty of judgement - Understanding (simulation of the causal forces of the Universe) - Input (external and internal).
All of the above is related to the capability of grounding and of understanding grounding - how much one operates with a model of the Universe rather than just doing blind calculations without understanding;
-- The faculty of imagination, related to the "visuo-spatial sketchpad" capacity, fantasy; that's connected with the WM capacity, the visual hierarchy and grounding as well.
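As an aside on the claim above that the Turing machine is "obvious at a glance": a complete simulator really does fit in a few lines. The sketch below is illustrative only - the rule format and the unary-increment example are my own choices, not anything from the discussion:

```python
# A complete Turing machine simulator in a handful of lines: all the
# machinery is a state, a tape, and a transition table.
def run_tm(rules, tape, state="start", head=0, max_steps=10_000):
    """rules maps (state, symbol) -> (symbol_to_write, move {-1,+1}, new_state)."""
    tape = dict(enumerate(tape))          # sparse tape; "_" is the blank symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, "_")
        write, move, state = rules[(state, sym)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example machine: a unary incrementer -- skip right over 1s, append a 1, halt.
rules = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", +1, "halt"),
}
print(run_tm(rules, "111"))  # three 1s become four: 1111
```

The point cuts both ways: the model is complete yet structureless - none of the faculties listed above (working memory spans, hierarchy levels, grounding) appears anywhere in it.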
">>a small intelligence advantage. ... the question is why?"
First of all, it may be not small but "on/off" - such as missing connections/"faculties" for learning in the appropriate sensory-motor data chain, making one incapable of understanding operationally, for example, painting or dancing or playing the piano. Humans possess "general intelligence", but only humanity as a whole, with all its technologies, has "versatile intelligence", or as I call it: versatile limitless self-improvement capability. Individuals do not have truly general (versatile) intelligence; there are bottlenecks somewhere in the brain that make humans unable to learn in some of the modalities, or inter-modally, beyond the most mundane levels, compared to the talented ones.
Second - due to the hierarchical/heterarchical nature of brain/mind, and also the working-memory limitations, there might be "firewalls", "valves" where conceptions cease to grow - for example, the mind cannot hold all of the required samples at the same time in order to see the causal relation.
Third - you're right that time matters; that's one of the dimensions of the scope/span of attention, both in the narrow sense related to consciousness and in the wider one - a topic that you may investigate and revisit for a year, a decade, for life, and see from new points of view over and over again. This is related to non-cognitive forces as well, such as lower biological needs, to distractibility, to the development of the myelin in your brain, to your life situation.
However, time doesn't help when there are "interrupted" connections (such as sensory-modality data) or insufficient working memory, which in the human brain is not unlimited.
For example, one reason why some people cannot understand Kant's thoughts is that they cannot keep some of the sentences in mind, or just keeping their vocal representation in the loop overloads their mind, so they can't also think about the implications, or see the connections to the following sentences or the past ones.
Again, Schopenhauer actually talked about that issue in the chapter on the essential limitations of the intellect in "The World as Will and Idea", volume two, ch. 15: the highest thoughts/consciousness/Reason are in time, sequential, and in order to think of something else the previous thought must be removed; only some traces should be left - ones that allow keeping the trace, the direction, the trend - while the rest goes deeper. If one doesn't have enough resources to keep the traces alive, to "bear them in mind", she would lose the point and would see the thoughts/sentences/concepts as "unrelated".
If that is applied to a long chain of operations, and if the results of the operations can be output and stored outside, so that they don't need to be borne in mind anymore and can run on their own without human understanding (technology, machines), or can be encoded in a cheaper and faster memory such as the premotor cortices/cerebellum as motor programs, available on demand without loading the precious resources of consciousness - then a huge abyss grows between people and apes, and between humans who are talented/trained and the rest.
Also, there are fields where you can go deeper while still keeping the "trace/connection" footprint low - in highly formal logical chains, where the truth of the rest is not questioned, or when you do calculations. Eventually it has to be mapped/connected to some grounds, like the axioms in Euclidean mathematics and the initial conditions of the problem, which are considered "obvious/proven".
So as long as you have enough memory to bear things in mind, you can run for centuries and produce new results; if you don't - you're left as an ape and the gap grows. Some apes or monkeys (I don't remember the species), for example, would warm themselves if they found a fire, but they would not throw pieces of wood into the fire to keep it going. Also, apes are known to use tools, but not to use tools for the creation of new tools - insufficient resources. Similarly, as children grow, their capacity to bear items in mind grows, as do the length and the complexity of the sentences they can produce or comprehend.
As for the citation from Einstein - let me push it back into the previous century. In "Parerga and Paralipomena" (if I'm not mistaken), S. emphasizes the fact that ordinary people have "very short thoughts"; in other works he also mentions that between a genius and a "blockhead" there might be an endless number of intermediate steps, but in essence the difference is only quantitative, in the extent: the ingenious ones see the world more distinctly and clearly and are able to focus/concentrate all of their mental energy on one spot; they can be "objective", detached from the "Will", the biological "self-interest" such as social ranking/status, money, sex etc. Average people cannot concentrate and think or analyze experience for its own sake; they are too concerned with their personal interest, which distracts them and keeps them from focusing.
As for some more "scientific" evidence on the working-memory stuff:
Kyllonen, P., & Christal, R. (1990). Reasoning ability is (little more than) working memory capacity?! Intelligence, 14.
The ubiquity of mental speed and the centrality of working memory. Reply to Conway et al. on Jensen on Intelligence-g-Factor.
A review of visual memory capacity: Beyond individual items and toward structured representations.
Visual working memory capacity: from psychophysics and neurobiology to individual differences.
In the abstract of the last one: "The capacity for simple visual features is highly correlated with cognitive ability."
Regarding time again - it's discussed in Hawkins's theory (in On Intelligence), in Boris Kazachenko's, and in neuroscience - the time needed for the sensory input to reach the PFC: images shown for too short a time, as in the editing of TV and modern action cinema, only "fly through" the mind and do not get critically analyzed or steadily remembered. See also the biophysics and the researchers on the "ADD" of current society, which is caused largely by watching television. Indeed, the claim that TV is a (legal) drug is made by "official" scientists; there are hypotheses that the rise of drug addictions in the '60s is due to the growing up of the first TV-bred generations. The fast-changing images promote novelty-seeking/dopamine "shortcuts" and addiction, and people become more susceptible to catching addictions from chemical drugs. If you do not believe that, see for example a summary of the research through the decades by a Romanian researcher: http://www.helikon.bg/books/153/-Телевизията-и-детето_153788.html.
My own theory relates the extent/level of understanding also to the amount of time (deeper vs. superficial - so "real-time physicist" or theoretical physicist etc.). That's the resolution of perception and causation in the dimension of time, respectively related to the level of generalization/the span of the data records, the prediction period into the future, and the levels of detail.
To reiterate something else on the "small advantage":
I believe "small" has to be defined better and more convincingly as a meaningful concept in order to make sense. In chaos theory it is often said that "a small difference in the initial conditions may lead to a big difference in the final state". "Small" and "big" sound definitive, but they are vague. For example, a textbook once said that "a 1 mm difference in the position of a sled may lead to a 60 m deviation at the end of a slope".
Big? It's just 60000 times the initial difference.
One may say the opposite: it's rather a small change, bearing in mind that the distance between two molecules is bigger than the difference between the initial and the final condition (in orders of magnitude), and that the final position obviously depends on the whole path and on all the obstacles and details that the sled encounters until arriving at the end. So it's not just the difference in the initial conditions (the relative position from a previous run); it's the whole situation, and the initial ignorance of the complete situation at an appropriate/sufficient resolution of detail, that leads to the apparently "big" difference.
Somebody with a wrong/unclear/low-resolution model gets excited about being unable to predict the results "as he thought he should have been able to", instead of criticizing/correcting/enriching/... his model.
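The sensitivity being argued about can be made concrete with a standard toy system. The sketch below uses the chaotic logistic map rather than a sled model (my own choice of illustration, not anything from the textbook cited above): an initial difference of 10^-9 grows by many orders of magnitude within a few dozen steps, until it saturates at the size of the system itself.

```python
# Sensitive dependence on initial conditions, illustrated with the
# chaotic logistic map x -> r*x*(1-x) at r = 4.
def gaps(x0, eps=1e-9, r=4.0, steps=60):
    """Track |x_n - y_n| for two trajectories starting eps apart."""
    x, y = x0, x0 + eps
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        out.append(abs(x - y))
    return out

g = gaps(0.4)
print(g[0])    # the initial gap: still on the order of 1e-9
print(max(g))  # the gap grows by many orders of magnitude
```

Note that the "big" final difference is still bounded by the system's own scale (the map stays in [0, 1]) - which is exactly the point above: "small" and "big" only mean something relative to the resolution and extent of the whole situation.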
And yet another answer to that arrogant and short-memoried guy, insulting me but giving food for a good post - enjoy! :D
Randall, I would rather suggest you do it. I had written and published a theory far clearer than yours (matching it in some points) when I didn't even have a moustache yet. However, yeah - I'm a far quicker typist than you, obviously...
http://research.twenkid.com/agi_english/ - see the slides
As for some more "scientific" evidence on the working-memory stuff, let me repost it in reply to your insult:
Kyllonen, P., & Christal, R. (1990). Reasoning ability is (little more than) working memory capacity?! Intelligence, 14.
The ubiquity of mental speed and the centrality of working memory. Reply to Conway et al. on Jensen on Intelligence-g-Factor.
A review of visual memory capacity: Beyond individual items and toward structured representations.
Visual working memory capacity: from psychophysics and neurobiology to individual differences.
In the abstract of the last one: "The capacity for simple visual features is highly correlated with cognitive ability."
You don't understand because you don't care, or, if you do, perhaps you have too short a working memory for that kind of presentation - one of the serious reasons why people don't understand/care for each other, explained in detail in the posts, as well as in the referenced scientific publications.
You mistake "science" for "whatever fits your limited working memory" (the extent that you can take in).
Yes, science (as well as philosophy, as well as evolution of the Universe in some of its aspects) is about optimization, compression, shortening.
However, there's a limit beyond which you start to lose detail and become too general - such that you cannot induce anything more from it, or you lack grounding.
There's no cell made of one molecule.
Some of your claims - you can find them in my theory (for example), among others - published a decade or more ago, by a kid.
However, they are too general and confused stated that way - that's something that John points out.
"Evolution" should be defined more distinctly. For example, I agree about the evolution of the Universe as a whole; however, at the same time John is right that there are "sub-universes" - the individuals, whose goals are not always synchronized with the overall trend, not all the time.
The individual is a sub-universe aiming at his own goals. And within an individual there are other sub-universes, which also have conflicting goals and struggle to overcome the others.
Boris Kazachenko has the concepts of a "conserved core" and an "adaptive interface", which are related to Vladimir Turchin's concept of "meta-system transitions". That's evolution in a more articulated form of expression, not in one sentence.
The overall results produce the evolution of life and of the Universe.
At a level higher than the individual there's the genus - which also has its interests, ones above the interests of the individual but below those of the evolution of life as a whole, or of the evolution of the Universe as a whole (humans and technology destroying living forms, species and genera being eliminated from existence).
Your claims alone, even when agreed with, are not operational in that form. You need to add more specific and "physical" - that is, causal - details in order to "run" them; something that needs more words than most people are used to taking in one "bite".
And one "philosopher" such as Schopenhauer produced 3500-4000 pages' worth of incremental and all-directional proofs of one single thought. You have - how much, half a page of words only.
Monday, August 4, 2014
Where calculations begin, comprehension ceases* - on understanding and superintelligent AGI. Todor's comment in "Strong Artificial Intelligence" at Google+
Comment on the thread: https://plus.google.com/u/0/+DuncanMurray_AU/posts/NeuVTZonghR?cfem=1
Yes, in too simple situations, or in ones where the possibilities for action are physically limited by other unalterable superior forces, a superior intelligence is not obviously useful. A genius in an empty solid room with no doors, no windows, no holes and no tools wouldn't invent better ways to escape than any normal guy, or a cockroach.
Also - yeah, most people, with their current biology, don't need technology beyond "baking a cake" (food, drink, sex, entertainment (legal sensory drugs such as TV), social approval) - people don't really care about technology. Fortunately, there were ones who did, otherwise we would still be dying in the caves.
Average people wouldn't see it as useful for the sake of its cleverness; it's not directly "practical" for an animal (sex, food, ...). It may actually be "anti-practical", because it will show humans that they are dumb, that "there's no need for human work, since it can be done/synthesized in 0.1 seconds by one machine" - and many may go mad about that; many of their recent values will lose their significance, and it will become obvious that they are not "magical".
I agree that one of the crucial fields for AGI is biology. There are a few other fields that will be important in the beginning, but only because they are easy - they will actually be "annihilated" and "eaten up" immediately, since they are obvious and trivial; they don't even need enormous computing power by today's standards.
It is also true that "normal" people usually don't need geniuses (including super-smart AGI), and a genius, as explained by Schopenhauer some 200 years ago, is usually useless to his contemporaries, because they can't understand him; they see him as a "crackpot", while the genius could see the other people as a bunch of silly "idiots" (and would be correct). As for Matt's example of Einstein as a possible crackpot to "most people" - do you bear in mind that most people have "learning disabilities": they can't learn even basic calculus, can't learn to draw, to juggle, to play musical instruments, to write a decent novel, can barely learn a foreign language or two - and would speak it terribly. They are "retards" in most of the creative fields, if measured honestly and compared to the "talented" ones. Of course, Einstein was a genius in theoretical physics, not an ingenious car driver, just as Michael Schumacher is an ingenious car driver (a "practical real-time physicist" with high-speed sensory-motor processing, high-precision sensory-muscular reactions, high-precision real-time trajectory prediction etc.), not an ingenious theoretical physicist (not real-time, higher-generality data, ...) - there's nothing hard in measuring or recognising this.
So why don't we turn the glass the other way around - from Einstein's point of view, most people are idiots and retards; they can't see and cannot learn what was obvious and trivial to him - while he completely understood and could match their inferior understanding of theoretical physics (of which most people don't even have a clue, beyond some obvious elements of classical mechanics, without formalization).
If someone is capable of recreating/understanding/explaining/tracing/unveiling/expanding/repeating... what you can do - if he is capable of emulating you, but you're not capable of emulating him - or if he does it faster, deeper, longer, more sustainably... - he's better (smarter, more clever, superior, ...) than you in that domain. Of course, as a low-level definition, such a natural-language one is too general.
In general, the above is related to the amount of working memory (and the access to particular types of sensory-motor memory), in a wider sense than the term from cognitive science, which even in its narrow sense is provably and obviously correlated with the g-factor.
Most people cannot understand some topics at all, can't get a clue and can't learn them (besides memorizing some fragments, terms, concepts, operations etc. by heart, which they can recite, perform, apply etc., but cannot continue, extend or optimize), namely because they are unable to hold the problem in their mind.
The ones who do not understand see random, unconnected fragments and cannot follow the thought process, except on the "tracks" that are memorized as sequences to be "replayed". They can't see the links; they do not understand the *PHYSICS* (the causality), the "intentionality", the "philosophy", the idea (in Schopenhauer's sense), and can't create an intuitive view of the domain/question ("intuitive" in the sense of Kant and Schopenhauer). That's why they don't see anything meaningful, or notice only "miracles", "radical novelty" or other nonsense (as in art, for the people who do not understand it and can't learn it), and the only way for them to pretend to understand such topics is to see some side effects, some results - which, however, is not understanding of the topic, but "noticing" a detail, or an "approval" in other, simpler and more trivial terms/domains/contexts. Or they follow straightforward logical chains - "if... then... therefore" - possibly long ones, but being allowed to forget the past steps, so that the process is simple enough to have a small amount of data within each single step. At the end they could reach the conclusion and check that it was "correct", but otherwise they could not see the whole picture at once, and that's not real understanding/comprehension.
As Schopenhauer said some 200 years ago of humans in such cases: "Where calculation begins, comprehension ceases" - in "On the Fourfold Root of the Principle of Sufficient Reason", which indeed is one of the true early seminal AGI monographs, among Kant's critiques and all of Schopenhauer's works. That was some 180 years before the often-cited "seminal paper" on "Symbol Grounding" (1990), and 170 before Searle's pathetic AGI "disproof" with his "Chinese Room" experiment (1980). Humans suffer from the "Chinese Room" - or, more properly, Schopenhauer's "words instead of thoughts" - issue themselves; they suffered from it severely centuries ago (in the philosophical sophisms). Reciting memories, reordering words without taking care of their meanings and connections, without holding those connections in mind; doing blind calculations without understanding why, what, for what etc. - that's what most people do anyway. And that is not understanding. From one point of view, even the brightest genius doesn't really understand the technology - he cannot hold the processing inside it in his mind in even the slightest detail, besides extremely tiny or general fragments at a time.
We would not be able to thoroughly, "really" understand the thought process of an AGI that is much smarter than us either - only the principles, or fragments, and it may look like "magic".
However, as for such "general" understanding of principles: even a layman "understands"/is capable of *saying* something about some of the most general and easily uttered principles behind computer science - "digital", "1s and 0s", "flip-flops", "it has memory cells where you store data", "it does what you tell it to do" etc. - while at the same time he may be unable to code even a trivial system of 40 lines of code, or to understand how a simple adder really works. So one may question whether he understands what he's talking about, or whether he's just a "talking machine", an advanced "speech synthesizer".
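For contrast, here is roughly what "understanding how a simple adder works" cashes out to in code - a half adder and a ripple-carry adder built from bare boolean operations (an illustrative sketch; the function names and the bit-list convention are my own, not from any source discussed here):

```python
# A half adder and full adder from bare boolean operations -- the kind of
# "simple adder" the text refers to.
def half_adder(a, b):
    return a ^ b, a & b          # (sum bit, carry bit)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2           # carry out if either stage carried

def ripple_add(xs, ys):
    """Add two equal-length bit lists (least significant bit first)."""
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))  # 5 + 3 = 8 -> [0, 0, 0, 1, 0]
```

Being able to trace why the carry propagates the way it does is the operational understanding in question; reciting "it's all 1s and 0s" is not.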
*"Where calculation begins, comprehension ceases" - Schopenhauer's thought from "On the Fourfold Root of the Principle of Sufficient Reason", 1813 (and later editions). From the English translation at https://archive.org/stream/onthefourfoldroo00schouoft/onthefourfoldroo00schouoft_djvu.txt
Translation into English by William T. Fee, 1903.
Also: "Calculation conveys no comprehension ... [it is] of merely practical, not theoretical value ... deals exclusively with abstract conceptions of magnitude, whose mutual relations ..."
To calculate therefore, is not to understand, and, in itself, calculation conveys no comprehension of things. Calculation deals exclusively with abstract conceptions of magnitudes, whose mutual relations it determines. By it we never attain the slightest comprehension of a physical process, for this requires intuitive comprehension of space-relations, by means of which causes take effect. Calculations have merely practical, not theoretical, value. It may even be said that where calculation begins, comprehension ceases; for a brain occupied with numbers is, as long as it calculates, entirely estranged from the causal connection in physical processes, being engrossed in purely abstract, numerical conceptions. The result, however, only shows us how much, never what.
And if Neo-Spinozans (Schellingites, Hegelians, &c.), with whom words are wont to pass for thoughts...
Germans are accustomed to content themselves with words instead of thoughts. Do we not train them to it from their cradle? Only look at Hegelianism! What is it but empty, hollow, nauseous twaddle!
Arthur Schopenhauer, 1813 (Bold - T.A.)
Monday, June 9, 2014
On the "Passing of the Turing Test" and the related Emotional and Social-Ranking-Authorities nonsenses
Regarding the recent news and the wrong tone, interpretation, focus:
http://longbets.org/1/ (notice the "emotions" and how they talk about them as something "mystical", while they are the dumbest, most obvious and not mystical thing)
See a commentary of mine (in the context of AGI list) or below:
See Also Ben Goertzel's comments in H+: http://hplusmagazine.com/2014/06/09/what-does-chatbot-eugene-goostmans-success-on-the-turing-test-mean/
Published also in the AGIRI list:
Todor Arnaudov's comment on the reported "passing" of the Turing test
The Turing test is superficial, based on wrong "social" settings, and somewhat of a "lottery".
This is societal nonsense, a reflection of what most people care about: social ranking, social ranking and, last of all, social ranking. Well, also "emotions" - another of the structurally dumbest things, which is present in all kinds of living beings all the time, even in the youngest ones, and is expressed in a few bits - thus it is the most platitudinous/banal and obvious.
That includes the high-ranked scholars, who perhaps don't have a clue about the analysis above but, as high-ranked individuals usually are, are firstly inclined towards social-hierarchy nonsense, and then additionally trained, as ones who hold the high ranks - they tend to fight to keep the status quo, which is in their favour.
The above holds for humans vs. AGI in general. Humans, even the dumbest ones, put their heads under the common hat of "humans" or the "human race", and they have to declare something: "humans are creative" or "intelligent" - even though they as individuals are not, and are banal and mediocre like the "emotions", which are obvious, a solved problem in any well-written literary work, dramatic piece, movie script etc.
In two words: political bullshit, sophistry and mass-deluding nonsense.
Even without deeper analysis, the verbs in the gushing articles about the Turing test are silly by themselves:
* "convincing judges": what is that, rhetoric? a trial? Or sophistry, lies, delusion?
* "fooling users that it was a human": lying, delusion
Indeed, that phenomenon is a reflection of the sick values of society. In the concept of "social intelligence", hypocrisy - the ability to delude, to manipulate, to lie and to exploit others - is of high value, and the fact that society pretends that it "doesn't like hypocrisy", that "lying is wrong", is part of the sick, hypocritical nature of humans, especially the ones who are higher in the hierarchy.
(And humans believe they are higher than animals, machines etc., thus they have "the right to decide" - instead of objective and obvious measures which do not require impersonated judges.)
True AGI is obvious
That is what has to be achieved - results that are obvious, not ones that require a "qualified" "judge"; and it should be so from the beginning.
It's trivial to recognize a machine if it really is a Versatile Limitless Self-Improver/AGI, if it doesn't lie, is not artificially slowed down, and is sincere and self-aware [truly knows about itself - a system that pretends it is a 13-year-old boy, a 15-year-old... a dog etc. is not self-aware, unless it knows that it is pretending and is playing].
Also, if it's too slow, too smart, too quick, too deep etc., it will be "different" compared to a human.
I myself am working on a machine that is to FAIL this silly test without taking it.
I guess I would fail it myself as a "non-human" - it depends on... the randomness of the jury, and on how convinced they were of the possibility that a human who could compose such long and complex sentences, if she wished, would be selected to participate in such a test.
The exact level of intelligence (compared to particular persons, ages, education, social groups etc.), in fine-grained terms and fine-grained measures, in each particular domain and in all combined, should be rather obvious from the performance of the system, without any artificial settings - as mentioned in 0.1.1.266 and later, everything is "a test of intelligence". It should be obvious directly, especially as "intelligence" is a vague term "without consensus on it", aggregating all kinds of perception, prediction, planning, decision making, search, goal-seeking behavior, reinforcement-learning capabilities, generalization and specification, increase of the resolution of perception and causation etc.
A test is needed when something is not obvious, when it's somewhat brittle, unknown - BUT some authority has to "approve" it, to "allow" it: a pretentious pedantry like the texts in patents, or the following joke:
We, the jury, declare that the applicant #124353485943, after passing the Turing test with 33.467%, is the first artificial intelligence! Now, according to the deputy of ministers of the Royal Society of the scientific council of the international organization ABCD, which has leading scholars from the top universities and research groups in the field, we allow that this machine be called "artificial intelligence", and we proclaim this date the birth of the true artificial intelligence. Amen!
By the way, that reminds me of the "birth" of AI, as it was solemnly proclaimed to be the Dartmouth conference in 1956 - because some high-ranked individuals from high-ranked universities declared it so.
See 0.1.1.203: "Man and Thinking Machine (...) " T.A., http://razumir.twenkid.com/
See 0.1.1.266: "Faults in Turing Test and Lovelace Test ..." T.A http://razumir.twenkid.com/
Regarding society's sick and ill-defined concept of "intelligence"
I suggest my latest major, super multi-/interdisciplinary sociological-philosophical-
It's a complete book of several hundred pages; it has many independent and interacting "processes" and many threads of thought. One of the threads related to the Turing test, besides that superficial social-ranking business, is another one from the same yard: subordination and obedience to impersonated beings are seen by society (both the higher-ranked and the slaves) as "intelligence".
The higher-ranked ones are "smarter" because of their ranking - they are "qualified". The clueless have nothing left but to cite the "qualified", since they cannot discriminate the nonsense by their own thinking and so adopt the authority's.
Dogs are complimented as intelligent only because they are servants, and are actually dumb enough. Here comes in also humans' intrinsic love of slavery - fixated in the culture of thinking machines from the beginning: Čapek's robots and Asimov's positronic slaves with their naive, inborn "morality", created to please the slave-owners.
According to society (and the not very cognitively/epistemologically intelligent), your master (or God, superior, boss) should decide even about your intrinsic qualities - because you yourself can't know; most people do not know. Thus everything is converted into "social ranking" and "rights", the only measures intelligible or acceptable to the majority of people. A monkey/primate thing...
And there we are, back to the Turing test - only a glorious jury, elected by the Royal Committee of the Lord of Old-Willinshere North Dillignshane, Knight of the Lord of the... Bullshit - should decide whether a machine "has intelligence" because it passes the Turing test, the value of which is approved because "Turing was one of the pioneers of the new field, contributor of... amazing, extraordinary...":
- missing that this was the first test, by one of the pioneers - Turing himself might have disowned it had he lived longer; it was what first came to his mind, and he didn't have time to elaborate it - the noble society where he lived took that away;
- the test is perhaps biased by the society where he lived, with its particular manner of social-ranking relations/academic "approval" style;
- missing its nonsenses (which come out naked once one understands it).
The fixation on the first, inarticulate and naive test as a standard is, by the way, another illustration of the "authority-obeying citation" sickness in society, including in the speculative sciences - philosophy was in that role some time ago.
Talking and exaggerating bombastic nonsense without understanding that it is superficial or confused, and avoiding questioning it because "it is obvious" - actually, because it was approved by the unquestionable authority.
In sensory-motor generalization terms, that is lacking the faculty of judgement - as my "brother in mind" and his predecessor called the capability to connect the abstract and the concrete (in the terms of their translators into English).
That nonsense has deeply poisoned the world - Europe was in the darkness of partial or complete mental retardation and madness for some 1000-2000 years due to such blind, wrong, undoubted "truths", coming from power-seeking, "socially intelligent", social-hierarchy-climbing religious ideologists and rulers, whose concept of "intelligence" was "social intelligence", master-slave relations; that is: political, social-hierarchy-related and "emotional", instead of epistemological and cognitive - "rational", of the Reason, related to the "faculty of judgement", "objective" (in the terms used in the English translations of Kant and Schopenhauer).
* Someone may suggest adding "ethical" alongside "political" in the last sentence - not really, because cognitive, epistemological intelligence also includes ethics.
My philosophical brother of mind Arthur Schopenhauer elaborated on this topic before me, and Nietzsche wrote on it as well.
Ethics, too, can be primitive and superficial, or deep and elaborate. Simple "emotions" and social-ranking relations of subordination because somebody holds a higher position - with no questions as to why he holds it and whether he should - require less intelligence.
Dogs and the dumbest people are clever enough to "understand" that kind of "ethics", while the individuals with the highest cognitive/epistemological intelligence often have "low" intelligence in this vicious and corrupted sense - even though they clearly demonstrate that the nonsense of society is far clearer to their minds, they can predict behavior, they know why things are as they are, and they thoroughly understand society and humans. However, they refuse to obey, which is the major dog-like way to express "intelligence" in such silly ethical (political) systems.
Wednesday, May 28, 2014
РАЗУМИР | RAZUMIR - First issue: "What does a man need? If you play by the rules, you will lose the game. Part I. Nice guys finish last"
A huge analytical and entertaining work, a so-called "multigraphy" or "heroic dissertation" (see note 132). The first part consists of three parallel parts.
If taken apart by sections and the like, many dozens or a hundred articles could be extracted from it; something of the kind will come in the future - reading aids and more.
The Appendix and the Supplements to item 242 contain continuations on the topics of machine creativity (computer creativity) and "Universe and Mind" (a conception of universal predetermination), such as the Deep Consciousness Hypothesis, which connects and further clarifies my speculations with those of Jeff Hawkins and with the philosophy of Kant and Arthur Schopenhauer - the meaning of "intuitive". And so on and so forth... The work is an entire book.
Like and support RAZUMIR (РАЗУМИР) on Facebook:
I am aware that at the moment there is an approximately zero suitable readership (at least a human one) that could appreciate it in its entirety, and these philosophies and sociologies have been swallowing attention away from the more functional work on the tasks that I keep piling up in heaps reaching the sky.
I have to get down to them in earnest.
Friday, May 23, 2014
A funny spatio-temporal and thematic physical coincidence:
More serious stuff is coming, although it is very amusing to study and read as well - the first issue of Razumir.
One of the first works under its heading is the satirical novella, from which the picture comes - "I am not creative!".
The main work is a voluminous multigraphy - it is big: as a book it would be several hundred pages long, and it will have more volumes.
The word "multigraphy" was coined to mark that it is super multi- and cross-disciplinary.
Sunday, May 11, 2014
РАЗУМИР - Анонс за основната студия в първия брой на списанието на най-могъщата мисъл | RAZUMIR - announcement of one of the first major works to be published
Monday, April 21, 2014
Razumir - the Revival of one of the first AGI-related e-zines "The Sacred Computer" | РАЗУМИР - възраждането на юнашкото списание "Свещеният сметач"
Thursday, April 10, 2014
The Naked Intelligence of Average People is Going Backwards, and the Augmentation and Extension of Human Cognitive Capacity is a Matter of Degree
A comment of mine on a post that I encountered today, the full version of a partial one that I left there: http://www.singularityweblog.com/podcamp-toronto-2014-hole-ai-transhumanism-end-of-humanity/
It is addressed to the author of the presentation and the article; see the link.
I'd question the claim that exponential growth is hard to understand; it is rather a trivial mathematical concept (a geometric progression), that is, multiplication and sequences of multiplications rather than additions. The chess example is an illustration of where an inconsiderate decision may lead, and of the wisdom of the master.
Moreover, the human sensory system works on an exponential scale even at its lowest level, and multiplication is basic maths.
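The chess example mentioned above (the wheat-and-chessboard legend) can be sketched in a few lines of Python - a minimal illustration of a geometric progression, not tied to any particular presentation of the story:

```python
# The wheat-and-chessboard legend: one grain on the first square,
# doubling on each of the 64 squares - a geometric progression
# with ratio 2, versus a mere addition per square.

def grains_on_square(n: int) -> int:
    """Grains on square n (1-based): 2**(n-1)."""
    return 2 ** (n - 1)

def total_grains(squares: int = 64) -> int:
    """Sum of the geometric series 1 + 2 + 4 + ... = 2**squares - 1."""
    return sum(grains_on_square(n) for n in range(1, squares + 1))

print(grains_on_square(1))   # 1
print(grains_on_square(64))  # 9223372036854775808 (about 9.2 quintillion)
print(total_grains())        # 18446744073709551615 (= 2**64 - 1)
```

The "inconsiderate decision" is visible in the numbers: an additive scheme (one extra grain per square) would end with 64 grains, while doubling ends with more grains than the world's wheat production.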
I'd also question the "intuitive" truth that everything is accelerating - one important thing that is not, and is in fact going backwards, is the intelligence of average/ordinary people when they are "naked".
Dumb people with smart technology can now perform more sophisticated tasks than some clever men in the past, but that is only apparent: the intelligence is in the machines and in the other clever (or already "augmented") humans who have provided and accumulated it there. [There was some article about that, with a time-traveler and a woman with a smartphone behind a curtain.] People mostly just click buttons and tap on colourful icons fighting for their attention. "Sixteen-core" smartphones with 4K cameras and 4 GB of RAM - used to take pictures of meals or 100 shots of the night at the disco, or to send an 80-character tweet with an opinion: "Listening to ... Feeling amazing". That's the sad "progress" of average people's intelligence...
For example, most computer users, even engineers, do not have enough talent to conceive or design a computer from scratch, even one like the ABC or UNIVAC; they just know how to push buttons the right way.
Nowadays average people still can't play musical instruments, draw decently, dance decently, write stories, or do lots of otherwise trivial activities which talented people have done for thousands of years. These skills still appear "magical" to ordinary people - they just cannot get it. [See "multidomain and inter-domain blindness" in this blog and in the AGIRI list discussions.]
That is about to change, but the intelligence will be in the machine anyway: if you create something with the click of a button and do not understand it deeply enough to model it with other means, it is the technology that does the job.
Technology, namely the dopamine-related short circuits that shock the prefrontal cortex through exposure to television, the cheap reinforcement-learning reward cycles of social media and computer games, and all sorts of blinking random pieces of images and text online - all these make the majority of people ever more superficial, with ever shorter attention spans.
The questions which appear to them as "ground-breaking", "unanswered" or "scandalous" were clarified and scraped down to their conceptual bones long ago, and not only (or not at all) by the "VIP" figures that you mention; these ideas are neither that new nor that revolutionary.
Humans are already "augmented" - every technology extends their capacity, and the "physical" merging, and when it starts, is a matter of degree, both spatio-temporal and of effectiveness. The boundary is not that sharp either, and it is artificial: the retina is considered part of the brain, and the lenses of the eye and the pupil also do "preprocessing" - projecting and focusing. The only obvious "selfness" of the receptors in the body is that they contain living cells, but without stimuli the receptors do nothing. For example, it is said that deaf-blind people do not even try to explore the world if left untouched (physically). They just freeze and stay still - there is no external sensory stimulation, and their cognitive capacity is useless.
There are also philosophical views, such as externalism and the "extended mind", which point out the obvious fact that humans are tightly bonded with the "tools" in the external world. We have been using the environment to do cognitive jobs at all times - I'd say that sensing itself is a form of basic preprocessing; in philosophical terms, it is the converting of the "thing in itself", in the Kantian sense, into phenomena. The degree and the way of doing it are changing, and making people say "wow", that something is "new, revolutionary, ground-breaking", is a trick from marketing and propaganda - a boring way to grab the attention of people who don't really care about the essence, but only about anything shocking, "new", "extraordinary", provided by some high-status "prophets". I assume talks about aliens, UFOs or some religious mysteries may provoke similar interest...
The "consensus" among the wise men working in that theoretical field - for example on "what is human" (what "should be considered" human, apparently, or what humans think is human, or why humans want to be "special" and how they rationalize it, etc.) - is another topic.
Reaching a consensus is impossible between people with hugely varying intelligence; the majority wouldn't even get the real questions. It is rather about political science, public relations, pleasing the audience or scaring it - as explained above.
IMHO the deepest works will never reach the minds of those who are not smart enough, and superficial discussions and making people "involved" with the subjects are not understanding - they are just impressions, well, a sort of... "techno-impressionism".
Current average people, and also most of the "clever" ones, still do not understand centuries-old philosophy, or science and maths (take Calculus, for example); they may only see some illustrations, flashy expressions of the deep truth from such works, but they will not *understand* the "mechanics" behind them - they just "feel" something emotional, or a shadow, or recite some words related to it.