By the way, recently I was "scandalized" by yet another "ground-breaking" new book [1] with a "new" hypothesis that is published at least 10-15 years too late for kids like me, or for some "cranks" back then... Good job by the "real" researchers from the famous and powerful institutes - they are finally starting to get it. WOW and LOL...
Do you believe in the VIP futurists' predictions of 2029 or 2049 or whatever "advertising-like" figures?
I don't.
I'd say they will be off - similarly to the above 10-15 year delay of these "ground-breaking" books, and, in my opinion, to the nonsense of the Oxford experts' predictions regarding how secure different professions are, depending on their likelihood of being automated in the near future. [2]
...
Regarding the timing concerns, I feel that I am "late".
As Alan Kay is quoted to have said:
The best way to predict the future is to invent it!
By my estimates there could have been thinking machines, at least at my level of versatile intelligence*, at least 5 years ago, possibly earlier - had the talented ones had the opportunity to express their talent in a timely manner back in the 90s, and had they taken up appropriate positions in society.
* Except in physical/agility domains which require particular mechanical bodies
* Actually, if it reaches that level, it should jump to the super level immediately. The system that I am building will be superhuman in all creative fields soon after the early booting/growing-up stage, when it will be like a little baby and then a child.
Well, sooner or later - it is a convergent process.
Wherever you direct yourself, all roads eventually lead there - as long as you are a versatile, limitless self-improver with enough memory and enough time to scan the space, and as long as you can "check off" the domains and coordinates you have passed through, draw and paint the entire map, and connect all the "dots" together.
Humanity as a whole - the whole civilization, including the technologies and all the resources, which are crucial innovators themselves (and have always been an essential part of the novel contributions) - has versatile ("really general", universal) intelligence. Individual humans, even ones who appear "very gifted", do not - not really, unless they are versatile.
IMHO all the necessary technologies are ready and waiting for someone to make good use of them and put them all together, and many processing operations which are usually considered among the highest "creative" ones in the arts and sciences are just obvious - to me at least, and in my opinion to anyone who has the talent, skill and experience in that art or field.
It's a problem when programmers or philosophers who are artistically "disabled" try to deal with it without understanding it operationally (being able to apply and practise the art) - hence one often-repeated piece of nonsense in one of the AGI forums: "the hard problems of arts, language, human behavior".
I always ask: so what exactly is hard, and about which art? I don't see anything hard in any art; it is all obvious. Sometimes it is obvious even in single examples - in every single frame of a movie, in each page or chapter of a novel, in any single artifact of a particular class - and it repeats everywhere. That's why it's "general", and that's what makes the exemplars of that art a class.
These obvious things, however, are hard to grasp for ones who lack "senses" in the appropriate modality. I once tried to explain some of the obvious points in certain artistic pieces of work - for example caricatures, and how obvious that art actually is - to an AGI community on the AGIRI list. I don't know whether the ones who saw the point just kept silent, but I got only shocking answers about "radical novelty", how "amazing" human creativity is, and how "impossible" it is to be done by computers - from people who apparently were not creative in these domains and can't draw.
Human behavior - the same. People keep saying that emotions are something hard to emulate - ask an animator, a writer, an actor or a director. What exactly is hard about expressing or emulating plausible emotions, and sequences of emotions, in appropriate contexts?
Language semantics? What didn't you understand? Open a dictionary, a book; see examples, a video, images, drawings; ask somebody; and last, but not least - think, search, experiment. If you fail to understand the meaning now, or if you don't progress with practice, it's your lacking or insufficient intelligence/talents/learning capabilities/memory capacity/... which prevents you from understanding the core "atomic" concepts, operating with them, and then moving forward, accumulating, and moving up and up and up - it's not the "hard problem of language, art, vision".
The random parts in all of the above, or the "irrational" ones, are the easiest ones - just pick a random one from a set of possibilities.
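To make that concrete, here is a trivial sketch in Python (the candidate values are made up, of course):

    import random

    # The "irrational"/random part of a choice reduces to sampling from
    # a set of possibilities; the candidates here are made up.
    candidates = ["variation A", "variation B", "variation C"]
    choice = random.choice(candidates)  # pick a random one among the possibilities
    print(choice)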
If you can answer the question of what exactly is hard, and go deeper and deeper, then the problem is solved. If you can't, and can't ask any further, then you apparently don't understand what you're talking about and probably lack the appropriate "instruction sets" of the mind.
All kinds of processing are available, and the computing power of the "normally" funded institutes and companies is excessive. Guys, it is really excessive!
People don't know how to utilize this monstrous power - except IBM with "Watson", maybe... :)
...
AGI on a PC
My ambitious aim is making AGI run on an average present-day desktop PC/laptop, even on a 6- or 7-year-old Core 2 Duo PC with just 3 GB RAM in 32-bit mode, a mid-range 2007-2008 GPGPU-enabled GPU, one or more web cameras and microphones, and access to the Internet. Of course, I realize I might be too optimistic, but in my less optimistic predictions a mid-range 4-core 2013 CPU with 32 GB RAM, say a Core i5 4570 or an AMD FX-6300, and a contemporary high-end GPU should make it anyway.
Sure, many believe that PFLOPS or whatever are required, but they have no real explanation of why they need them, besides some super-inefficient machine learning experiments that shoot flies with hydrogen bombs, or some nonsense estimations of the number of neurons and stuff like that.
Something else that comes to my mind - superhuman processing. You don't need to render photorealistic graphics in 4K at 60 fps for human-level intelligence - humans are terrible at rendering anything. Computer graphics has been a superhuman activity since its birth, and it just goes further and further superhuman - most people could hardly draw a decent cube in perspective.
Human passive vision is, of course, more powerful. Humans do notice when something is not photorealistic, when it appears "wrong" - wrong illumination physics, shadow directions, reflections etc. - but IMHO that suggests how trivial and obvious it really is; more on this later.
Most tasks that are solved by supercomputers, or by any computers today, are intrinsically superhuman; the real problems of versatile intelligence are, in my opinion, trivial and easy once they are approached right - humans do not have PFLOPS.
All these fake "PFLOPS" inside the brain are eventually reduced to a few bytes of intentional output (and the intentions themselves are essentially a few bytes long), because most of these "PFLOPS" and "PBYTES" of "data" inside the brain don't really matter: they are not accessed, are not really data (not like bytes in a general-purpose memory), are not required under other conditions (a thinking machine doesn't need to balance 600 muscles), and are "there" due to inefficient design and lots of useless "recalculations" each time - there wasn't a better way to do it with proteins.
Surely the 2007-2008 machine won't see the world in real time at 60 fps in stereoscopic Full HD 1920x1080, but I don't see any computational reason why it couldn't see clearly and smoothly in real time at 15 fps at 160x120 while doing a lot of other things - using the CPU only - at least for some domains/visual cases, and in higher resolution and at a higher framerate for other domains and cases, or if a higher load is allowed. Of course, for some hard domains or cases it might be just 1 fps at 160x120, or 0.05 fps at 100x60, or even 0.001 fps at whatever resolution, when it has to decide something important in 1000 seconds.
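A quick back-of-the-envelope check in Python supports this; the per-pixel operation count below is an assumed figure for illustration, not a measurement:

    # Rough compute budget for low-resolution real-time vision.
    # ops_per_pixel is an assumption for illustration only.
    def vision_ops_per_second(width, height, fps, ops_per_pixel=1000):
        """Estimated operations per second at a given resolution and framerate."""
        return width * height * fps * ops_per_pixel

    print(vision_ops_per_second(160, 120, 15))    # 288 million ops/s
    print(vision_ops_per_second(1920, 1080, 60))  # ~124 billion ops/s

Even with a generous budget of ~1000 operations per pixel, 160x120 at 15 fps lands orders of magnitude below the peak throughput of a 2007-2008 dual-core CPU, while Full HD at 60 fps is GPU territory.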
...
Similarly, when doing tasks which for humans require heavy use of vision - due to the super-minimalistic amount of human memory of all kinds (don't listen to the nonsense about the petabytes of the human brain, when you can hardly remember 50 or 100 lines of trivial code, or a 10-digit telephone number) - that same aging computer could easily "see" and change the world at the equivalent of 1000 or 10000 or even 100000 fps (instead of just 5 or 1 or 0.1 fps for humans), because it can focus exactly where it should, find and see exactly and directly the item that it cares about, take it, do what it wants to do etc., using just a few instructions in a few nanoseconds or microseconds.
It doesn't have to make clumsy saccades with the eyes, blink, visually locate and click on icons, move the slow hands and fingers, press Ctrl-Space etc., then wait for a list to appear, then see the items in the list, decide whether to scroll down or up, move the mouse or put a finger on the wheel and roll it up or down, then click, see the change, read it, and so on.
Humans do so many slow and useless, cognitively "sophisticated" operations of vision, reading, character recognition, muscular coordination, various kinds of memory recall and executive functions, because they cannot optimize these operations by using more efficient shortcuts. The brain is so clumsy that whenever it has to deal with such symbolic data, it is bound to pass through slow operations, long hierarchical memory calls and muscular transactions, even for these utterly dumb and "mechanical"* operations.
* Regarding the philosophical semantics of "mechanical" - that's a topic of its own; I just mark it here.
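Here is a toy contrast in Python, with assumed step timings and hypothetical names, just to illustrate the asymmetry:

    # Toy contrast: direct access vs. a simulated human perceive-act loop.
    # The step durations below are rough assumptions, not measurements.
    symbols = {"frobnicate": "def frobnicate(x): ...", "main": "def main(): ..."}

    # Machine: one dictionary lookup - a handful of instructions, ~nanoseconds.
    definition = symbols["frobnicate"]

    # Human: saccade, locate the icon, move the hand, press Ctrl-Space, wait
    # for the list, read it, scroll, click, read again - each step costs a
    # fraction of a second.
    human_steps_ms = [250, 300, 400, 200, 500, 600, 300, 200, 500]
    print(sum(human_steps_ms) / 1000, "seconds for the human loop")  # ~3.25 s
    print("vs. nanoseconds for the lookup above:", definition)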
Indeed, that reminds me of a story that I accidentally recalled yesterday, while searching for something else in my old archives. It was an excerpt from an early book on AI by ~Donald Fink (Доналд Финк), first published in 1966. By the way, the book mentions IBM's early "Watson" and "Deep Blue" - an IBM 7094 playing checkers. That's not the story I mean now, though.
My point is about learning, and about a species of wasp that stings crickets, lays her eggs in the still-living prey's body, then finds a hole in the ground to put it in, leaves it near the entrance of the hole, goes inside the hole to see whether it is safe, then comes back out and drags the poor cricket inside.
If the cricket was moved by the experimenter while the wasp was inside the hole, the wasp would always drag it back only to the entrance, leave it there, and then go and check the hole again. And again, and again.
That's the same thing the "amazingly adaptive" brain does for many tasks. No matter how many times you look at the code, you will not remember it by heart, and you will always have to do a lot of laborious and otherwise useless operations in order to recall the details, if you do programming manually.
To recapitulate: humans need these "sophisticated" processes for simple tasks because of the non-sophisticated and quasi-general-purpose brain. It has general (versatile, multi-modality) input and general (covering the target 3D space) output - actuators - and a somewhat general "built-in" sound output (general enough to allow discrimination of sounds); it also does general prediction/compression, general comparison/discrimination/classification and, in general, "general generalization" - the best things that it does.
However, the processing of the data, the optimization of the processes, load balancing and so on are not as general as they are, for example, in a general-purpose computer, and the evidence shows that there are low-level "modules" - the expressions of genetic or epigenetic differences - which make some people talented with data in some modalities, while others are not. And in general, humans cannot learn and progress in all modalities beyond basic and poor levels.
The brain has versatility bottlenecks; it is only quasi/paradoxically/pseudo universal without external tools and engines, and it hits silly memory "walls", just like the wasp.
One of the elegant points of AGI is that it can adjust its resolution and span of search and understanding according to the current immediate goals and the available resources. Higher resolution is just a quantitative problem - it is not the substantial one; a versatile intelligence with more resources will work faster and reach further.
What is needed is an AGI that works at some meaningful and general enough resolution; the rest is just an upgrade of the hardware - which is excessively fast.
At least in my architecture, AGI intrinsically works at a constantly varying and adjustable resolution of perception, causality-control and attention span, which is varied both subjectively, by the machine itself, and objectively, by the specific sensory data and records that it encounters, recalls, searches etc.
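A minimal sketch of that adjustable-resolution idea, in Python - the names, the resolution ladder and the cost model are illustrative assumptions, not my actual architecture:

    # Hypothetical sketch: pick the finest perception resolution whose
    # estimated per-frame cost still fits the time budget of the current goal.
    RESOLUTIONS = [(100, 60), (160, 120), (320, 240), (640, 480)]

    def frame_cost_seconds(width, height, ops_per_pixel=1000, ops_per_second=2e9):
        """Estimated processing time of one frame (assumed cost model)."""
        return width * height * ops_per_pixel / ops_per_second

    def pick_resolution(time_budget_s):
        """Finest resolution that fits the budget; falls back to the coarsest."""
        best = RESOLUTIONS[0]
        for w, h in RESOLUTIONS:
            if frame_cost_seconds(w, h) <= time_budget_s:
                best = (w, h)
        return best

    print(pick_resolution(0.01))  # tight budget -> e.g. (160, 120)
    print(pick_resolution(1000))  # "1000 seconds to decide" -> the finest rung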
As for the excessive resources - did you know that the other competitors in this race now have tens of millions of dollars in investor funding, and some have hundreds of millions and potentially billions?
One thing that has been severely slowing down everything for the "competition", though, and that has prevented AGI from coming into existence years ago, is the multi-/inter-domain blindness and the inappropriate "division of labour". Somebody may understand machine learning and the best new techniques, or solve highly abstract differential equations, yet be unable to play a little blues on the guitar, or make smooth dance moves, or be so unskilled in drawing that a talented 5- or 7-year-old is better. What's the problem with your brain, man??? That's not versatile intelligence.
My roadmap?
My roadmap is several months behind my own schedule from last year, due to nonsense distractions and wrong decisions/bad efficiency in some tasks, and also because of some meaningful "other things", such as some major social science works (a big and funny one in Bulgarian, still unpublished) and music making, which was inspired by the process of writing that work and is part of it.
It could have been more efficient, but it provides some data for the analysis of introspective and creative processes.
The "roadmap" is also a flexible thing, some shortcuts or alternives or already-made tools are constantly being discovered, tested, experimented, adopted; or some sub-projects get postponed or receive more focus than expected.
Versatility gives advantages, such as huge sources of ideas - I try to see clues everywhere - but it also has side effects, such as distractibility even when doing "meaningful" things, since you seem to be able to improve your skill and understanding in every direction. The latter effect also causes a "livelock": too many tasks, all of which are doable and part of the whole, so you want to solve and understand them all, which makes the prioritizing system suffer. Prioritizing is much simpler if one has a single sharp talent and is poor in the other, potentially distracting fields.
If I could be the leader of a multi- and inter-disciplinary team, things would be different, too, but that would be in some other universe.
For example, lately I've been working on a new satirical absurdist story; its genre is probably that of a novella, because it has too many words for a short story. It is both very serious and deep, carrying audacious messages against social nonsense and hypocrisy, and very funny; the main character is a 7-year-old boy.
Besides its artistic and literary/stylistic aspects and its humour, its topics and the process of its creation depict, relate to, and are used for the analysis of creativity, linguistics, socio-linguistics and language development (why and how the Bulgarian lexicon has changed due to specific international and social-ranking-related "natural" laws of language development - an old interest of mine), and various general nonsenses of human societies and norms, both worldwide and local.
This work also has funny illustrations, in a specific style appropriate for the story, which are drawn and painted by myself.
By the way, drawing, painting and writing are some of the things that I need to get done much faster - I want to do them in the blink of an eye, in order to be able to realize all the creative projects that I have collected in my "drawer" through the years - and I am working on this; it is in the roadmap.
Indeed, there's one very simple insight, suggested by versatility, that I recalled after telling the above... I'll save it for the introduction of some demo later.
[A.A.B.S.M.D.T.A.O.W.]
(...)
You probably know about "CALO"; I have my own "CALO", with a very long history of its conception and first incarnation (the "comprehension assistant"/intelligent dictionary Smarty) - a research/cognitive/everything accelerator, yet far from the shape that I want it to take.
I have been using little "embryos" and experiments from those old ideas for many years; however, the implementation has started to grow like bamboo and has been improving my productivity in recent months. It has lived in different environments - there are prototypes in Java and in C#, and tools in C++ as well. It will shine when I complete some of the Virtual-Machine-related milestones, which actually go much further than just a VM.
There are also general software R&D decisions and integration stages that have to be completely sorted out before it becomes a real beast within my so-called "software infrastructure".
I have also been thinking about and improving my overall methodology of working; for example, I like some "low-tech" physical/mechanical tools.
Could I be more specific?
Should I do it in an "informal essay"?... Some ideas are so obvious and trivial (if you do understand them), but the best presentation requires context. Anyway, for now I consider the best way to protect them is to keep them private, and to first show the outcomes of the application of my ideas - the "side effects".
One of them is my improved productivity. Then I may show complete systems, which can be protected at least as public evidence and can't simply be taken as "anonymous ideas from 'informal essays'".
There are some directional works and digests/compilations of discussions of mine with some added notes, which I've been writing for publication, and one interview which got too big; I've decided to withhold them for now. The "interview" may appear later, as well as some social science publications.
For example, some simple novel insights - another elegant point of view on creativity, and a definition of it - which I wrote down back in late 2012 as an answer to an article related to Schmidhuber's creativity works, and which I claimed is connected to my earlier claims regarding creativity and compression in my works from the early 2000s (browse the late 2012 posts here).
I saw flaws in those general hypotheses, though, and wrote a lot of ideas down, but it grew too big, and I left the paper in the "drawer"...
(...)
A working title of this specific work is ("encoded"): L.P.O.T.I.A.C.A.O.
I want to push the software infrastructure to the key points of integration, though, and with its support produce some related data and software in an easy and smooth way, and also generalize those ideas further after closing the sensory-motor feedback loop. I could do it "the hard, conventional way" to some degree - it's possible to post some preliminary version of the ideas anyway; I'll see about that later.
When?
Many basic-level modules of my software infrastructure are done, almost done, or already in use (but not yet very convenient); or done in one way but needing to become more modular or to be redone in another environment; or conceived long ago but waiting for their "time slot" to be allocated by my "operating system dispatcher".
Some tasks/experiments/directions for exploration, experimentation and implementation were conceived, briefly sketched, designed and scheduled some time ago - some of them more than a decade ago - and are waiting for the supporting technologies to be fully developed, in order to allow their full realization to happen more easily.
This is one of the phenomena that postpone some technologies: I could develop them and test some hypotheses "conventionally"; however, I know that I could create them - and 10 or 50 other projects besides - elegantly and in a breeze once I have developed the more general technology, which would take its time, though.
Some aspects need a few more components to be finished - including some of the above, or ones assisting them - in order to start serving their full purpose; for example my custom Virtual Machine, which is more than yet another VM: its creation also has other research and engineering goals, experiments, paths and challenges.
Overall, a whole lot of things are "one hand away" and "almost done", or done but not yet well integrated - but they soon will be, once they collect the appropriate amount of attention span.
Some of the implementations immediately increase my productivity and decrease distraction and context-switching overhead, which is significant given my manner of work and my situation.
Human-Computer Interaction is obviously an important direction, as I mentioned 6 years ago; however, it is always connected to more general-purpose directions, which are there to simplify the HCI development, just as HCI is there to simplify them.
...
After completing what I've started, up to the milestone points, I foresee that a significant boost in productivity is possible in all the domains where I operate, coming possibly in a few months. That means, for example (some items which come to my mind immediately; the list is not complete): all creative arts in all sensory-motor modalities (from simple drawing to complete movie making - from basic editing to visual-effects synthesis and compositing; music composing, arranging and performing; creative writing and editing; everything conceivable); social sciences, linguistics, socio-linguistics research, language learning/acquisition, comparative linguistics; general education; NLP, NLG; intelligent and more efficient search; faster input of any kind of data, faster comprehension of everything, faster operation with anything; philosophical research; theoretical neuroscience - philosophical-cognitive-psychological-... connected with a general theory of intelligence; general research of anything; and of course a tremendous speed-up in general software design and engineering in any computer language and environment - that's something where I'm building up a HUGE boost, which is critical for my overall software infrastructure.
I also foresee that soon after these boosts, having the technologies that I need already available, I will be able to implement the first breakthroughs in the AGI prototyping process: my first complete "embryo" of a universal human-level and human-like thinking machine, a versatile, limitless self-improver.
Sure, one cannot predict all possible distractions and tactical adjustments, so it may be 6 or 12 months from now, or more in some worse-case scenario, but it is certainly approaching.
If I don't manage, the competition may - and it is not only the companies and rich research groups in the USA which present themselves as working on a thinking machine.
Some of my software-engineering projects have competition in apparently "ordinary" computer-science and IT fields, but as a side effect of the common multi-/inter-domain blindness, many people from these more "engineering"-like areas, whose work is related to Artificial General Intelligence, do not realize that, and cannot see the big picture where their developments fit or could fit. Not yet, at least.
And let me finish this "exercise in English and writing" with another little insight:
Everything is about AGI. Every second of experience; every sensory record and every specific piece of data and structure from every scientific, engineering, philosophical, artistic, linguistic, social-science, sports, daily-life or whatever other domain.
EVERYTHING. As long as you do observe it, really understand it, and can fit each of these little pieces together in the multi-dimensional puzzle, in the big multi-dimensional picture. It requires that one can see it all - from the tiniest pixels and little details at the closest possible distance, to the overall look from longer distances and from different angles that expose all of the orthogonal dimensions.
...To be continued...
[1] Thanks to V., who notified me of the existence of that, yet another, "ground-breaking" book.
[2] I will tell later what I mean by this note.
2 comments:
Do you meditate, my friend?
If yes, how would you explain this process in humans in a syntactic manner?
If no, then sorry to bother you.
Hi abhi, thanks for your comment - it depends on what you mean by "meditating" and "syntactic manner".
If reflection and introspection count as meditation, then I do. Otherwise, I don't follow any particular spiritual movements or the like.
If the syntax can be multidimensional and can be spread in time and space (time being one of the dimensions), then I think that I could; however, Natural Language without addressing a real model of mind would not be appropriate - an artificial mind has to be built in order to allow the syntax to address the modules and what they do.
Also, the syntax would be a description of a process (a "serialization") - a big set of representations, flows, transformations, recalls etc.
...
If you mean something related to qualia or the like - I assume that there are things that cannot be measured or converted out of their concrete, real physical context; there are things that we could not know or represent other than by their lowest-level representation. I call this level the "machine language of the Universe".
However that phenomenon has two sides.
An electronic machine cannot "feel" exactly the way humans feel, due to its different physical substrate; however, for the same reason, humans cannot feel what a machine feels.
Claiming that machines cannot feel because they are not humans would imply that humans feel *because they are humans* - that is, because of their specific physical substrate and physical processes (or something religious or whatever).
If humans say that machines don't have qualia or feelings, and that machines' feelings are just "1s and 0s" or whatever, a thinking machine that is smart enough could answer in a similar manner: that human feelings are just "neurotransmitters, hormones, proteins, neurons, blood pressure", or likewise "1s and 0s" - just "excitation and inhibition" - and that humans are just "a piece of flesh".