A comment of mine on a post that I encountered today, the full version of a partial one that I left there: http://www.singularityweblog.com/podcamp-toronto-2014-hole-ai-transhumanism-end-of-humanity/
It is addressed to the author of the presentation and the article; see the link.
I'd question the claim that exponential growth is hard to understand; it's rather a trivial mathematical concept (a geometric progression), that is, multiplication and sequences of multiplications rather than additions. The chess example is an illustration of where an inconsiderate decision may lead, and of the wisdom of the master.
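The chess legend alluded to here can be checked with trivial arithmetic. A minimal sketch, assuming the classic version of the story (one grain on the first square, doubling on each subsequent square):

```python
# Geometric progression on the 64 chessboard squares:
# 1 grain on square 1, doubling each square, i.e. 2^0 + 2^1 + ... + 2^63.
grains = sum(2 ** square for square in range(64))

# The closed form of this geometric series is 2^64 - 1.
assert grains == 2 ** 64 - 1

print(grains)  # 18446744073709551615 grains in total
```

Nothing "hard to understand" is going on: the same multiplication, applied 63 times, yields a number beyond any additive intuition.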
However, the human sensory system is exponential even at its low level, and multiplication is basic maths.
I'd also question the "intuitive" truth that everything is accelerating. One important thing that is not accelerating, and is in fact going backwards, is the intelligence of average/ordinary people when they are "naked".
Dumb people with smart technology can now do more sophisticated tasks than some clever men in the past, but that is only apparent: the intelligence is in the machines and in the other clever (or already "augmented") humans who have provided and accumulated it there. [There was an article about that, involving a time-traveler and a woman with a smartphone behind a curtain.] People mostly just click buttons and tap on colourful icons fighting for their attention. "Sixteen-core" smartphones with 4K cameras and 4 GB RAM, used to take pictures of meals, or 100 shots of a night at the disco, or to send an 80-character tweet with the opinion "Listening to ... Feeling amazing". That's the sad "progress" of average people's intelligence...
For example, most computer users, even engineers, do not have enough talent to conceive or design a computer from scratch, even an ABC- or UNIVAC-like one; they know how to push buttons the right way.
Nowadays average people still can't play musical instruments, draw decently, dance decently, write stories, or do lots of otherwise trivial activities which talented people were doing thousands of years ago. These still appear "magical" to ordinary people; they just cannot get it. [See "multidomain and inter-domain blindness" in this blog and in the AGIRI list discussions.]
That is about to change, but the intelligence would be in the machine anyway: if you create something with a click of a button and you do not understand it deeply enough to model it with other means, it's the technology that does the job.
The technology, namely the dopamine-related short circuits that are shocking the prefrontal cortex through exposure to television, the cheap reinforcement-learning reward cycles of social media, computer games and all sorts of blinking random pieces of images and text online - all these make the majority of people ever more superficial, with ever shorter attention spans.
The questions which appear to them as "ground-breaking" or "unanswered" or "scandalous" were clarified and "scraped" down to their conceptual bones long ago, and not only (or not at all) by the "VIP" figures that you mention; these ideas are neither that new nor that revolutionary.
Humans are already "augmented"; every technology extends their capacity, and the "physical" merging, and when it starts, is a matter of degree, both spatio-temporal and of effectiveness. The boundary is also not that sharp and is artificial: the retina is considered a part of the brain, and the lenses of the eye and the pupil also do "preprocessing" - projecting and focusing. The only obvious "selfness" of the receptors in the body is that they have parts that are living cells, but without stimuli the receptors do nothing. For example, they say that deaf-blind people do not even try to explore the world if left untouched (physically). They just freeze and stay; there's no external sensory stimulation, and their cognitive capacity is useless.
There are also philosophical views, such as externalisms and the "extended mind", which point out the obvious fact that humans are tightly bonded with the "tools" in the external world. We've been using the environment to do cognitive jobs at all times - I'd say that sensing itself is a form of basic preprocessing; in philosophical terms, it's the conversion of the "thing in itself" in the Kantian sense into phenomena. The degree and the way of doing it are changing, and making people say "wow", that something is "new, revolutionary, ground-breaking", is a trick from marketing and propaganda: a boring way to grab the attention of people who don't really care about the essence, but just about anything shocking, "new", "extraordinary", provided by some high-status "prophets". I assume talks about aliens, UFOs or some religious mysteries may provoke similar interest...
The "consensus" among the wise men who are working in that theoretical field, for example on "what is human" (what "should be considered" human, apparently, or what humans think is human, or why humans want to be "special" and how they rationalize that, etc.), is another topic.
Reaching a consensus is impossible between people with hugely varying intelligence; the majority wouldn't even get the real questions. It's rather about political science, public relations, pleasing the audience or scaring it - as explained above.
IMHO the deepest works will never reach the minds of the ones who are not smart enough, and the superficial discussions and making people "involved" with the subjects are not understanding; they are just impressions - well, a sort of... "techno-impressionism".
Current average people, and also most of the "clever" ones, still do not understand centuries-old philosophy, or science and maths (take Calculus, for example); they may only see some illustrations, flashy expressions of the deep truth from such works, but they will not *understand* the "mechanics" behind them, just "feel" something emotional or a shadow, or recite some words related to it.
Thursday, April 10, 2014
Human Naked Intelligence of Average People is Going Backwards and the Human Cognitive Capacity Augmentation-Extending is a Matter of Degree
Monday, April 7, 2014
Memory of "Conception of the Universal Predetermination 2" and of the battles with "university philosophy" | Memory of the epistolary Conception of the Universal Predetermination, Part 2 - from the Teenage Theory of Universe and Mind
A reminder of "Conception of the Universal Predetermination 2" - one of the foundational works of my philosophy and worldview, at the base of the work on the most essential endeavours that I am engaged in, which are ever closer to their spectacular completion.
Conception of the Universal Predetermination 2. At that time, 2001-2002, I was beginning to realize what the mind and human behaviour are, and in 2003 and the beginning of 2004 things became even clearer (see the following publications). Under a better turn of circumstances, in my opinion the insights could have been part of the mind of full-fledged universal thinking machines by around 2009-2010.
It is a passionate philosophical epistolary dialogue between my 18-year-old self and the experienced philosopher, writer and public figure Angel Grancharov, about 25 years older than me.
Unfortunately these were, of course, unrealizable fantasies, because the world suffers from the "poison of society" (something that I have written about on various occasions without using this expression, and will write about more pointedly later), and talent by itself has no value whatsoever, now just as centuries ago.
Lately I have been discovering that Arthur Schopenhauer's battles with the "university philosophy" of his time (150-200 years ago)** are close to my battles with the university pseudo artificial mind ("artificial intelligence") and with the misguided academic natural language processing (NLP) - see What's wrong with NLP... - which, like the mass of "official" AI, could not understand the sensorimotor origin of language and that one has to start from it*.
With the vanity fair, the pretentiousness, the servility and the blind service to "authorities", the fake "aristocracy" and the hypocrisy (in order to publish at conferences and be recognized, you must also be solvent or serve a rich master); the striving for quantity rather than quality, etc., etc.
To the epistolary work I also attach the following shorter dialogue, 10 years after "Conception of ...":
Dialogue between Todor Arnaudov and Ben Goertzel: http://goertzel.org, from 12/2012
- with a parallel translation into Bulgarian
Dialogue between Todor and Ben Goertzel (from AGI-List, in English and Bulgarian)
On "Five Principles in Developmental Robotics" - Matches of Todor Arnaudov's works from 2003-2004 to a 2006/2009 paper
A dialogue on "Five Principles of Developmental Robotics" - coincidences between claims from Todor Arnaudov's publications from 2003-2004, written as a teenager, and those from "official" scientific publications from 2006 and 2009.
To the list of articles in it I would also add a brand-new book by an Oxford philosopher, but I don't want to cite its name and give it publicity...
The dialogue could be extended with more topics, but that will happen when I finally collect, and decide to attach in bulk, excerpts from my comments from the AGIRI email list.
* Now, with about ten years of delay, they are starting to understand - with the help of all the other sciences, which make things ever more obvious and unambiguous even for ever more "blind" and limited researchers, and through the authority of some colleagues.
** Arthur Schopenhauer, "On University Philosophy", publ. "Z. Stoyanov", 2009.
*** See also http://artificial-mind.blogspot.com/2014/01/20-in-bulgarian-issues-with-like.html
Shortcomings of like/dislike voting in Web 2.0 and social media, and various defects in systems for public rating and ranking
Sunday, March 23, 2014
I'm Not Creative, So There! - an illustrated satirical novella | I'm not creative - an illustrated satirical novel
Have fun with my new work: I'm Not Creative, So There!
An illustrated satirical comic novella - funny, true, wise and "scandalous". The absurdities of the world through the eyes and adventures of Goshko, a bright 7-year-old boy. The main question in this work of fiction is a continuation of the journalistic sociolinguistic article on "Creative Idea-lessness"; see the publications from the autumn.
More info: http://twenkid.blogspot.com/2014/03/im-not-creative-satirical-novel-by.html
Saturday, March 1, 2014
What's going on? Everything! | Predictions | The Software Infrastructure | Creativity | Insights | L.O.P.T. - I.A.A.O.C
Guys, I am here, improving my "state of the art" and moving my implementations forward; however, I've been keeping the piles of tasks, plans, work, directions and roadmaps private.
By the way, recently I was "scandalized" by yet another "ground-breaking" new book with a "new" hypothesis, published at least 10-15 years later than kids like me or some "cranks" got there back then... Good job, "real" researchers from famous and powerful institutes - they finally start to get it. WOW and LOL...
Do you believe the VIP futurists' predictions of 2029 or 2049 or whatever "advertising-like" figures?
I'd judge them similarly to the above 10-15 year delay of these "ground-breaking" books, and similarly, in my opinion, to the nonsense of the Oxford experts' predictions regarding how secure different professions are, depending on the possibility of their being automated in the near future.
Regarding the timing concerns, I myself feel "late".
As Alan Kay is quoted to have said:
The best way to predict the future is to invent it!
By my estimates there could have been thinking machines, at least at my level of versatile intelligence*, at least 5 years ago, possibly earlier, had the talented ones had the opportunity to express their talent in a timely manner back in the '90s and taken up appropriate positions in society.
* Except in physical/agility domains which require particular mechanical bodies.
* Actually, if it reaches that level, it should jump to a super level immediately. The system that I am building will be superhuman in all creative fields soon after the early booting/growing-up stage, when it will be like a little baby and then a child.
Well, sooner or later - it is a convergent process.
Wherever you direct yourself, all roads eventually lead there - as long as you are a versatile, limitless self-improver with enough memory and enough time to scan the space, and as long as you can "check" the domains and coordinates you have passed through, draw and paint the entire map and connect all the "dots" together.
Humanity as a whole - the whole civilization, including the technologies and all the resources, which are crucial innovators themselves (and have always been an essential part of the novel contributions) - has versatile ("really general", universal) intelligence. Individual humans, even ones that appear "very gifted" - not really, unless they are versatile.
IMHO all the necessary technologies are ready and waiting for someone to make good use of them and put them all together, and many processing operations which are usually considered among the highest "creative" ones in the arts and science are just obvious - to me at least, and in my opinion to anyone who has the talent, skill and experience in that art or field.
It's a problem when programmers or philosophers who are artistically "disabled" try to deal with it without understanding it operationally (being able to apply and practise the art) - such as one often-repeated nonsense in one of the AGI forums: "the hard problems of arts, language, human behavior".
I always ask: so what exactly is hard about which art? I don't see anything hard in any art; all is obvious. Sometimes it is obvious even in single examples - in every single frame of a movie, in each page or chapter of a novel, or in any single artifact of a particular class - and it repeats everywhere; that's why it's "general" and what makes the exemplars of that art a class.
These obvious things, however, are hard to grasp for ones who lack "senses" in the appropriate modality. I have tried to explain some of the obvious points in some artistic pieces of work, for example caricatures, and how obvious that art actually is, to an AGI community in the AGIRI list. I don't know if the ones who saw the point just kept silent, but I got only shocking answers about "radical novelty", how "amazing" human creativity is and how "impossible" it is to be done by computers - from people who apparently are not creative in these domains and can't draw.
Human behavior - the same. People keep saying that emotions are something hard to emulate - ask an animator or writer or actor or director. What exactly is hard about expressing or emulating plausible emotions and sequences of emotions in appropriate contexts?
Language semantics? What didn't you understand? Open a dictionary, a book; see examples, a video, images, drawings; ask somebody; and last but not least - think, search, experiment. If you fail to understand the meaning now, or if you don't progress with practice, it's your lacking or insufficient intelligence/talents/learning capabilities/memory capacity/... which prevents you from understanding the core "atomic" concepts, operating with them and then moving forward, accumulating and moving up and up and up - it's not the "hard problem of language, art, vision".
The random parts in all of the above, or the "irrational" ones, are the easiest: just pick a random one from a set of possibilities.
If you can answer the question what exactly is hard etc. and go deeper and deeper - the problem is solved. If you can't and can't ask further - then you apparently don't understand what you're talking about and probably lack the appropriate "instruction sets" of the mind.
All kinds of processing are available, and the computing power of the "normally" funded institutes and companies is excessive. Guys, it is really excessive!
People don't know how to utilize this monstrous power - except IBM with "Watson", maybe... :)
AGI on a PC
My ambitious aim is making AGI run on an average present-day desktop PC/laptop, even on a 6- or 7-year-old Core 2 Duo PC with just 3 GB RAM in 32-bit mode, a mid-range 2007-2008 GPGPU-enabled GPU, one or more web cameras and microphones, and access to the Internet. Of course, I realize I might be too optimistic, but in my less optimistic predictions a mid-range 4-core 2013 CPU with 32 GB RAM, say a Core i5 4570 or an AMD FX-6300, and a contemporary high-end GPU should make it anyway.
Sure, many believe that PFLOPS or whatever are required, but they have no real explanation of why they really need that, besides some super-inefficient machine learning experiments that shoot flies with hydrogen bombs, or some nonsensical estimates of the number of neurons and the like.
Something else that comes to my mind: superhuman processing. You don't need to render photorealistic graphics in 4K at 60 fps for a human-level intelligence - humans are terrible at rendering anything. Computer graphics has been a superhuman activity since its birth, and it just goes further and further superhuman; most people could hardly draw a decent cube in perspective.
Human passive vision is, of course, more powerful. Humans do notice when something is not photorealistic, when it appears "wrong" - wrong illumination physics, shadow directions, reflections etc. - but IMHO that suggests how trivial and obvious it really is; more on this later.
Most tasks that are solved by supercomputers, or by any computers today, are intrinsically superhuman; the real problems of versatile intelligence in my opinion are trivial and easy, once they are approached right - humans do not have PFLOPS.
All these fake "PFLOPS" inside the brain are eventually reduced to a few bytes of intentional output (and the intentions are essentially a few bytes long), because most of those "PFLOPS" and "PBYTES" of "data" inside the brain don't really matter: they are not accessed, are not really data (not like bytes in a general-purpose memory), are not required in other conditions (a thinking machine doesn't need to balance 600 muscles), and are "there" due to an inefficient design and lots of useless "recalculations" each time - there wasn't a better way to do it with proteins.
Surely the 2007-2008 machine won't see the world in real-time 60 fps stereoscopic Full HD 1920x1080, but I don't see any computational reason why it won't see clearly and smoothly in real time at 15 fps at 160x120 while doing a lot of other things - using the CPU only - at least for some domains/visual cases, and in higher resolution and at a higher frame rate for other domains and cases, or if a higher load is allowed. Of course, for some hard domains or cases it might be just 1 fps at 160x120, or 0.05 fps at 100x60, or even 0.001 fps at whatever resolution, when it has to decide something important within 1000 seconds.
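The gap between those two regimes is easy to quantify. A back-of-envelope sketch, using only the resolutions and frame rates mentioned above (the point is merely that the low-resolution regime is orders of magnitude cheaper in raw pixel throughput):

```python
# Pixels per second for the two vision budgets discussed above.
full_hd = 1920 * 1080 * 60   # Full HD at 60 fps (stereoscopic would be 2x this)
low_res = 160 * 120 * 15     # the proposed real-time working regime

print(full_hd)               # 124,416,000 pixels/s
print(low_res)               # 288,000 pixels/s
print(full_hd // low_res)    # 432x cheaper at the low resolution
```

So even a modest mid-2000s CPU has hundreds of times more headroom at 160x120@15 than the "cinematic" figures people instinctively reach for.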
Similarly, when doing some tasks which for humans require heavy use of vision - due to the super-minimalistic amount of human memory of all kinds (don't listen to the nonsense about the petabytes of the human brain, when you can hardly remember 50 or 100 lines of trivial code, or a 10-digit telephone number) - that same aging computer could easily "see" and change the world at the equivalent of 1000 or 10000 or even 100000 fps (instead of just 5 or 1 or 0.1 fps for humans), because it can focus exactly where it should, find and see exactly and directly the item that it cares about, take it, do what it wants to do etc., while using just a few instructions in a few nanoseconds or microseconds.
It doesn't have to make clumsy saccades with the eyes, blink, visually locate and click on icons, move the slow hands and fingers, press Ctrl-Space etc., then wait for a list to appear, then see the items in the list, decide whether to scroll down or up, move the mouse or put the finger on the wheel and move it up or down, then click, see the change, read it, etc.
Humans do so many, so slow and useless cognitively "sophisticated" operations of vision, reading, character recognition, muscular coordination, various kinds of memory recall and executive functions, because they cannot optimize these operations by using more efficient shortcuts. The brain is so clumsy that whenever it has to deal with such symbolic data, it's bound to pass through slow operations, long hierarchical memory calls and muscular transactions, even for these dumb and "mechanical"* operations.
* Regarding the philosophical semantics of mechanical - that's a topic of its own, I mark it here.
Indeed, that reminds me of a story that I accidentally recalled yesterday, while searching for something else in my old archives. It was an excerpt from an early book on AI by ~Donald Fink (Доналд Финк), first published in 1966. By the way, the book mentions IBM's early "Watson" and "Deep Blue": an IBM 7094 playing checkers. That's not the story I meant now, though.
My point is about learning, and a species of wasp that stings crickets, lays her eggs in the still-living prey's body, then finds a hole in the ground to put it inside. She leaves the cricket near the front of the hole, enters the hole to see whether it is safe, then returns out and drags the poor cricket inside.
If the cricket was moved by the experimenter while the wasp was inside the hole, the wasp would always drag it only to the entrance, leave it there, and then go and check the hole again. And again, and again.
That's the same as what the "amazingly adaptive" brain does for many tasks. No matter how many times you look at the code, you will not remember it by heart, and you will always have to do a lot of laborious and otherwise useless operations in order to recall the details, if you do programming manually.
To recapitulate: humans need these "sophisticated" processes for simple tasks due to the non-sophisticated and quasi-general-purpose brain. It has general (versatile, multi-modality) input and general (covering the target 3D space) output - actuators - and a somewhat general "built-in" sound output (general enough to allow discrimination of sounds); it also does general prediction/compression, general comparison/discrimination/classification and, in general, "general generalization" - the best things that it does.
However, the processing of the data and the optimization of the processes, the load balancing and so on are not as general as they are, for example, in a general-purpose computer, and the evidence shows that there are low-level "modules" - the expressions of genetic or epigenetic differences - which make some people talented for data in some modalities while others are not. And in general, humans cannot learn and progress in all modalities beyond basic and poor levels.
The brain has versatility bottlenecks; it is quasi/paradoxically/pseudo-universal without external tools and engines, and it hits silly memory "walls", just like the wasp.
One of the elegant points in AGI is that it can adjust its resolution and span of search and understanding according to the current immediate goals and the available resources. Higher resolution is just a quantitative problem, not the substantial problem; a versatile intelligence with more resources will work faster and reach further.
An AGI is needed that works at some meaningful and general enough resolution; the rest is just an upgrade of the hardware - which is excessively fast.
At least in my architecture, AGI intrinsically works at a constantly varying and adjustable resolution of perception, causality-control and attention span, which is varied both subjectively, by the machine itself, and objectively, by the specific sensory data and records that it encounters, recalls or searches.
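To make the budgeted-resolution idea concrete, here is a hypothetical sketch; the function name, the cost model and the scale ladder are mine, illustrative only, and not taken from the architecture itself. It shows only the core idea: given a time budget and a per-pixel processing cost, pick the finest affordable perceptual resolution.

```python
# Illustrative sketch (names and cost model are assumptions, not the
# actual architecture): choose the finest resolution whose per-frame
# processing cost fits within the current time budget.
def pick_resolution(time_budget_s, cost_per_pixel_s,
                    scales=((1920, 1080), (640, 480), (160, 120), (100, 60))):
    """Return the finest (width, height) affordable under the budget."""
    for w, h in scales:                     # scales ordered finest to coarsest
        if w * h * cost_per_pixel_s <= time_budget_s:
            return (w, h)
    return scales[-1]                       # fall back to the coarsest scale

print(pick_resolution(0.02, 1e-6))   # tight budget  -> (160, 120)
print(pick_resolution(10.0, 1e-6))   # loose budget  -> (1920, 1080)
```

The same selection could be driven "objectively" by the sensory data itself, e.g. by raising the budget when the low-resolution pass is ambiguous.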
As for the excessive resources - did you know that the other competitors in this race now have tens of millions of dollars in investors' funding, and some have hundreds of millions and potentially billions?
One thing that has been severely slowing down everything for the "competition", though, and that prevented AGI from coming into existence years ago, is the multi-inter-domain blindness and the inappropriate "division of labour". Somebody may understand machine learning and the best new techniques, or solve very high-abstraction differential equations, while being unable to play a little blues on the guitar, make smooth dance moves, or draw - so unskilled that a talented 5- or 7-year-old is better. What's the problem with your brain, men??? That's not versatile intelligence.
My roadmap is several months behind my own last year's schedule, due to nonsense distractions and wrong decisions/bad efficiency in some tasks, and also because of some meaningful "other things", such as some major social science works (a big and funny one in Bulgarian that's unpublished yet) and music making, which was inspired by the process of writing that work and is part of it.
It could have been more efficient, but it provides some data for introspective and creativity processes analysis.
The "roadmap" is also a flexible thing: shortcuts, alternatives or already-made tools are constantly being discovered, tested, experimented with, adopted; some sub-projects get postponed or receive more focus than expected.
Versatility gives advantages, such as huge sources of ideas - I try to see clues everywhere - but it also has by-effects such as distractibility even when doing "meaningful" things, since you seem to be able to improve your skill and understanding in every direction. The latter effect also causes a "livelock": too many tasks, all of which are doable and part of the whole, so you want to solve and understand them all, which makes the prioritizing system suffer. Prioritizing is much simpler if one has one sharp talent and is poor in the other, potentially distracting fields.
If I could be the leader of a multi- and inter-disciplinary team, it would have been different, too, but that would be in some other universe.
For example, lately I've been working on a new satirical absurdist story; its genre is probably that of a novella, because it has too many words for a short story. It's both very serious and deep, containing audacious messages against social nonsense and hypocrisy, and very laughable; the main character is a 7-year-old boy.
Besides its artistic and literary/stylistic aspects and humour, its topics and the process of its creation depict, relate to and are used for the analysis of creativity, linguistics, sociolinguistics and language development (why and how the Bulgarian lexicon has changed due to specific international and social-ranking-related "natural" laws of language development - an old interest of mine), and various general nonsenses of human societies and norms, both worldwide and local.
This work also has funny illustrations, with a specific style appropriate for the story, which are drawn and painted by myself.
By the way, drawing, painting and writing are some of the things that I need to get done much faster - I want to do them in the blink of an eye, in order to be able to realize all the creative projects that I have collected in my "drawer" through the years - and I am working on this; it is in the roadmap.
Indeed, there's one very simple insight, which versatility suggests and which I recall after telling the above... I'll save that for the introduction of some demo later.
You probably know about "CALO"; I have my own "CALO", with a very long history of its conception and first incarnation (the comprehension assistant/intelligent dictionary "Smarty") - a research/cognitive/everything accelerator, yet far from the shape that I want it to be.
I have been using little "embryos" and experiments of those old ideas for many years; however, the implementation started to grow like bamboo and has been improving my productivity in recent months. It has lived in different environments - there are prototypes in Java and in C#, and tools in C++ as well. It will shine when I complete some of the Virtual-Machine-related milestones, which actually go much further than just a VM.
There are also general software R&D decisions and integration stages that have to get completely sorted out before it becomes a real beast, within my so-called "software infrastructure".
I've also been thinking on and improving my overall methodology of working; for example, I like some "low-tech" physical/mechanical tools.
Could I be more specific?
Should I do it in an "informal essay"? Some ideas are so obvious and trivial (if you do understand them), but the best presentation requires context. Anyway, for now I consider the best way to protect them to be keeping them private, and first showing the outcomes of the application of my ideas - the "by-effects".
One of them is my improved productivity. Then I may show complete systems, which can be protected at least as public evidence and can't just be taken as anonymous ideas from "informal essays".
There are some direction-works and digests/compilations of discussions of mine with added notes, which I've been writing for publication, and one interview which got too big; I've decided to withhold them for now. The "interview" may appear later, as well as some social science publications.
For example, some simple novel insights - another elegant point of view and a definition of it, which I saw back in late 2012 in an answer of mine to an article related to Schmidhuber's works on creativity, where I claimed that it is connected to my earlier claims, in my works from the early 2000s, regarding creativity and compression (browse the late 2012 posts here).
I saw flaws in those general hypotheses, though, and wrote a lot of ideas down, but it grew too big, and I left the paper in the "drawer"...
A working title of this specific work is ("encoded"): L.P.O.T.I.A.C.A.O.
I want to push the software infrastructure to the key points of integration, though, and with its support produce some related data and software in an easy and smooth way, and also generalize those ideas further after closing the sensory-motor feedback loop. I can do it "the hard conventional way" to some degree; it's possible to post some preliminary version of the ideas anyway - I'll see about that later.
Many basic-level modules of my software infrastructure are done, almost done or in use (but not yet very convenient); or done in one way but needing to become more modular or be redone in another environment; or conceived long ago but waiting for their "time slot" to be allocated by my "operating system dispatcher".
Some tasks/experiments/directions for exploration, experimentation and implementation were conceived, sketched shortly, designed and scheduled more or less long ago - some more than a decade ago - and are waiting for the supporting technologies to be fully developed, in order to allow their full realization to happen more easily.
This is one of the phenomena that postpone some technologies: I could develop them and test some hypotheses "conventionally"; however, I know that I could create them, and 10 or 50 other projects as well, elegantly and in a breeze once I have developed the more general technology - which would take its time, though.
Some aspects need a few more components to get finished, including some of the above or ones assisting them, in order to start serving their full purpose - such as my custom Virtual Machine, which is more than yet another VM; its creation also has other research and engineering goals, experiments, paths and challenges.
Overall, a whole lot of things are "one hand away" - "almost done" or done, but not yet well integrated. They soon will be, after collecting the appropriate amount of attention span.
Some of the implementations immediately increase my productivity and decrease distraction and context-switching overhead, which is significant in my manner of work and my situation.
Human-Computer Interaction is obviously an important direction, as I mentioned 6 years ago; however, it is always connected to more general-purpose directions, which are there to simplify the HCI development, and HCI is also there to simplify them.
After completing what I have started, up to the milestone points, I foresee a significant boost in productivity in all domains where I operate, coming possibly in a few months. That means, for example (some items which come to mind immediately; the list is not complete): all creative arts in all sensory-motor modalities (from simple drawing to complete movie making - from basic editing to visual-effects synthesis and compositing; music composing, arranging and performing; creative writing and editing; everything conceivable); social sciences, linguistics, socio-linguistics research, language learning/acquisition, comparative linguistics; general education; NLP, NLG; intelligent and more efficient search; faster input of any kind of data, faster comprehension of everything, faster operation with anything; philosophical research; theoretical neuroscience - philosophical-cognitive-psychological-... connected with a general theory of intelligence; general research of anything; and of course a tremendous speed-up in general software design and engineering in any computer language and environment - that is where I'm building up a HUGE boost, which is critical for my overall software infrastructure.
I foresee also that soon after these boosts, having the technologies that I need already available, I will be able to implement the first breakthroughs in the AGI prototyping process: my first complete "embryo" of a universal human-level and human-like thinking machine, a versatile, limitless self-improver.
Sure, one cannot predict all possible distractions and tactical adjustments, so it may be 6 or 12 months from now, or more in some worst-case scenario, but it certainly is approaching.
If I don't manage, the competition may; and it is not only the companies and rich research groups in the USA which openly present themselves as working on a thinking machine.
Some of my software-engineering projects have competition in apparently "ordinary" computer-science and IT fields, but as a side effect of the common multi- and inter-domain blindness, many people from these more "engineering"-like areas, whose work is related to Artificial General Intelligence, do not realize that and cannot see the big picture where their developments fit or could fit. Not yet, at least.
And let me finish this "exercise in English and writing" with another little insight:
Everything is about AGI. Every second of experience, every sensory record and specific data and structure from every scientific, engineering, philosophical, artistic, linguistic, social sciences, sports, daily life or whatever other domain.
EVERYTHING - as long as you really observe it, really understand it, and can fit each of these little pieces into the multi-dimensional puzzle of the big multi-dimensional picture. It requires that one can see it all: from the tiniest pixels and little details at the closest possible distance, to the overall look from different, longer distances and from different angles that expose all of the orthogonal dimensions.
...To be continued...
Thanks to V., who notified me of the existence of that - yet another - "ground-breaking" book.
I will explain later what I mean by this note.
Thursday, January 9, 2014
Shortcomings of Like/Dislike Voting in Web 2.0 and Social Media and Various Defects in Systems for Public Rating and Ranking | In Bulgarian - Issues with Like-Dislike Voting Ranking Systems - a translation of the original work in English
Недостатъци на гласуването с харесвам/нехаресвам в Уеб 2.0 и социалните медии и различни дефекти в системите за обществено оценяване и класиране (pdf)
-- Confused and foggy design and measures - it copies the brain's mechanisms for accumulating and erasing details
-- Crowd psychology - corrupted and polluted public preferences and recommendations
On Facebook, YouTube, television, Twitter, etc...
Date of publication of the English original: 23/7/2012 in "Todor Arnaudov's Researchers":
Translation from English: Todor Arnaudov: 25/10/2013 + minor edits
Published in Bulgarian: 9/1/2014 in "Todor Arnaudov's Research"
Keywords: sociology, society, viral spreading, Web 2.0, television, ratings, likes, dislikes, social media, social networks, YouTube, content rating.
Tuesday, January 7, 2014
Why can't the computer understand me?
On our best behaviour - Hector J. Levesque
I'd generalize what the author is trying to say: yes, the machine does need *imagination* (sensory memories and spaces; see many works, for example ) and needs to learn incrementally and play with it. That subsystem is required!* I also agree that many AI researchers, especially in the NLP community, didn't understand or accept that, and this paper suggests that there has been no general progress in its acceptance.
Some of the reasons are simply the academic and personal urge to produce papers and "results" quickly, to demonstrate that you "do something". Papers, papers, papers - who cares about real progress.
I guess some have understood that very well; the "common sense" problem has been cited for decades, it is obvious, yet in NLP they've kept insisting on pushing in a bunch of words with no relation to the real world and then asking "why can't it do proper word-sense disambiguation, given only a corpus?"
Also, IMO it's the researchers in NLP etc. who don't understand language understanding, rather than the poor computers, which, as many in AI like to say, "do what they are told to".
Teaching language to a machine should be done incrementally, with sensory-motor feedback and interaction - like teaching a child - and not in a sensory-less batch mode with a huge corpus and zero interaction, no coordinate space, just a bunch of words, because:
Todor: Natural language is a hierarchical redirection/abstraction/generalization/compression of sequences of multi-modal sensory inputs and motor outputs, and records and predictions for both.
A mere corpus has no imagination, even if it's 1 terawords. A small corpus, built by systematic interaction and mapped to a sensory-motor system, is intelligent and can explore and learn further on its own - as toddlers do.
I also support the criticism of the Turing Test. Right - it is a test of fooling people, not a test of intelligence, and as I myself claimed back in 2001 (see "Човекът и мислещата машина..." below), a true, honest, too smart or too quick AGI will quickly FAIL the test, because its output is too complex, too quick, too deep; she doesn't have some of the required memories (she should lie about her childhood), etc.
Finally, I will keep some of my thoughts to myself...
Credits to Aaron for sharing the link!
* More about that when I complete the first milestones of mine...
See also: (And many others...)
 Faults in Turing Test and Lovelace Test. Introduction of Educational Test (2007)
 What's wrong with Natural Language Processing? Part 2. Static, Specific, High-level, Not-evolving... (2009)
 What's wrong with Natural Language Processing? Part I (shorter) (2009)
 Part 3: The NLP Researchers cannot understand language. Computers could. Speech recognition plateau, or What's wrong with Natural Language Processing? (2010)
 Анализ на смисъла на изречение въз основа на базата знания на действаща мислеща машина. Мисли за смисъла и изкуствената мисъл (2004)
 Човекът и мислещата машина. Анализ на възможността (...), 2001 г.
 Embodiment is just coordinate spaces, interactivity and modalities - not a mystery (2011)
Monday, December 16, 2013
Saturday, December 14, 2013
More info: http://twenkid.com/software/toshko2/
Tuesday, December 10, 2013
Entropica's Equation of Intelligence - a Discussion and Criticism | "CaaS" Intelligent Operating Systems | Intelligent Operating Systems and Entropica - comments
Entropica - F = T delta S ... [Bulgarian version below]
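For reference, the equation behind Entropica - as I recall it from Wissner-Gross and Freer's "Causal Entropic Forces" paper; check the original for the exact notation - can be written as:

```latex
% Causal entropic force: the system is pushed toward macrostates that
% maximize the entropy S_c of the distribution of its possible future
% paths over a time horizon \tau; T_c is a "causal temperature" scaling
% the strength of the force.
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X = X_0}
```

This is the precise form of the "F = T delta S" shorthand above: "more freedom of action" is more entropy over reachable futures.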
"higher freedom of action"
A publication on that topic appeared earlier this year. Now that I have thought about it again (thanks to S. for sharing the link to the talk), at first glance it sounds related to (connected with, a side effect of) generalizations such as:
Mind [general intelligence, versatile intelligence] aims at predicting the future with an ever higher resolution of perception and causality/control, over an ever larger span, covering all possible dimensions (possibilities for expansion and increase of resolution, such as time and space; also subdividing into subspaces, new dimensions).
[Find my "Teenage Theory of Universe and Mind" first published in the early 2000s. See also: http://artificial-mind.blogspot.com/2014/03/whats-going-on-everything-predictions.html ]
Of course, the resolution and the span are trade-offs** given fixed cognitive resources. A higher span/area/time may be predicted (controlled, manipulated) only at a lower absolute resolution than a smaller area. But there is an aim of increasing the resolution compared to previous states (capabilities), and of increasing the cognitive capacity as well.
In one word: progress.
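The trade-off above can be put in toy numerical terms (my own construction, with made-up numbers - not a claim about any real cognitive model): with a fixed budget of "cells" of capacity, covering a larger span necessarily means fewer cells per unit of span.

```python
# Toy model of the precision-vs-scope trade-off under fixed cognitive resources.
N = 1024  # fixed number of "cells" of capacity (hypothetical)

def resolution(span):
    """Cells available per unit of the predicted span."""
    return N / span

# Doubling the predicted span halves the absolute resolution,
# while the product span * resolution (the capacity) stays constant.
print(resolution(4), resolution(8))  # 256.0 128.0
```

Progress, in these terms, is growing N itself - not just sliding along the curve.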
Others from my school of thought have similar assumptions, in one way or another.
Prediction at longer ranges is achieved by generalization, which is compression - a reduction of the computational/memory requirements (in some regard, that's also a trade-off).
Edit: 30/12/2013: "Control" (causality, government) in my terms is being able to predict what is going to happen, both passively and actively (causing the future; being aware of what you are going to do). The increase in resolution and span also allows the achievement of a higher freedom of action.
It's also achieved by search and exploration, sometimes without compression. For example, some technologies are neither more general nor more sophisticated; they have to be found by doing research: systematic or random exploration of different possibilities, in order to bring a particular one into consideration and check/verify it. The research can be empirical, theoretical, or both. The limited capacity of the focus/working memory/processing (in any sense) requires that a "spotlight" scans the environment - any space, with any kind of coordinates.
Some problems can be solved only empirically, by an extensive sequence of probes, because:
- the internal design is hidden
- the observable behavior doesn't tell enough
- the observable behavior tells too much and the observer can't digest it
- it's a "piecewise" defined function, discontinuous and too complex etc.
- some emergent models require simulation
Sometimes the spotlight may fail to find something that is otherwise adjacent.
More advanced understanding and technologies open new horizons for exploration - horizons which couldn't be explored before the preceding technological advances. The advances help in several ways:
- "hiding the complexity" accumulated so far, that is,
- reducing the number of possibilities for action - the number of cases of what to do - by ignoring many possibilities as useless (rules about what not to do; what to research, what to skip, etc.: dead-end technologies, inefficient directions; once the best practices and the most optimal approaches with given tools are found, they are not searched for in the free space but taken as trajectories);
- letting the technology (the lower levels of processing) do the job automatically, while cognition works in a different domain, using different concepts;
- allowing the focus to switch to something else.
Examples:
- 4K or 8K video cameras and displays; higher-resolution clocks; nano- or femtosecond photography and video
- Miniaturization in electronics; nanotechnologies; operation on single atoms
- Telescopes seeing further away; detectors of single elementary particles
I also agree that intelligence is trivial - see my discussions on "multi-domain blindness" here and on the AGIRI list. Elementary activities and skills are labeled "hard" or "difficult" by those who don't understand them; most people have an intrinsic cognitive inability to learn, and thus understand, many otherwise elementary cognitive skills. In addition, the general limitations of the human speed of data output make the expression of the knowledge (and turning it into a thinking machine) slow. There must be an efficient boot process to allow the machine to speed up the process (something I've been working on, but that's another topic).
However, the increase of possibilities ("the freedom of future action") without additional constraints sounds overfitted to me.
I think it's rather an increase of the freedom of actions from a particular class, given particular constraints - not of all possible actions. Most actions are useless, and some are just the reverse of the previous action. It's true that the system should try to keep going (not stall) if it's supposed to progress... (See also a recent comment of mine on the AGIRI list.)
The decrease of the possibilities of action is related to the danger of reaching a [non-desirable] static stability, where the system would "stall" and get blocked.
That reminds me of the dialectical materialists, their "eternal motion" (dynamics) and their view of everything as "a form of motion". Recently somebody sent me, out of any context, a quote from Engels, roughly: "the presence of contradiction is a sign of truth, and the lack of contradiction is a sign of confusion". In my opinion, what Engels and the dialectical materialists meant could be quite different from the superficial literal interpretation.
A "truth" to them is that "everything is in motion"; contradiction is a sign not of "truth" but of the presence of a potential for change. If there are no contradictions and the thinker doesn't search for any, then there won't be a potential for change - "no freedom of actions", the system has stalled. Of course, if there is no "motion" (dynamics), the state of affairs won't change.
The happening of an event is a "change"; therefore, if there is no "motion" (dynamics, "freedom of actions"), there will be no change and no progress. If there is motion but no "freedom", i.e. the possibilities are considered "too limited", then there will be a repetition of what is assumed as "past", "already given" - that is "no freedom".
However, when intelligence reaches states which work "right" - which support a given prediction and causality control at a given level of abstraction, resolution, field or whatever - these states do freeze, as long as they still work (better than the other actions possible for the system)*.
[Edit+: 11/12/2013, 1:26 In fact, many intellectual goals require finding one single right solution and eliminating the freedom of possibilities down to only one, given particular resolution limitations - such as a given correct set of physical or chemical laws for the available observations. Further discoveries improve the resolution and the span of application, but at the lower resolution and in the old settings the old theories are still right; otherwise they wouldn't have become valid in their time.
The standard example: Newtonian mechanics is precise for "slow" speeds and low masses, and time is practically absolute - it appears so when the resolution is low enough compared to the speed. People believed it was absolute, because the observations suggested so.] Also, one part of human and animal behavior which impacts our intelligence is the part that reaches for what I call "physical rewards".
The goals of the "physical" - or maybe a better term is "lower emotional" - reward system are fixed; they are given "pleasures", and a system going after such rewards doesn't aim at progress. It finds a place of pleasure, it stays there, and the static stability in this domain is fine for it - it's even desirable.
A system driven by such rewards goes out to explore the world only in order to find coordinates with static stability: from plenty of food, for an animal, to a secure job, for the general population. A "secure job" can be regarded as a higher freedom of action (steady income; you can do things using social resources), however it is a lower freedom of other actions: conformism, being obedient from such-and-such hour in the morning to such-and-such hour every single day, being fixed at a particular position, etc.
Also, that "freedom" of action, if the actions are generalized, is not greater: what most people do with their money is buy things. If an ordinary man were given 1 million dollars, he wouldn't become smarter, start playing the piano, directing movies or designing new computer systems (first studying engineering, mathematics...).
He would just go and buy an expensive car, a new house; he may also go to the Bahamas. Cognitively, it's not more freedom - unless the variety of the input (the views of the Bahamas; more people getting interested in his person, etc.), the length of the trajectory that he can cross (he can afford a lot of travelling), the amount of goods that he can buy (and the people he can cause to serve him), etc. are considered as the amount of "freedom of action". It is an increase of the freedom of action, but not an increase of the cognitive freedom of action - it's not cognitive progress.
Dopamine levels and novelty seeking
This is related to the so-called reward system in the brain, and to the theoretical "lower emotional/physical" reward system in my theory. The release of dopamine conditions - associates - certain states of mind at the moment of release with "good", "pleasure", "desirable", and the agent starts aiming at those states.
Examples: playing computer games, games of chance, (part of) sexual activities. Dopamine also promotes the so-called novelty seeking and getting quickly bored, and at clinically significant levels it reaches manic disorder. Excessive novelty seeking is also related to Attention Deficit Disorder (ADD), and the aim of "increasing the amount of freedom of action", if taken alone as the only goal, ruins the lives of those who suffer from this condition.
In order to have steady cognitive progress, there must be generalization and stabilization at certain points, which decrease the freedom of actions and simplify life. In more concrete terms from Entropica: in the example clips with the balls and the balance keeping, what is maximized is the ability of the actuators to impact the target entities in a predictable (intentional, rational) way.
If the object falls down and the agent is not capable of picking it up, then in this particular setting it will reach a static state - the poor agent won't have anything more to do. Also, if it starts from a state where the object is on the balancing bar, and all that the agent can do is "balance" - move right and left, etc. - then the freedom here is the amount of force that it applies, determining whether the ball falls or stays.
If the agent is able to do many iterations in a genetic-algorithm manner, it will discover that if it doesn't balance right, the ball will fall down, stop, and it won't "feel" it anymore, so it will have nothing more to do. If its aim is to do something, and it has to have "a ball on the bar" in order to do it, then the next time the agent will try to keep the ball up, in order not to "lose its job" and get "bored".
However, if the agent is capable of picking the ball up, it could throw it up and start balancing again (more freedoms). But if it could do this, throwing the ball around (like in the game of Arkanoid, for example) looks like even more "freedom of action" - the space of the coordinates of the ball's trajectory would be bigger, and it wouldn't matter which particular actions the balancing agent has chosen; it would be enough to kick the ball around, and that would rather turn into "random", "lacking order", "lacking rationality"... (While the "rationality" would be: kick the ball around, that's fun - and it is rational, if that's what the agent wants to accomplish.)
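The intuition in the balancing example can be sketched numerically. The following toy (my own construction, not Entropica's code; the bar positions and horizon are made up) measures "freedom of action" as the number of distinct states reachable within a horizon - a fallen ball is a terminal state with almost no futures left:

```python
# State: ball position 0..4 on a bar; None means the ball has fallen (terminal).
def next_states(state):
    if state is None:                  # ball fell: nothing left to do
        return {None}
    moves = {state - 1, state, state + 1}               # nudge left, hold, nudge right
    return {s if 0 <= s <= 4 else None for s in moves}  # off the bar -> fallen

def reachable(state, horizon):
    """All distinct states reachable within `horizon` steps."""
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {n for s in frontier for n in next_states(s)}
        seen |= frontier
    return seen

# Keeping the ball near the middle preserves far more possible futures
# than letting it fall.
print(len(reachable(2, 3)), len(reachable(None, 3)))  # prints: 6 1
```

Maximizing that count reproduces the "keep the ball up" behavior without any explicit balancing goal - which is exactly the point of the criticism above: the measure only works because the setting is already constrained.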
As is shown in the examples, there are always constraints - it's not just "more freedom": the big ball/disk that pushes a small ball toward the other one, etc., in a limited space. If there are no constraints, randomness is the "biggest freedom". Freedom in intelligence and in creativity requires constraints.
Without constraints, randomness is sufficiently "inventive". At the same time, however, it may seem not really "free", because there is no "intentional" (non-observable, "hidden variables") part. (Well, some particles do seem to possess variables "hidden" from the observers, i.e. they are unpredictable from the possible observations alone - regarding quantum mechanics.) At the end of the talk the author, Alex Wissner-Gross, cites Feynman regarding the most basic physical laws: objects close to each other repel, and ones far from each other attract. In my opinion the above is also related to the dialectical materialists' view of "motion".
Right - in order to have never-ending progress, there should be some basic forces that cause cyclic or iterative changes and adjustments. If the world were fixed, it wouldn't have evolved. (That's true, but it's sort of obvious.)
Also, as demonstrated above, the "intelligent", evolved parts actually do "freeze" and stay stable - such as living cells or scientific knowledge - as long as they work better and are "stronger", have a bigger impact, than the more "free" versions.
* See Vladimir Turchin's work on the so-called "Metasystem Transition" and Boris Kazachenko's "Meta-evolution".
** Edit: 11-12-2013, 3:49 - In terms of physics, the trade-off "precision vs scope" (see B. Kazachenko) - big span with low detail vs small span with high detail - seems related to the trade-off in quantum uncertainty: http://en.wikipedia.org/wiki/Uncertainty_principle And to all kinds of trade-offs...
Regarding the "CaaS" - Cognition as a Service - http://gigaom.com/2013/12/07/why-cognition-as-a-service-is-the-next-operating-system-battlefield/
Yeah... that's one of my research goals too; I published these intentions in early 2008: http://artificial-mind.blogspot.com/2008/04/research-directions-and-goals-feb-2008.html
I'm actively working on my intelligent low-level infrastructure and the research accelerator, along with hundreds of theoretical and practical ideas that I collect and stack for implementation, but there's a lot of "pre-intelligent" hard work. It is partially starting to speed up some daily activities. It's a hard life, doing Herculean feats on your own, but a warrior is a warrior even when alone; it's a time for energetic work. Once I have that infrastructure, it will speed up the rest like a rocket. Details later.
Entropica and intelligent operating systems - comments (shorter than the English version)
The examples from the clip with the balls and the balancing: what is increased (preserved) is the actuator's ability to impact the target elements in a way predicted by it. If the object falls and the agent cannot pick it up, it will then reach a static position and will have nothing left to do.
But if it can pick it up, there is no such limitation: when the ball falls, if the agent has nothing else to do, it will simply lift it and toss it up to balance it again, for example. Increasing the possibilities makes sense under given constraints and with purposeful impact; otherwise the most "possibilities" belong to particles drifting in space. But those have no real "possibility" - no nonlinear internal goal module - and are in fact constrained. http://gigaom.com/2013/12/07/why-cognition-as-a-service-is-the-next-operating-system-battlefield/ Regarding the first link - indeed, see "Intelligent Operating System" below: http://artificial-mind.blogspot.com/2008/04/research-directions-and-goals-feb-2008.html
The article about Entropica came out earlier this year.
Now that I think about it again, it is not that far from my generalizations of the kind:
The mind strives to predict its future with an ever higher resolution of perception and control, over an ever larger span - temporal and spatial.
"Control" (government) in my terms is precisely being able to predict what will happen, passively or actively (by causing it), and the increasing resolution is those "more possibilities".
Also, I have long claimed that universal intelligence is super elementary: people are stupid and limited (multi-domain blind), and all fields of knowledge and the arts are super elementary.
The increase of the possibilities for action, however, is only part of the picture; it sounds to me like an overgeneralization. One could speak of increasing the possibilities for [impact] of a particular class, not of all of them. And the decrease of possibilities is actually tied to the danger of reaching a static equilibrium in which the system "blocks". It reminds me of the dialectical materialists and their "eternal motion" (dynamics). Naturally, if there is no "motion" (dynamics), things will not change; the happening of things is "change", so if there is no "motion" (dynamics, possibilities), there will be no "change" and no development.
//Thanks to S. for the links.
Sunday, November 24, 2013
Visual Illusion Interpretation - ambiguous changes may make recognition of the "true" change impossible from the available data | Visual Motion Confusions | Spatio-Temporal Resolution Trade-Offs
A comment of mine on a visual illusion:
".....Silencing the Change
In the demo below, the colours of individual dots continue to change the entire time, but when the dots move, those changes are much harder to see - the change is silenced. ..."
Well, first of all, I can see the colours changing while the circles rotate... :-P So it doesn't work on me...
But I came up with a general interpretation of motion confusions. In a few words, it's just a processing/ambiguity limitation.
If objects are translating on the retina, there are similar objects at close distances taking each other's places (so it's harder to distinguish them) while changing too much, in different ambiguous ways; besides, it happens in peripheral vision, which is terrible, so the visual processor cannot know for sure what the "real" source of the change was.
The only "real" data are the retinotopic luma/chroma values; however, they could be interpreted as:
- motion of one "item" from the previous "frame"
- some static items that have changed colours
- "blending" items (the way of blending can also be ambiguous, if the change is too fast)
The visual processor can't know for sure what was the "true" change.
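The ambiguity is easy to show concretely. A minimal sketch (my own toy, not taken from the illusion demo; the two-dot "frames" are a made-up example): the very same pair of retinal "frames" is explained equally well by pure motion and by in-place colour change, so no processor could decide from the data alone.

```python
# Two consecutive "frames" of two adjacent dots, as colour values.
frame1 = ("red", "blue")
frame2 = ("blue", "red")

# Interpretation A: the two dots swapped places (pure motion, colours kept).
moved = (frame1[1], frame1[0])

# Interpretation B: each dot stayed put and changed its own colour.
recoloured = ("blue", "red")

# Both interpretations fit the observed data exactly - the "true" change
# is unrecoverable from the frames alone.
print(moved == frame2, recoloured == frame2)  # prints: True True
```

With more dots, speed and peripheral blur, the number of equally valid interpretations only grows.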
In this particular case the speed of rotation is low enough and there aren't clues for "blending" between dots etc., the only "meaningful" transformation that viewers track seems to be rigid rotation.
We see rotation because those are small differences (the motion is slow enough) that can be tracked. As for the colour change - it depends on the colours of the surrounding dots, and...
Spatio-Temporal Resolution Trade-Offs
From particle physics to video: you can't move or scan faster and still see/measure at the same maximum resolution, using the same hardware.
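A back-of-the-envelope sketch of that hardware constraint (the bandwidth figure is made up, not a real camera spec): with a fixed pixel readout rate, raising the frame rate must lower the pixels per frame, and vice versa.

```python
# Fixed readout bandwidth: pixels_per_frame * frames_per_second = BANDWIDTH.
BANDWIDTH = 1_200_000_000  # pixels per second (hypothetical hardware limit)

def pixels_per_frame(fps):
    """Spatial resolution available at a given temporal resolution."""
    return BANDWIDTH // fps

# Doubling the temporal resolution halves the spatial one on the same hardware.
print(pixels_per_frame(30), pixels_per_frame(60))  # prints: 40000000 20000000
```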
In general, many of these illusions are created with patterns which can't be seen in a "normal" environment. The only common "objects" that change colour that fast on their own are computer screens and TV sets, and the images generated on them.
By the way, once I complete one of the milestones of the general infrastructure of my "artificial mind" and imagination, I'd be able to test and create a lot of related stuff in a breeze. The work is going on, but the details are in "low resolution" for now.
See also my interpretation of the Colour Optical Illusions and 3D-reconstruction with Light sources:
PS. Thanks to L.J.
Friday, November 15, 2013
I posted this comment to an article some time ago: "Statement on the Recent TED/Psi/Consciousness Controversy", regarding consciousness and people's confusion about it.
I've been browsing Ben's web site now, and I see that the comment is still "awaiting moderation" or has been censored, so I'm posting it here anyway.
I'm not familiar with their work, but I know that consciousness, free will, thinking machines and theoretical investigations on ethics - these are all topics which are "confusing" for average people.
Particular intelligence is required, and those who lack it are likely to fail to understand the real meaning of the explanations given on these topics by the people who have that talent. I guess the administrators are among the "confused ones", since even some "normal" scientists are deeply confused - because they believe they're clever and are supposed to understand.
Recently I saw a short video by Eric Kandel on consciousness and free will, where he, the expert, tries to explain to and convince the laypeople - who seem to believe that they are the better experts - that "free will" doesn't appear out of nothing, that there are measurable processes which precede the manifestations of awareness and conscious actions, etc.
Well - of course?! What else did people believe? Babies have behavior 2-3 years before what's supposed to be called "awareness", or [the moment when] "consciousness" is "supposed" to manifest itself by using the word "I" or recognizing oneself in the mirror. And those skills don't require "real" qualia - a machine can easily do this.
The people, including the "scientists", seem to have believed that free will was "magic" outside causality and physics, given the surprise in the scientific community after Libet's and others' experiments regarding the delay between the EEG onset of an intentional action and the moment of realizing the desire to act.
See: Libet's experiment
Video of Libet's experiment
A recent discussion on consciousness and mind at Blogtalkradio: "Special guests Ben Goertzel and David Pearce join hosts Phil Bowermaster and Stephen Gordon to examine the question of mind uploading. Will we ever upload?"