Tuesday, July 20, 2021


Intel is approaching AGI - Cognitive AI

A recent talk by Intel reveals that they have a clue about the direction:

SL 5 Gadi Singer - Cognitive AI - Architecting the Next Generation of Machine Intelligence

Some commented that this is a new hype term; however, in the mid-2000s there was already "Cognitive Computing" (the term is used here in the blog, too). Hawkins/Numenta were part of that branch, so it is not so new, and it is related to Cognitive Architectures etc.

Friday, July 16, 2021


ASTRID - a Cognitive Architecture by Mind Construct, based on bootstrapping Common Sense knowledge and knowledge graphs

ASTRID, or "Analysis of Systemic Tagging Results in Intelligent Dynamics", is a Cognitive Architecture project of the Dutch AGI company "Mind Construct". They make bold claims about their prototype. Learn more: https://mindconstruct.com/website/astrid
and the paper: https://mindconstruct.com/files/selflearningsymbolicaicritcalappraisal.pdf

I agree with the claim of the ASTRID authors that "symbolic" systems can learn. The problem with the systems which don't is that their "symbols" are too narrow and lack the appropriate "connectedness", a scanning/exploring perspective, generality, the potential to map to others; in short, they are too "flat" and simple.

P.S. However, see the article "Neural Networks are Also Symbolic - Conceptual Confusions in ANN-Symbolic Terminology": https://artificial-mind.blogspot.com/2019/04/neural-networks-are-also-symbolic.html

I don't like the misnomer "symbolic". More appropriate terms for "symbolic" AI are: logic-based, logical, conceptual (as opposed to [purely] "numerical"); discrete (or more discrete and more precisely addressable than the "subsymbolic"); "structured"; "knowledge-based"; "taxonomy-/ontology-based"; more explicit, etc. (many other attributes are possible).
 
On the other hand, some methods which are called "subsymbolic" are more "numerical", but eventually, or at certain levels, they also become "symbolic" or logical, and usually are mapped to some "symbolic" categorization (the last layer).

The two do and should join: the "logical" and "discrete", at the lowest level and in its representation, is also "numerical" and "analog" to some extent. It is a digital operation in a computer; when operating on visual, auditory and other raw sensory data, it is numeric, etc.


Sunday, July 11, 2021


On Free Will as an Ill-Posed (Improperly Posed) Problem

A comment of mine on a post in the Real AGI group, referring to the article "The clockwork universe: is free will an illusion?".

Todor: Whether or not somebody or something has a "free will" depends on how exactly "free will" is defined - both "free" and "will", and also "you". I think all that discussion and the "catastrophic" consequences are largely Anglo-Saxon-centered views, or ones belonging to "control freaks", and they sound like sophisms. Of course one can never be in full control of "his" choices: you are formed by an endless amount of "external" forces; there are large portions of life where "you" is unconscious; the processes of your organism in complete detail depend on everything - it would be in your "full" control only if you were God. It's hard to define what "you" is and where exactly the boundary lies, and obviously what "you" can realize, enumerate or deliberately control is almost nothing of the bitrate that describes your body and all its processes at maximum resolution. The intentionally muscle-controllable trajectories are mere bits, while a zillion bits describe just one CELL of the body in any encoding. The body is also a tiny bit of the whole Universe, where a principle of superposition is in effect, etc.; everything is computed by interacting with everything else.

IMO that is not supposed to cause an existential catastrophe unless one is prone to it due to "control-freak-ism" or something - nothing follows from the lack of "complete control", it's not the end of the world, unless one believed he was God and now finds that he wasn't.

"They argue that our choices are determined by forces beyond our ultimate control – perhaps even predetermined all the way back to the big bang – and that therefore nobody is ever wholly responsible for their actions."

This is not a new argument.

Ultimate control - the mania of some cultures and tyrants.

However, responsibility as a localisation of causal forces, given a resolution and a method of factorisation, is another question.

An answer from the poster of the link:
A.T.: I don't want to get into the discussion of free will. If you don't think humans have free will, then you won't care that robots with AGI almost certainly don't have free will. Humans may or may not have free will, but robots cannot, as we know the "clockwork" of their operation. Even using a rand() function that uses pseudo-random numbers won't change that fact, even if the seed is altered. That can always be determined so that the deterministic outcome is theoretically known. As I said, I thought it might be a post to pose a question that I have not seen posed this way before. (...)
...


(The following is posted only here)

Todor: You only believe that you know the "clockwork" of their operation (of a machine, an AI etc.). In fact you may say the same for anything, at a certain resolution. If randomness is the "free will" feature or component, then electrons and all physical particles in the quantum model have "free will", which is fine; however, then everything has that "free will", and defined like that the concept is meaningless, because it applies to everything and clarifies nothing.

The "theoretically known" part is true for everything as well: if you were God, if you could read the content of everything fast enough without changing it etc., then you would know the "clockwork" for everything. "In theory" you could be, and as of computers and robots: they are part of the whole Universe and in interaction with them, if they interact and their behavior, operation, knowledge etc. are impacted by entities or parts with "free will" within Universe, then they also would have that property and their actual "body" extends to everything that impacts them.

Therefore one must first define exactly what "free will" is and what it is not. Whether anything has or doesn't have it depends on the exact definition. Also, humans or whatever else can have "free will" even if they are considered "deterministic" or predictable; will and free will, as I see them, are not about being non-deterministic, and "free" is not about being random (except in these confused beliefs).

For example, see the definition of Hegel and of Engels/Marx, thus of the Dialectical Materialists: they are determinists; their definition of free will is to act in accordance with the necessity, i.e. to understand what to do, to be conscious of the possibilities, of the desired outcome and of "the best" way to achieve it, etc. A lack of free will is when the agent "doesn't understand" (but that too must be precisely defined, otherwise it's generalities and sophisms). Thus if your choice is random and you can't explain it, you are also not free, but dependent on the "will" of the randomness or of "the Fortune" (instead of "your own").

Having or not having anything doesn't imply anything on its own and has no intrinsic ethical consequences by itself; the ethical consequences are of political and ideological origin. "Lack of God" doesn't mean that "everything is permitted" (the Dostoyevski sophism); likewise, if you consider that an ant, a PC or a tree "does not have free will", it does not follow from that consideration alone that you have or don't have to do anything with it.

Similarly, the fact that other humans are supposed to have "a soul" or "free will", even if the agent "believes that", couldn't stop a murderer, a psychopath or a criminal, a warrior/general or a plain soldier, or any "evil one" etc. Respectively, if you like/love animals or even "inanimate objects" - plants; weapons, cars, computers, toys, books, precious memories - you may handle them with care and love, because that's what *you* feel; it's subjective.

The randomness (supposedly disconnected from everything) invoked for "free will" is actually dependent on the Universe as a whole, which alone "predicts" the exact values - so that "freedom" is the most dependent of all (on the whole).

"Freedom" in general as some kind of "independence" (or dependence) is within the decided/given framework and resolution.

Wednesday, July 7, 2021


Todor's Comments on the Article "AI Is Harder Than We Think: 4 Key Fallacies in AI Research" - no, AGI is actually simpler than it seems and than you think

A comment of mine on the article "AI Is Harder Than We Think: 4 Key Fallacies in AI Research": https://singularityhub.com/2021/05/06/to-advance-ai-we-need-to-better-understand-human-intelligence-and-address-these-4-fallacies/

Posted in the Real AGI FB group.

The suggested fallacies are:

1. Progress in narrow intelligence is progress towards general intelligence
2. What’s easy for humans should be easy for machines
3. Human language can describe machine intelligence
4. Intelligence is all in our heads

(See also the article)

The title reminded me of a conclusion of the "AGI Digest" letters series, where, after giving the arguments, I noted that "AGI is way simpler than it seems". See the message from 27.4.2012 in "General algorithms or General Programs"; find the link to the paper here:

https://artificial-mind.blogspot.com/2017/12/capsnet-capsules-and-CogAlg-3D-reconstruction.html   https://artificial-mind.blogspot.com/2021/01/capsnet-we-can-do-it-with-3d-point-clouds.html.html

 

Summary

In brief: I claim it is the opposite: AI is easier than it seems (if one doesn't understand it and confuses herself, it's hard, right). Embodiment is well known, and it lies in the reference frames and the exploration-causation, in the stability of the coordinates, shapes and actions, in repetitiveness etc., not in the specific "material" substrate of the body. The "easy for humans..." point is well known and banal, and the point against machines is funny: in fact humans also can't "apply their knowledge in new settings without training" (see the challenges in the article). IMO progress in "narrow" AI actually is progress towards AGI, and it was so even in the 2000s, as current "narrow AI" ML methods are pretty general and multi-modal, and they provide instruments for processes which were attached to "AGI" at least since the early 2000s, such as general prediction, creation and synthesis. Current "narrow AI" does Analysis and Synthesis, but not generally enough, within a big enough, "integrated enough", "engine-like-running" framework which connects all the branches, modalities and knowledge together; however, the branches and "strings" are getting closer. Practically, one can use as many "narrow" NNs as needed, with whatever glue code and other logic, in one system.

Discussion

1. "Progress in narrow intelligence is progress towards general intelligence" [are not progress towards GI] 

— IMO it actually is progress, because the methods of the "narrow" become more and more general, both in what they solve and in the ambitions of the authors of these solutions. After a problem or a domain is considered "solved" to one degree or another, the intelligent beings direct themselves to another one, or expand the range, or try to generalise and combine their solutions to the several problems so far, etc.

One of the introductory lectures in the first university course in AGI back in April 2010, which I authored, was called "Survey of the Classical and Narrow AI: Why it is Limited and Why it Failed [to achieve AGI]?": http://research.twenkid.com/agi/2010/Narrow_AI_Review_Why_Failed_MTR.pdf

While wrapping up the faults as I saw them, one of the final slides (and others in the lecture), matching one of the main messages of the course - hierarchical prediction and generalisation - suggested that the methods of the advanced "narrow AI" actually converge to the ideas and methods of AGI. Even image and video compression, for example, share the core ideas of AGI as a general sensory-motor prediction engine, so MPEG, MPEG-2, H.264 - these algorithms in fact are "AI". "Motion compensation", the most basic comparison, is related to some of the primary processing in the AGI algorithm CogAlg; all "edge detections" etc. are something where any algorithm searching for shapes would start or arrive in one way or another. Compression is finding matches ("patterns"), which is also "optimisation" - reducing space, etc.
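
To make the compression-as-prediction point concrete, here is a minimal sketch of exhaustive block-matching motion compensation in Python with NumPy (all names and parameters are illustrative, not taken from any codec): the encoder predicts a block of the current frame from the previous frame and keeps only a motion vector and the residual.

import numpy as np

def motion_compensate(prev_frame, cur_block, top, left, search=4):
    """Exhaustive block matching: find the patch of prev_frame, within a
    +/- search window around (top, left), that best predicts cur_block."""
    h, w = cur_block.shape
    best_err, best_vec = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > prev_frame.shape[0] or x + w > prev_frame.shape[1]:
                continue  # candidate patch falls outside the frame
            err = np.sum((prev_frame[y:y+h, x:x+w].astype(int) - cur_block.astype(int)) ** 2)
            if err < best_err:
                best_err, best_vec = err, (dy, dx)
    dy, dx = best_vec
    prediction = prev_frame[top+dy : top+dy+h, left+dx : left+dx+w]
    residual = cur_block.astype(int) - prediction.astype(int)
    return best_vec, residual  # the codec stores these instead of the raw block

# A bright block that moved 2 px to the right between two frames:
prev = np.zeros((32, 32), dtype=np.uint8); prev[8:16, 8:16] = 200
cur  = np.zeros((32, 32), dtype=np.uint8); cur[8:16, 10:18] = 200
vec, res = motion_compensate(prev, cur[8:16, 10:18], 8, 10)
print(vec, np.abs(res).sum())  # (0, -2) and residual 0: perfect prediction

The better the match, the smaller (and cheaper to encode) the residual: the codec earns its compression by predicting the new frame from the past one, which is the same match-and-predict loop a general sensory prediction engine runs.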

Two of the concluding slides (translation follows): 

"The winner of DARPA Urban Challenge in 2007 uses a hierarchical control system with multi-layer planing of the motions, a behavior generator, sensory perceptions, modeling of the world and mechatronics".

Points of a short summary, circa early 2010:

What's wrong with NLP? (from articles from 2009) [and "weak" AI]:

* The systems are static, require a lot of manual work and intervention and do not scale 

* Specialized "tricks" instead of universal (general-purpose) systems

* Work at a very high symbolic level and lack grounding in primary perceptions and interactions with the environment

* The neural networks lack a holistic architecture, do not self-organize, and are chaotic and heavy. Overall: a good "physics" is lacking, one that would allow the creation of an "engine" which can be turned on and then start working on its own. The systems are instruments, not engines.

Note, 7.2021: the point regarding the NNs, however, can be adjusted:

Many NNs can be stacked or connected with anything else, in any kind of network or a more complex system - we are not limited to using a single net, nor barred from using glue code or whatever else (see the sketch below). The NNs and transformers are actually "general" in what they do and are respectively applied to all sensory modalities, and also multi-modally.

Frameworks that are complete or powerful enough for a complex simulated/real-world sensory-motor multi-modal setting are still lacking, and these algorithms may not be the fastest at finding the correlations - they rely on unnecessary brute-force search, which can (and should) be reduced by more clever algorithms. However, these models do find general correlations in the input.
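
As a sketch of the "glue code" point above - a hypothetical toy, assuming PyTorch; the two sub-networks stand in for any independently built "narrow" models:

import torch
import torch.nn as nn

class VisionNet(nn.Module):
    """Stand-in for any pretrained 'narrow' vision model."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
    def forward(self, x):
        return self.encode(x)

class AudioNet(nn.Module):
    """Stand-in for any 'narrow' audio model."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(100, 64), nn.ReLU())
    def forward(self, x):
        return self.encode(x)

class GluedSystem(nn.Module):
    """Plain 'glue code': combine the two embeddings, mix in any
    hand-written rule we like, then classify."""
    def __init__(self):
        super().__init__()
        self.vision, self.audio = VisionNet(), AudioNet()
        self.head = nn.Linear(64 + 64, 10)
    def forward(self, image, sound):
        v, a = self.vision(image), self.audio(sound)
        if sound.abs().max() < 1e-6:  # an arbitrary non-NN rule: ignore silence
            a = torch.zeros_like(a)
        return self.head(torch.cat([v, a], dim=-1))

system = GluedSystem()
out = system(torch.randn(1, 1, 28, 28), torch.randn(1, 100))
print(out.shape)  # torch.Size([1, 10])

Nothing in the tooling forces a single monolithic network; arbitrary hand-written logic can sit between and around the learned parts.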

 2. "What’s easy for humans should be easy for machines"

— Isn't that banal? It is also vague (easy/hard). Actually, some of the skills of three- or four-year-olds are achieved by three or four years of training in supervised settings: humans do not learn "unsupervised", except basic vision/low-level physical and sensory stuff (language learning is supervised as well; reading and writing even more so).

Test people who didn't attend school at all; check how good they are in logic, for example, in abstract thinking, in finding the essential features of objects or concepts, etc. Even people who have university degrees can be bad at that, especially BAs.

There are no machine learning models from the current "narrow AI" technology which are trained for that long yet - a year or years with current compute. We don't know what they could achieve, even with today's resources.

On learning and generalising: "If, for example, they touch a pot on the stove and burn a finger, they’ll understand that the burn was caused by the pot being hot, not by it being round or silver. To humans this is basic common sense, but algorithms have a hard time making causal inferences, especially without a large dataset or in a different context than the one they were trained in."  

That's right about training if you use a very dumb RL algorithm (like the ones which played for 99999 hours in order to learn the basic games on the Atari 2600); however, overall the claimed "hardness" of a machine learning this is deeply wrong and unaware of how simple the actual solution could be:

"An algorithm" would have sensors for temperature which will detect "pain", caused be excessive heat/temperature, which happened at the moment when the coordinates of the sensor (the finger) matched coordinates within the plate of the stove. Also, it could have infrared sensors or detect the increment of the temperature before touching and detecting that there is a gradient of the measurement. The images of the stove when the finger was away didn't cause pain, only the touch. This is not hard "for an algorithm", it's trivial.

4. Intelligence is all in our heads

— Wasn't that clear at least 20 years ago? (For me it was always clear.) However, take into account that the embodiment can be "simulated", "virtual". The key in embodiment is the sensory matrices, the coordinates ("frames of reference" in Hawkins' terms) and the capability to systematically explore: to cause and perceive/study the world. The specific expressions of the sensory matrices and coordinates could vary.

3. Human language can describe machine intelligence
"Even “learning” is a misnomer, Mitchell says, because if a machine truly “learned” a new skill, it would be able to apply that skill in different settings" 

+ 1. "a non-language-related skill with no training would signal general intelligence"

— I challenge these "intellectuals": can you make a proper one-handed backhand with a tennis racket with "no training"? (And how long will you train before delivering a proper over-the-head serve with good speed, or a backhand while facing back to the net, or a tweener - a between-the-legs shot - especially while running back to the baseline?)

You're not supposed to need explicit training, right? You did move your hands, arms, wrists, elbows, legs, feet... You've watched tennis at least once on the TV sports news, therefore you should be able to just go and play against Federer and be on par with him, right? If you can't do that even against a 10-year-old player, that means "you can't apply your knowledge in new settings"...

Can you even juggle 3 balls - by "applying your knowledge of physics from school and your sense of rhythm from listening to music and dance" - even the simplest trick?

Can you play the simplest songs on a piano - by applying your understanding of space and of the motion of the hand, and finding the correlations between the pressing of the keys and the sound of each of them? Can you do it especially if you lack musical talent?

Well, "therefore you lack GI", given your own definitons... I'm sorry about that... 

(In fact the above is true for many humans; humans really lack "general intelligence" by some of the high-bar definitions which a machine is expected to meet before being "recognized".)

...

The slide in Bulgarian (4.2010), translated:

* What is wrong with natural language processing [and weak AI]?

● The systems are static, require a lot of manual intervention, and do not develop or scale.
● Specialized "tricks", not universal systems.
● They work at a high symbolic level and have no grounding in primary perceptions and interactions with the environment.
● The neural networks have no holistic architecture, do not self-organize, and are chaotic and heavy. A good "physics" is missing, one that would allow the creation of an "engine" which can be switched on and start working by itself. Instruments, not engines.
