Monday, November 27, 2023


On Understanding and Calculation, Quantitative and Qualitative Reasoning: Where Calculation Begins, Comprehension Ceases: Part II

 Comment on:

"Dr. Jeffrey Funk, Technology Consultant

Nobel Laureate Richard Feynman understood the difference between knowing and understanding, which is explained in this five-minute video. Knowing is being able to do calculations that agree with experiments. Understanding is being able to explain the underlying phenomena.


As Feynman describes, the Mayans knew positions of the moon and could predict eclipses, but they didn’t understand the reasons for their correct calculations. That understanding did not come until Newton and others explained gravity and its impact on rotating bodies. And the lack of understanding allowed the Mayans to falsely attribute things to gods, and not to physical laws.


Many data scientists and other proponents of AI point to knowing, being able to do calculations. Their #algorithms can predict the prices of homes and infections in hospitals, and match applicants with jobs or a camera’s input with a standard image. But they do not understand the why of their calculations or predictions. And when the calculations and predictions are wrong, they don’t know why. And unless they also have an understanding, which some call explainable #AI, the systems may always perform badly. Achieving high precision predictions or matching two things will always require understanding, not just knowing."



Todor Arnaudov - Tosh/Twenkid:

This is very similar to Arthur Schopenhauer, early 19th century: "When calculation begins, comprehension ceases." (from "On the Fourfold Root of the Principle of Sufficient Reason", OTFFR), his PhD thesis from more than 200 years ago. A more modern term is "grounding" or "sensory-motor grounding".

"To calculate therefore, is not to understand, and, in itself, calculation conveys no comprehension of things. Calculation deals exclusively with abstract conceptions of magnitudes, whose mutual relations it determines. By it we never attain the slightest comprehension of a physical process, for this requires intuitive comprehension of space-relations, by means of which causes take effect."

See more in Part I from 2014: "Where calculations begin, comprehension ceases" - on understanding and superintelligent AGI: Todor's comment in "Strong Artificial Intelligence" at Google+.

Chomsky's views on DL and LLMs are similar as well; see the MLST YouTube channel's episode "Ghost in the Machine" (2023), etc., or even the "classical debate" with Peter Norvig from 2011: "Norvig vs Chomsky", "The Norvig - Chomsky debate":

However, "explanation" and "understanding" are not explained in Feynman's wordplay either. What is it to explain, and to explain to whom? If the receptor is not "appropriate" for your explanations, you will always fail. (See example videos on YouTube of explaining a concept at different levels: to a child, a teenager, a college student, a graduate..., such as "Theoretical Physicist Brian Greene Explains Time in 5 Levels of Difficulty | WIRED".)

The calculation models also "explain", and their results eventually map to something other than the plain numbers or "abstract quantities" (unless the evaluator only reads the numbers). But either the models are too "shallow" (for a desired measurement), or the "reader" doesn't understand them: she can't make some "other" predictions or draw new conclusions, all that she expected she should be able to do "if she did understand", "if it was well explained"; she can't connect the evidence on her own, as she expected she should be able to. Even if you "just" calculate the trajectories of the planets without knowing "the reason", the calculation eventually maps to objects known as "planets", and there are additional conclusions, such as predicting that particular stellar bodies will not collide, etc.

A DL model may "say": "this is so because: see this chain of vectors, 0.34, 0.23, 0.55, 0.343 ... at level 0.34, 0.344, 0.33 ... it is bigger than ..., deconvolve here, convolve here with this data point...". Such models lose the records, the path of the "thought" of learning; IMO a better cognitive system remembers more details about the path and the mapping and should/could be able to find it.
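The contrast can be sketched in a few lines of Python. This is a made-up toy, not any real DL system: the weights, features and feature names are invented for illustration. The point is that a model's "raw" explanation is just a chain of abstract quantities; the same numbers only start to "explain" once each one is mapped back to something the reader already knows.

```python
# Toy illustration (hypothetical values): a linear model whose "explanation"
# is only a chain of numbers -- per-feature contributions to the score.

weights = [0.34, 0.23, 0.55]        # invented "learned" coefficients
features = [2.0, 1.0, 3.0]          # one invented input example
names = ["rooms", "age", "area"]    # the grounding: what each number measures

# Each contribution is weight * feature value; the score is their sum.
contributions = [w * x for w, x in zip(weights, features)]
score = sum(contributions)

# The "raw" explanation: abstract magnitudes with no mapping to the world.
print("raw:", [round(c, 2) for c in contributions], "->", round(score, 2))

# The same numbers attached to named features -- one small step toward
# a grounded explanation, readable by a human evaluator.
for name, c in sorted(zip(names, contributions), key=lambda p: -p[1]):
    print(f"{name}: contributes {c:+.2f}")
```

The "raw" line is what the model can always produce; the named, sorted listing is only possible because the mapping from each coordinate to a real-world quantity was preserved alongside the numbers.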

The distinction is not so sharp, though, because the "calculation" (quantitative-reasoning) lovers may object that the other side's "explanations", if they lack a calculation/math part, are "just qualitative", and qualitative-only theories, positions, statements etc. are supposed to have lower "resolution"/explicitness and usually are, or may be, not practical, or cannot be applied in specific scientific methods. They are merely "methodological", but not "methods", as explained in a Bulgarian philosophical article by Sava Petrov*.

There should be *both* qualitative and quantitative models and representations. Reasoning is also a kind of "calculation", but the mappings to the "real" data are supposed to be preserved somewhere in order to connect to the "real" world (and to "explain" to the "user" in sensory-input-compatible modalities).
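A minimal sketch of that idea, with invented data (the masses happen to be roughly Earth- and Moon-like, but the structure, field names and threshold are all illustrative assumptions): the quantitative part is a plain calculation over magnitudes, while the qualitative conclusion is itself computed, yet it records which observations it is grounded in, so it can be traced back to the "real" data.

```python
# Illustrative only: qualitative and quantitative representations side by
# side, with the grounding links preserved.

observations = {
    "body_a": {"mass_kg": 5.97e24},   # invented observation records
    "body_b": {"mass_kg": 7.35e22},
}

# Quantitative: a calculation over the raw magnitudes.
ratio = observations["body_a"]["mass_kg"] / observations["body_b"]["mass_kg"]

# Qualitative: a symbolic conclusion, derived by "reasoning as calculation",
# which keeps pointers back to the observations it came from.
conclusion = {
    "statement": "body_a is much more massive than body_b",
    "holds": ratio > 10,  # arbitrary illustrative threshold for "much"
    "grounded_in": ["body_a.mass_kg", "body_b.mass_kg"],
}

print(conclusion["statement"], "->", conclusion["holds"])
```

The `grounded_in` list is the point: if it is preserved, the qualitative statement can always be re-expanded into the quantities (and, in a fuller system, into the sensory records) that justify it.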

As the aforementioned physicist Richard Feynman also explains in a BBC documentary, when he digs into the answer to "Why is ice slippery?", the "Why" questions can go deeper and deeper. There can be different depths and precisions of "understanding" (or mapping to something else). Causality in general can also go wider and wider in time, space and resolution, up to the whole Universe. There is a limit, though; e.g. in Karl Friston's FEP/AIF it's the "Markov blankets", which are taken as "impenetrable". There's a cut, a resolution of causality-control where the limit is set, and a model/expectations/causes which are accepted as "good enough": somebody, the observer-evaluator, decides and accepts that "this is explained" or "understood", while when there is less depth, fewer steps, a smaller range etc., it is considered "not explained". See "Theory of Universe and Mind" by Todor Arnaudov, early works, 2001-2004. One particular work with a lot of topics: "Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence (...)", 2004, translated into English in 2010 in 4 parts. TOUM continues with "Universe and Mind 6" (not published yet).

Compare "Analysis..." and TOUM with the later published theories and ideas, as well as practical work (in RL, and lately Active Inference; see the post about the "Genius" platform from Verses AI), which confirm the claims and reasoning of the former; see:


Saturday, November 25, 2023


Genius by Verses AI - Intelligence as a Service for Multiagent Systems with Free Energy Principle/Active inference framework and the Bulgarian Blueprint announcement

Verses AI, the company that develops multiagent systems inspired by and implementing the Free Energy Principle/Active Inference by Karl Friston, releases a platform called "Genius". FEP/AIF and the related work by colleagues and students of K.F. actually repeat, expand with volume and mathematical notation (and create maps to actual physics, neuroscience and other technical evidence from natural-science, medical and physics data), prove and continue the work on the main principles and ideas of my own Theory of Universe and Mind, with the classical pieces of the body of work published between late 2001 and early 2004. You can see that more clearly in the FEP/AIF literature, K.F.'s participations in podcasts (e.g. MLST) and the Active Inference Institute channel.

The matches start from the core of the principles: minimization of the prediction error at all levels of scale and the ubiquitous multiscale, multilevel nested simulation. In FEP/AIF these are the "Markov blankets"; in TOUM they are called causality-control units, virtual universes, subuniverses, submachines which run in a hierarchical Universal simulator of virtual universes, which is what both the Universe and the Mind are (or, more strictly: could be represented and encoded as). On specific comparisons and matches to other works, see some already published notes:
One article about FEP from when I first heard about it in late 2018: "Ultimate AI, Free Energy Principle and Predictive Coding vs Todor and CogAlg - Discussion in Montreal.AI forum and Artificial Mind"

Look forward to "Universe and Mind 6" and the book about the Prophets of the Thinking Machines, and check the GitHub page for comparisons.

* PS. Funnily, their demo features a drone, a field where I recently worked "professionally" in a start-up, though given the management there I decided it was better to pursue it as a hobby. See the EZ drone brain experiments series; I don't have time to focus on it lately, but it will continue with a ROS line of experiments.

* According to Yahoo Finance etc., they were valued at about $100M (as of about 12.11); there's info about a funding round of $3M CAD.
** A few days later it jumped to $132M, now $139M:
So my theory is on the rise... Go, go, go!

"The Sacred Computer" is lacking that luxury yet, LOL, or any luxury, more than twenty years after I clearly expressed the direction of what to do in order to build generally intelligent machines. Thinking out loud, on second thought though, a "consolation" is that if one aims to be young forever, the passing of the years shouldn't matter much. I guess: so far, so good with the goal and with keeping within the "setpoint" of my "Forever Young" program.

I'm looking for partners as always:

*** There's one American guy called Bryan Johnson*, who has similar goals and calls himself "a professional rejuvenation athlete". There are similarities, but some big differences as well: for instance, I am "natty", and my "blueprint", which I am improving, is very cheap. My diet is diverse and consists of ordinary food from the supermarket and the grocery store, and it is not strictly "clean" or "healthy" (by many standards) or only low-glycemic-index food. He admits he takes about 100 pills a day, including testosterone replacement (technically he's "on juice", "illegal"), besides eating a very special low-glycemic diet and applying many other special procedures etc. I only take cheap magnesium, and sometimes I skip even that (e.g. yesterday).

* Now that I revisit his site, it "welcomes" the visitor with a big ad for his brand of olive oil:
" Extra Virgin Olive Oil is more powerful than resveratrol, NR, cold plunge, sauna and your favorite podcast".  

Well, the "Bulgarian Blueprint" is not published yet and needs to be analyzed; that's a topic for another conversation or videos. Also, it is possible that it may only work for people with similar genetics, just as Johnson's program may work for his fellow Americans or people of similar Anglo-Saxon-like origin.

One of the differences is that he claims he aims at being "18 again", while I guess I've always been much younger than that in many aspects; in some aspects I'm more like in my early teens, and in others, e.g. in endless curiosity, learning and development, I am maybe still 11, or 6, or... See the definition of "Twenkid", which I coined back in 2008, a whopping 15 years ago, and which was a continuation of another term, "Yunak", coined another 7 years earlier:

Stay tuned, join, collaborate, like, subscribe, comment, share and donate! (and laugh, of course)

Ring Dips Progression Workout #3 Towards Muscle Up

Edit: A small addition to the "20 years" paragraph, 26.11.2023

Sunday, November 12, 2023


EZ Drone Experiments #2 - Flying in a warehouse with a depth camera

Gazebo Garden, Ardupilot, Python, Linux (Ubuntu 22.04 in WSL2)


Monday, November 6, 2023


Autonomous Drone Brain Experiments: E.Z. 0.001 for Vsy/"Jack of All Trades" AGI infrastructure

Early experiments by "The Sacred Computer" (Свещеният Сметач): to be continued. What E.Z. stands for will be disclosed later. Vsy "Jack of All Trades" is a project for an AGI infrastructure.

I've been studying the drone and robotics simulation domain actively, mostly during the summer. Currently I'm busier with more fundamental and abstract research*, but I'm continuing the work on drone and robot simulations as a side, practical project, and it is eventually supposed to turn into an ML playground.
Stay tuned for updates; I may report more details during the SIGI virtual conference, which will be either around the end of 2023 or in 2024. I am looking for partners in this "adventure" (as in all the others).
Made with Gazebo, Python, Linux (in Windows WSL2). Join my adventure or invite me for a common project:
