Monday, November 27, 2023


On Understanding and Calculation, Quantitative and Qualitative Reasoning: Where Calculation Begins, Comprehension Ceases: Part II

 Comment on:

"Dr. Jeffrey Funk • 2nd•  Technology Consultant

Nobel Laureate Richard Feynman understood the difference between knowing and understanding, which is explained in this five-minute video. Knowing is being able to do calculations that agree with experiments. Understanding is being able to explain the underlying phenomena.

 

As Feynman describes, the Mayans knew positions of the moon and could predict eclipses, but they didn’t understand the reasons for their correct calculations. That understanding did not come until Newton and others explained gravity and its impact on rotating bodies. And the lack of understanding allowed the Mayans to falsely attribute things to gods, and not to physical laws.

 

Many data scientists and other proponents of AI point to knowing, being able to do calculations. Their #algorithms can predict the prices of homes and infections in hospitals, and match applicants with jobs or a camera’s input with a standard image. But they do not understand the why of their calculations or predictions. And when the calculations and predictions are wrong, they don’t know why. And unless they also have an understanding, which some call explainable #AI, the systems may always perform badly. Achieving high precision predictions or matching two things will always require understanding, not just knowing."

https://www.linkedin.com/posts/dr-jeffrey-funk-a979435_algorithms-ai-technology-activity-7133763690717224960-fMbK?utm_source=share&utm_medium=member_desktop

....

 

Todor Arnaudov - Tosh/Twenkid:

This is very similar to Arthur Schopenhauer, early 19th century: "When calculation begins, comprehension ceases." - from "On the Fourfold Root of the Principle of Sufficient Reason" (OTFFR), his PhD thesis from more than 200 years ago. A more modern term is "grounding" or "sensory-motor grounding".


" To calculate therefore, is not to understand, and,

in itself, calculation conveys no comprehension of things.

Calculation deals exclusively with abstract conceptions of

magnitudes, whose mutual relations it determines. By it

we never attain the slightest comprehension of a physical

process, for this requires intuitive comprehension of

space-relations, by means of which causes take effect."



See more in Part I from 2014: "Where calculations begin, comprehension ceases" - on understanding and superintelligent AGI; Todor's comment in "Strong Artificial Intelligence" at Google+: https://artificial-mind.blogspot.com/2014/08/where-calculations-begin-comprehension.html


Chomsky's views on DL and LLMs are similar as well; see the MLST YouTube channel's episode "Ghost in the Machine" (2023) etc., or even the "classical debate" with Peter Norvig from 2011, "Norvig vs. Chomsky", "The Norvig-Chomsky debate": https://norvig.com/chomsky.html


However "explanation" and "understanding" are not explained in Feynman's wordplay either. What is to explain and explain to whom? If the receptor is not "appropriate" to your explanations, you will always fail. (See example videos on Youtube of "Explaining a concept at different levels: a child, a teenager, a college student, a graduate...", such as: Theoretical Physicist Brian Greene Explains Time in 5 Levels of Difficulty | WIRED: https://www.youtube.com/watch?v=TAhbFRMURtg ... The calculation models also "explain" and they eventually map to something different than the plain numbers or "abstract quantities" (unless the evaluator only reads the numbers), but they are too "shallow" (for a decided measurement) or the "reader" doesn't... understand them, she can't make some "other" predictions or make some new conclusions - all that she has expected that she should be able to do "if she did understood", "if it was well explained" - and she can't connect the evidence on her own - as she expected she should be able to do. Even if you "just" calculate the trajectories of the planets, without knowing "the reason", it eventually maps to objects known as "planets" and there are some additional conclusions such as predicting that particular stellar bodies will not collide etc. DL models may "say": "this is so, because: see this chain of vectors, 0.34, 0.23, 0.55, 0.343 ... at level 0.34, 0.344, 0.33 ... it is bigger than ... etc. deconvolve here, convolve here with this data point (they lose the records/path of thought of learning, IMO a better cognitive system remembers more details about the path and the mapping and should/could be able to find it).


The distinction is not so sharp, though, because the "calculation" (quantitative-reasoning) lovers may object that the other camp's "explanations", if they lack the calculation/math part, are "just qualitative", and qualitative-only theories or positions/statements are supposed to have lower "resolution"/explicitness and usually are, or may be, impractical, or cannot be applied in specific scientific methods. They are just "methodological", but not "methods", as explained in a Bulgarian philosophical article by Sava Petrov*.


There should be *both* qualitative and quantitative models and representations; reasoning is also a kind of "calculation", but the mappings to the "real" data are supposed to be preserved somewhere, in order to connect to the "real" world (and to "explain" to the "user" in sensory-input-compatible modalities). A toy sketch of such preserved mappings follows.
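Here is a toy illustration of "preserving the mapping" (a hypothetical sketch, not from any of the cited works): a purely quantitative stepping of state vectors that keeps a symbolic binding to named objects, so the numeric result supports a qualitative conclusion about the world.

```python
# A toy sketch (hypothetical): quantitative state vectors kept bound to
# qualitative, symbolic labels, so the result of a "calculation" maps back
# to entities in the world and supports further conclusions.

from dataclasses import dataclass

@dataclass
class Body:
    name: str        # qualitative: which world entity this is
    position: tuple  # quantitative: (x, y) in arbitrary units
    velocity: tuple

def step(body: Body, dt: float) -> Body:
    """Advance the numeric state; the symbolic label travels with it."""
    x, y = body.position
    vx, vy = body.velocity
    return Body(body.name, (x + vx * dt, y + vy * dt), body.velocity)

def will_collide(a: Body, b: Body, dt: float, steps: int, radius: float) -> bool:
    """A qualitative conclusion drawn from purely quantitative stepping."""
    for _ in range(steps):
        a, b = step(a, dt), step(b, dt)
        dx = a.position[0] - b.position[0]
        dy = a.position[1] - b.position[1]
        if (dx * dx + dy * dy) ** 0.5 < radius:
            return True
    return False

mars = Body("Mars", (0.0, 0.0), (1.0, 0.0))
venus = Body("Venus", (10.0, 5.0), (1.0, 0.0))
# Because the mapping name -> numbers is preserved, the answer is about
# *planets*, not about anonymous vectors:
print("Mars and Venus collide:", will_collide(mars, venus, 0.1, 100, 1.0))
```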


As the aforementioned physicist Richard Feynman also explains in a BBC documentary, when he digs into the answer to "Why is ice slippery?", the "Why" questions can go deeper and deeper. There can be different depths and precisions of "understanding" (or of mapping to something else). Also, causality in general can go wider and wider in time, space and resolution, up to the whole Universe. There is a limit, though: e.g. in Karl Friston's FEP/AIF it is the "Markov blankets", which are taken as "impenetrable". There is a cut, a resolution of causality-control where the limit is set, and a model/expectations/causes that are accepted as "good enough"; somebody, the observer-evaluator, decides and accepts that "this is explained" or "understood", whereas when there is less depth, fewer steps, a smaller range etc., it is considered "not explained". (A minimal sketch of a Markov blanket is given after the links below.)

See "Theory of Universe and Mind" by Todor Arnaudov, early works 2001-2004. One particular work with a lot of topics: "Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence (...)", 2004. Translated into English in 2010 in 4 parts:
https://artificial-mind.blogspot.com/2010/01/semantic-analysis-of-sentence.html
https://artificial-mind.blogspot.com/2010/02/causes-and-reasons-for-any-particular.html
https://artificial-mind.blogspot.com/2010/02/motivation-is-dependent-on-local-and.html
https://artificial-mind.blogspot.com/2010/02/intelligence-search-for-biggest.html
TOUM continues with "Universe and Mind 6" (not published yet).
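To make the "cut" concrete, here is a minimal, hypothetical sketch (not Friston's actual formalism) of a Markov blanket in a directed graphical model: the blanket of a node is its parents, its children, and its children's other parents - the boundary beyond which the rest of the network is ignored when reasoning about the node.

```python
# A minimal sketch (hypothetical, not Friston's formalism): the Markov
# blanket of a node in a Bayesian network is its parents, its children,
# and its children's other parents. Conditioned on the blanket, the node
# is independent of everything else - the "cut" where explanation stops.

def markov_blanket(node, parents):
    """parents: dict mapping each node to the set of its parent nodes."""
    blanket = set(parents.get(node, set()))           # the node's parents
    children = {c for c, ps in parents.items() if node in ps}
    blanket |= children                               # the node's children
    for c in children:                                # the children's co-parents
        blanket |= parents[c]
    blanket.discard(node)
    return blanket

# A toy causal graph: rain -> wet_grass <- sprinkler, rain -> traffic.
parents = {
    "rain": set(),
    "sprinkler": set(),
    "wet_grass": {"rain", "sprinkler"},
    "traffic": {"rain"},
}
print(markov_blanket("rain", parents))
# -> {'wet_grass', 'sprinkler', 'traffic'}: everything outside this set
#    is irrelevant to "rain" once the blanket is observed.
```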

Compare "Analysis.." and TOUM with the later published theories and ideas, practical work as well (in RL, lately Active Inference, see the post about "Genius" platform from Verses AI), which confirm the claims and reasoning of the former, see:

https://github.com/Twenkid/Theory-of-Universe-and-Mind/

