Sunday, August 9, 2020

"Symbol Grounding" and Searle's-Analytical philosophy's sophystry - Sophystrosophy

Comment on the question posted in the Real AGI group:


Adam Ford @ 
Real AGI: On a scale of 1 to 10, how worried are you about the symbol grounding problem?

In attempts to develop real AGI, can we do without addressing the grounding problem directly (like we didn't understand combustion before we made use of fire), or do we need to address it, and if so why? (...)

https://psychsciencenotes.blogspot.com/2016/02/how-worried-are-you-by-symbol-grounding.html 
There is currently no solution to the problem of endowing a mental representation symbol system with content/meaning/intentionality that doesn't involve that meaning to have come from somewhere else. If the meaning is not intrinsic to the system's form (Bickhard, 2009, calls this being 'internally related') then the meaning has to come from something else, but then how did that thing get its meaning, and so on....it quickly becomes turtles all the way down. This means that mental representations cannot do the things they need to do in order to play the role they need to play in our cognitive economy to make us functional, intentional beings and not philosophical zombies.
"On a scale of 1 to 10, how worried are you about the symbol grounding problem?"

TODOR: ZERO. That is not a problem. One of the comments on the linked post states it well: the task is "to symbolize the ground", not to "symbol ground"; though IMO it could grow in both directions (given that the "seeder" has both "sky" and "ground"). Also, any representation in the physical world exists as some kind of physical state (at the lowest level of representation), so it is never merely an (abstract) "representation" - except maybe in a solipsist mind that doesn't realize that data in computer memory is encoded as "real" low-level capacitances or electrical charges, and that his own memories are likewise encoded as explicit chemical and physical structures, so they "do exist" in that sense as well. How they "feel" is another problem, that of "qualia".
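To make the point about levels of representation concrete, here is a minimal Python sketch (mine, not from the quoted post): the "same" datum exists simultaneously as an abstract character, an integer code, a bit pattern, and, below the program's reach, as physical charge states in hardware:

```python
# One "abstract" symbol...
symbol = "A"

# ...is also an integer code point...
code = ord(symbol)          # 65

# ...which is also a pattern of bits...
bits = format(code, "08b")  # "01000001"

# ...which the hardware in turn stores as transistor/capacitor charge
# states. The "representation" is never only abstract; at the bottom
# it is always physics.
print(symbol, code, bits)
```

The same layering applies to biological memory: the "abstract" recollection is, at the lowest level, a chemical and physical structure.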

The "symbol grounding" IMO is felt as a problem for research with multi-interdisciplinary/multi domain blindness and poor introspection, generalisation skills and memory capacity. The question in the quoted post also "segregates" ("are you cognitivist, psychologists") - yes, *because* one has narrow understanding, that's one reason to see it fragmentary. One of the authors of the blog states in her intro: " to most work on cognition. I see dynamical systems / embodied cognition as an alternative to representation"
That's an unnecessary division: embodiment doesn't exclude representation, and dynamical systems also have representations; theirs are just "more dynamical", more "physical" in one sense or another. As stated, though, it implies that "representation" means only one special kind of "very abstract" representation (like "notation" on paper, or "code", which is an abstraction). In order to become real, a representation always has to be converted to the lowest level. As for Searle's Chinese Room, again (I explained this even as a 17-year-old): it is BS, a sophistry. Humans are also "in a Chinese Room" and are zombies in a materialistic rendering; and also: what is a "human", and what exactly is "YOU" at all?

In any system there are smaller components/parts which are "less than the whole" (a tautology indeed): neurons, neural circuits, glia, muscles etc. They are not supposed to "understand" language; they also "just do what they are instructed to do" by their physiology, physics and chemistry, and there are some parts which could be interpreted as "just reading the instructions" (which humans often DO as a whole, too). There is also a global POV where it is so: we do not understand everything well enough. Another mind, an observer, summarizes or classifies what it perceives as "understanding" or whatever. There is a confusion of levels of abstraction: talking about one thing at a higher abstraction (thought, understanding) and then comparing it to something which is way more specific ("mere operation with symbols", shuffling cards). In a more general view, that is an example of the worse parts of the "Analytical" or whatever philosophy - "sophystrosophy", aiming at confusing. (See also "counter-factual thinking".) The still-cited superficial Turing test, referred to in the shared video, is also like that.

The AI shouldn't have to "convince" humans that it is a human in order to be considered "intelligent"; it is not a "salesman", a sophist/cheater or something. True philosophy is a search for reasons and for the truth, not for convincing; the latter is advocacy or sophistry, by definition.
BTW, "Symbol" is a kind of BS term in general. It's the "sign", or if you take the Wiki definition: https://en.wikipedia.org/wiki/Symbol "something which stands for something else" - i.e. a redirection, a pointer. Mind obviously works and initiates its work rather with sensory-motor records, generalisations, patterns, concepts, associations, trains/paths of thought, predictions, which are richer than the "picture" on a key of the keyboard or a road sign etc. The syntax, semantics, pragmatics of the "symbols" is important, not the "symbols" themselves, taken abstractly. "Symbol" alone is a vague "thing" without meaningful content, an element from a set? - too meaningless - a number in a list is a "symbol".