Friday, August 9, 2024


CogAlg as a "brand" name was created by Todor/The Sacred Computer: Part I, first use in 2010

This is how the "brand" "CogAlg" (the name of one Computer Vision/AGI project) was coined by me in 2010, when I was pushing the author of the "Cognitive Algorithm", one of the projects which repeat fragments of Theory of Universe and Mind. In B.K.'s case the ideas were expressed in the most obfuscated language, and only my superhuman linguistic abilities and understanding allowed me to read through the mixture of "meaningless generalities" and abstract concepts that others saw and kept seeing years later (maybe also now, with the much more evolved state of the so-called "write-up", with references etc.).

See a snapshot of that "AGI theory" as it was in Jan 2011:
https://web.archive.org/web/20110223070658/https://knol.google.com/k/intelligence-as-a-cognitive-algorithm#
Original (outdated Google service): https://knol.google.com/k/intelligence-as-a-cognitive-algorithm 

More about the "brand", as well as more advanced technical/theoretical material and milestones related to this project, may be published in future posts - if I find a time slot for it or if anyone cares (no one does, of course, LOL).

Note also the brand SuperCogAlg, which has been on hold after a short period of hacky development in mid-2019 and early 2020. https://github.com/Twenkid/SuperCogAlg

...


Warm-up
Todor Arnaudov 
Tue, Dec 14, 2010, 6:39 PM
to Boris


Hi Boris,


Lately I've been spending some time on warming up for the course [the second iteration of the AGI course] - [I] got some basic insights and realized what "output" in the summer thread meant - now I see there's an even more updated knol. "Output" is the output of the input queue - the oldest item, going to next-level processing; "next comparison cycle" is applying the next kind of comparison formulas with additional M/m derivatives...
This is probably of too low standard yet, but if you care:

>>>


On ambiguity: "output" as future inputs made sense with "next comparison cycle" as the next *sampling clock* of inputs, and "output" as a prediction of this future item... BTW, I don't think pure textual English is practical, unless ambiguity is a filter, part of the challenge - I'm drawing diagrams. It was also not clear (to me) whether Q(i) and Q(t) are the same or separate queues - whether "i" and "t" are labels for separate queues or indexes of items in one single queue - and whether Q(o) is from another FIFO or an item from an "ultimate queue" (the first one).


Anyway, I think there could be a separate Q(t) - e.g. holding buffered confirmed templates (not in a slower buffer), while Q(i) holds strictly the most recent ones. It was also ambiguous whether "last" in "lt" means "just before the input" (not very meaningful in itself, but it is one clock before the input - the smallest step of analysis), or the oldest one in the time slice for this level.
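
[A minimal retrospective sketch of how I read the queue notation at the time - the class, the names and the max-length policy are my assumptions, not B.K.'s definitions:]

from collections import deque

class LevelQueue:
    # Hypothetical reading of the knol's notation: one FIFO per level, where
    # Q(i) is the newest (input) item, Q(t) the buffered templates in between,
    # and Q(o) the oldest item, which is "output" to the next level.
    def __init__(self, max_len):
        self.items = deque(maxlen=max_len)   # Q(t): buffered items/templates

    def push(self, new_input):
        output = None
        if len(self.items) == self.items.maxlen:
            output = self.items[0]            # Q(o): the oldest item leaves the queue...
        self.items.append(new_input)          # Q(i): ...displaced by the most recent input
        return output                         # forwarded to the next level, if any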


BTW, English and text are ambiguous and confusing - I'll probably use engineering-style diagrams to illustrate the concepts.

///


BTW, so far I don't see details about sensory-motor search for "locations", which is important. Of course the first form is raw coordinates in sensory matrices, plus timestamps (these should be sent up through the queues as well), in order to recognize inputs from the same moment with different coordinates and different sensory modalities (aggregation could be done here - lowering the precision of the timestamp value).


"Queue item" = input + derivatives (ever longer line) + timestamp(s)


I think it would be useful for a dedicated "navigation subsystem" to exist; it could align the *motor* outputs to the sensory inputs, and eventually more timestamps (such as the delay since the previous motion of a particular effector) and parameters similar to "head direction cells" in the hippocampus, in order to be able to learn to compensate inputs with correlated motor outputs.


About dealing with vertical lines - it should be about inclusion of the input coordinates in the comparison formulas: they have to be passed through the hierarchy along with the items (you mention, as a mistake in HTM, not taking coordinates into consideration), and the coordinate field itself could be processed as an input in comparison.


>As proposed above, the most basic evaluation by feedback is subtraction of higher-level average value (match), multiplied by 
redundancy of the output.


I guess this is a clue to the answer to the question: "how would M( lt x Q(o) ): next comparison cycle, be different from M( lt x Q(t) + lt x Q(i) ): past comparison cycle, for the same resolution & power?"
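
[A one-line sketch of the quoted evaluation as I read it - whether the redundancy multiplies only the subtracted average or the whole difference is itself ambiguous in the text; here it weights the difference:]

def evaluate_output(match, higher_level_avg_match, redundancy):
    # Output's match minus the higher-level average match,
    # weighted by the redundancy of the output (my reading).
    return (match - higher_level_avg_match) * redundancy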


Another ambiguity (in my interpretation) is whether "input" is one iB, or a line with incrementing coordinates - the whole coordinate resolution, or sections of it. Which one is true may determine what "distance between inputs" means - it can be temporal (depth in the queue) or spatial (within the line), because there could be patterns in both directions. "Incrementing" when selecting inputs can also go in both directions: longer spatial sequences within a line of input, or a longer temporal sequence as the number of items in the queue above a threshold, or both.


Also, when an item or items are selected for "output", to go to the higher-level queue, what about recording the content of the whole queue for reference? (Eventually some of the records might be flushed.) If the higher levels work in parallel, processing all data in one sampling clock like the lowest level - OK, but I guess higher levels are more likely to be slower, and I suspect there could be "glitches" and de-sync. And generally, in order to predict you need to know not only the template (the key/clue), but also the "future" - the following inputs.


Starting with a separate queue for each coordinate may be too expensive; I think of a thread of aggregation/"resampling" to lower resolution as a start, because this is a simple form of generalization. The first pass is about the average intensity ("brightness") of the whole line, with pattern resolution 1x1 (disregarding sub-sections). Then sub-sections within the line are recognized (using higher-degree derivatives, the longest sequence without a change of sign, etc.), and gradually the coordinate and intensity resolution (I think you called it input resolution) of the recognized pattern within the sensory field is incremented, and more subtle details are taken as clues suggesting the existence of the bigger patterns (more "powerful" comparison). While splitting the line (and aggregating it with other lines), smaller patterns are recognized and recorded accordingly.
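
[A sketch of those first two passes as I imagined them - plain Python, the function names are mine:]

def first_pass_average(line):
    # First pass: reduce the whole line to a single 1x1 "pattern" -
    # its average intensity ("brightness").
    return sum(line) / len(line)

def split_by_derivative_sign(line):
    # Second pass: split the line into sub-sections as the longest runs
    # without a change of sign in the first derivative.
    sections, current = [], [line[0]]
    prev_sign = None
    for prev, cur in zip(line, line[1:]):
        d = cur - prev
        sign = (d > 0) - (d < 0)
        if prev_sign is not None and sign != prev_sign and sign != 0:
            sections.append(current)   # close the run when the slope flips
            current = []
        current.append(cur)
        if sign != 0:
            prev_sign = sign
    sections.append(current)
    return sections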


Infants seem to first see big (covering a lot in global coordinate resolution), but otherwise low-resolution/low-frequency patterns, not small points and lines. (However, I guess this may just be because data from the lower levels gets attenuated too fast, before reaching the processing related to demonstration of understanding and motor reaction.) Also, regarding the basic meaningful global information from audio input: I suppose what is picked up first is not which frequency is active, but the intensity of the sound within the whole range - mammals react to loud noises. I suppose this happens even without the neocortex (it's part of the "restart" of the neocortex, though), but the auditory system also learns to discriminate frequency with ever higher precision in space and time, starting from just the recognition that there is any sound.


I guess the initial discrimination is in time - the length in "clocks" (in the brain I guess ~1/40 s or something) of activation of the sound input around (matching a template) or above a certain intensity throughout the whole frequency range. Then it is in "space" - the intensity of, say, two or N frequency sub-ranges in parallel; then time+space - correlations between multiple frequency sub-ranges in multiple time-ranges, including more correlations (more higher derivatives/parameters/timestamps/"period-stamps") covering a larger time-span.
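
[A sketch of that "time first, then space" idea - my illustration, not from the knol:]

def active_run_lengths(intensity, threshold):
    # "Time" discrimination: lengths, in clock ticks, of runs where the total
    # sound intensity (over the whole frequency range) stays above a threshold.
    runs, current = [], 0
    for x in intensity:
        if x >= threshold:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

def band_intensities(spectrum, n_bands):
    # "Space" discrimination: total intensity in N frequency sub-ranges,
    # computed in parallel over the spectrum (the last band absorbs any remainder).
    size = len(spectrum) // n_bands
    bands = [sum(spectrum[i * size:(i + 1) * size]) for i in range(n_bands - 1)]
    bands.append(sum(spectrum[(n_bands - 1) * size:]))
    return bands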


Todor

PS. Regarding the concept of the conserved core (I've missed the comment) - sorry, but I don't think I misunderstood it; I noticed now that you've explained it more directly in the beginning, but I think it was straightforward anyway - there are no "hidden variables" and ambiguity like in CogAlg.