Thursday, April 18, 2019


Collection of Best papers in Computer Science conferences since 1996

An interesting historical resource, and one for getting general insight into the trends - good job by the author of the list. I notice that it's missing the AGI conference though, if it counts as CS.

https://jeffhuang.com/best_paper_awards.html?fbclid=IwAR0Yc_0K689o8OhmfB0oUI5VKcWg6bg3MuG95yHgcyIc8TU5ItkMw4IzKj4

Thanks to the source: https://www.facebook.com/groups/MontrealAI/permalink/631419193986583/

Sunday, April 14, 2019


Neural Networks are Also Symbolic - Conceptual Confusions in ANN-Symbolic Terminology

Todor's remark to:
G.H. - On the Nature of Intelligence, at the Creative Destruction Lab
Published on 1 Nov 2018
This talk is from the Creative Destruction Lab's fourth annual conference, "Machine Learning and the Market for Intelligence", hosted at the University of Toronto's Rotman School of Management on October 23, 2018.
https://youtu.be/MhvfhKnEIqM

Hinton's message that there should be processing at different time scales is correct, but already known and obvious. In my "school" it is about working at different resolutions of perception and causation-control; it is also in Hawkins's work, in Friston's, and in Kazachenko's CogAlg.

I'd challenge the introduction in a funny way though, although I realize that the context is in fact a silly pitching session, perhaps with a clear purpose: note the title "Market for Intelligence" and the "Management" venue. The message is "don't fund symbolic methods, fund mine".

The division between "symbolic" and "vector" seems sharp, but semantically it is defined on confused grounds and confused concepts.


With "symbolic" they apparently mean some kind of "simple" ("too obvious" or "fragile") logical programming like "Prolog" or something like that and something with some kind of "short code" or "obvious" and "simple" relations, lacking hidden layers, or shallow etc. that "doesn't work".

While with "non-symbolic" they address something with "a lot" of numerical parameters and calculations which are not obvious and also, likely, are not understood - by the designers as well. Ad-hoc mess for example, and the job of  "understanding" is done by the data themselves and the algorithm.

That doesn't make them "non-symbolic" though; even in that messy state they are symbolic.

Let's investigate a supervised TensorFlow convolutional NN, or one built with whatever other high-performance library.

The user TensorFlow code is developed in Python (a symbolic formal language).
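For illustration, here is a minimal sketch (my own toy example, not code from the talk) of what such user-level code looks like - every line of it is an expression in a symbolic formal language:

```python
# A toy, hypothetical user-level TensorFlow/Keras model definition:
# the whole network is specified in symbolic Python expressions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```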

The core is developed in C/C++ and CUDA for the GPU (also C/C++) - pretty symbolic, abstract and considered "hard".

Data is represented as numbers (sorry, numbers are also symbols; reality is turned into numbers by machines and by our minds).

The final classification layer consists of a set of artificial labels - symbols - which lack internal conceptual structure, except for the users - humans - who at that level operate with "symbols", i.e. abstract classes.
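A hedged sketch of that final step, with made-up class names: the network's numeric output is collapsed into a discrete label, a symbol that only the human user interprets.

```python
import numpy as np

# Hypothetical class names; the network itself knows nothing of their meaning.
class_names = ["cat", "dog", "car", "plane"]

# A made-up softmax output of the final classification layer.
probabilities = np.array([0.05, 0.80, 0.10, 0.05])

# argmax collapses the distributed numeric output into a single symbol.
predicted_symbol = class_names[int(np.argmax(probabilities))]
print(predicted_symbol)  # "dog"
```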

The mathematical representations of the NN are of course also "symbolic". Automatic differentiation, gradient descent, the dot product - these are "symbolic"; they rely on an abstract language, namely mathematical symbols, to be expressed (at the level of representation of the developers).
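For example (a small sketch assuming the TensorFlow 2.x GradientTape API), even the "non-symbolic" training step is driven by symbolically stated expressions and their derivatives:

```python
import tensorflow as tf

w = tf.Variable(3.0)

# Automatic differentiation of a symbolically stated expression:
# loss = (w - 1)^2, so d(loss)/dw = 2*(w - 1).
with tf.GradientTape() as tape:
    loss = (w - 1.0) ** 2

grad = tape.gradient(loss, w)
print(float(grad))  # 4.0

# One step of gradient descent - again a symbolic update rule: w <- w - lr * grad
w.assign_sub(0.1 * grad)
```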

Any developed intelligence known to us eventually reaches symbolic hierarchical representations, not just a mess of numbers. Some kind of classification, ranking and digitized/discrete sampling is required to produce distinguishable "patterns" and to take definite decisions.

NNs are actually not just a mess of numbers either - there's a strict structure defining which filter is computed at which layer, what is multiplied by what, etc.
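A bare-bones sketch of that structure (plain NumPy, invented shapes): exactly which matrix multiplies which activation is pinned down in advance.

```python
import numpy as np

rng = np.random.default_rng(0)
x  = rng.random(4)         # input vector
W1 = rng.random((8, 4))    # layer-1 weights
W2 = rng.random((3, 8))    # layer-2 weights

# The forward pass has a fixed, inspectable order of operations,
# not a formless mess of numbers.
h = np.maximum(W1 @ x, 0)  # layer 1: ReLU(W1 x)
y = W2 @ h                 # layer 2: W2 h
print(y.shape)             # (3,)
```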

"Vectors" are mentioned here. Reality and brain are not vectors. We could represent images, models of reality, using our abstractions of vectors, matrices etc., and if we put them in appropriate machinery etc., it could then reproduce some image/appearance/behavior etc. of reality.

However brain and neurons are not vectors.

Also, when talking about "symbols", let's first define them precisely.

It is not only the simplest classical/Boolean logic of "IF A THEN B" that is "symbolic"...

What is not "symbolic" in Neural Networks is the raw input, such as images, while the input to "classical" symbolic AI algorithms such as the ones for logical inference in PROLOG or simple NLP using "manual" rules the input is regarded as "symbolic" - text*, not representing full images with dense spatio-temporal correlations etc.

This however doesn't imply that the input can't produce "symbolic" incremental intermediate patterns by clustering etc. (where "symbolic" is, say, an address of an element within a class of possible higher-level patterns in a given sensory space, as in classification - e.g. recognition of simple geometric figures such as a small blob, line, angle, triangle, square, rectangle etc.).
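A hedged illustration of that idea (toy data and prototypes invented for the example): raw numeric points get the address of their nearest prototype - a discrete, symbol-like intermediate pattern derived from sub-symbolic input.

```python
import numpy as np

# Hypothetical "raw" 2-D feature points (e.g. from small image blobs).
points = np.array([[0.1, 0.2], [0.0, 0.1], [5.1, 4.9], [4.8, 5.2]])

# Fixed toy prototypes standing in for learned higher-level patterns.
prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])

# Each point is assigned the index (address) of its nearest prototype:
# a discrete, symbol-like label produced from continuous input.
ids = np.argmin(((points[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1), axis=1)
print(ids)  # [0 0 1 1]
```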

[* NOTE, 26.4.2023: Also, the above doesn't prevent such "symbolic" input from representing dense vectors and images, simply by describing the format and content of each pixel etc., together with a representation of the structure and its proper interpretation, i.e. "serialization" and "deserialization", compression-decompression (re-representation). See an example in "Chairs, buildings, caricatures, ... /AGI Digest 2012" about different levels of generalization and detail in natural language and in other, more specific representations: https://artificial-mind.blogspot.com/2017/12/capsnet-capsules-and-CogAlg-3D-reconstruction.html ]


* Other more suggestive distinctions

- Sensori-motor-grounded vs ungrounded cognition/processing/generalization.
- Embodied cognition vs purely abstract, ungrounded cognition etc.
- Distributed representation vs fragile, highly localized dictionary representation.

"Connectionism" is popular, but a "symbolic" (a more interpretable one) can be based on "connections", traversing graphs,  calculations over "layers" etc. and is supposed to be like that - different types of "deep learning".

The introduction of Boris Kazachenko's AGI Cognitive Algorithm emphasizes that the algorithm is "sub-statistical", a non-neuromorphic, comparison-first deep learning, and that it should start from raw sensory data - symbolic data should come next. However, this is again about the input data.

Its code forms hierarchical patterns with definite, traversable and meaningful structures, with definite variables which refer to concepts such as match, gradient, angle, difference, overlap, redundancy, predictive value, deviation from template etc. of the real input or of lower- or higher-level patterns. To me these are "symbols" as well, thus the algorithm is symbolic (as is any coded algorithm), while its input is sub-symbolic, as is required for a sensori-motor-grounded AGI algorithm.
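A loose, hypothetical sketch (not Kazachenko's actual code or variable set) of what such a traversable pattern with named variables might look like - which is why I call it symbolic:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical illustration only - not CogAlg's actual data structures.
@dataclass
class Pattern:
    match: float        # accumulated similarity to the compared input/template
    difference: float   # accumulated difference
    gradient: float     # local gradient magnitude
    angle: float        # gradient angle
    sub_patterns: List["Pattern"] = field(default_factory=list)

def traverse(p: Pattern, depth: int = 0) -> None:
    """Walk the hierarchy: every level has named, interpretable variables."""
    print("  " * depth + f"match={p.match:.2f} diff={p.difference:.2f}")
    for sub in p.sub_patterns:
        traverse(sub, depth + 1)

root = Pattern(match=3.2, difference=0.4, gradient=1.1, angle=0.5,
               sub_patterns=[Pattern(1.0, 0.1, 0.9, 0.3)])
traverse(root)
```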

See also XAI - explainable, interpretable AI - which aims at making NNs "more symbolic" and at bridging the two. The Swiss startup DeepCode explains its success by the combination of "non-symbolic" deep learning with programming-language technologies for analysis, such as parsing - i.e. clearly "symbolic" structures.



Saturday, April 13, 2019


DeepCode's Martin Vechev's recent interview

An interview from March 2019 on code synthesis, automatic code reviews, suggestions for improvement etc., and the product they already offer:

https://sifted.eu/articles/ai-is-coming-for-your-coding-job/

Demo of TensorFlow suggestions

As for Vechev's claims at the end about what is hard to automate (maybe in order not to offend developers too much), and the explanations - in another related article in French, "Computer programmers are approaching their end": https://www.lesechos.fr/tech-medias/intelligence-artificielle/informatique-les-codeurs-programment-ils-leur-fin-239772 - that sophisticated software such as word processors, with 25 or 45 million lines of code etc., is not supposed to be coded automatically:

I challenge some of these claims about the difficulty of automation; it could be done with focus and clever meta-design and mapping to sensori-motor spaces, and it would work incrementally even without neural nets and brute-force-like search over "everything ever written".