Tuesday, July 23, 2019


Screencast Story About the World's First University Course in AGI in 2010 in Plovdiv - Artificial General Intelligence



See the Bulgarian version - a different, better version of the story.

A short story about the world's first interdisciplinary University course in Artificial General Intelligence (AGI), presented at Plovdiv University in 2010 and 2011.

See also the strange, funny personal coincidences between me, the author of the first course, and the author of MIT's course; they start around the 5th minute.

Saturday, July 13, 2019


BAIHUI.AI - New Bulgarian AI Start-up

In June, a few colleagues founded an AI start-up with this daring Bulgarian name with a Chinese ring to it. :) So far they have shown that they managed to run a GPT-2 Transformer trained on Bulgarian text, with 1.5 billion parameters - presumably on Amazon's AWS, as they are not rich enough to have the hardware needed to train it otherwise.

They have also released a short animation; it looks like StyleGAN (see www.thispersondoesntexist.com).

Both systems are open source; the first is from OpenAI, the second from NVidia.

https://www.linkedin.com/company/baihuiai

BAIHUI.AI is a Bulgarian AI start-up founded in June 2019. So far they have demonstrated that they managed to train and run the big 1.5-billion-parameter GPT-2 NLP model on Bulgarian texts, as well as, probably, StyleGAN (it seems so): thispersondoesntexist.com.

Sunday, July 7, 2019


MIT's Interdisciplinary Billion-Dollar Computing College - 9-10 Years After Todor's Interdisciplinary Program at Plovdiv University, Etc.

Comments regarding a recent episode of Lex Fridman's AI podcast with Jeff Hawkins of Numenta:

https://youtu.be/-EVqrDlAqYo
Conceptually, Hawkins' approach and ideas still seem to match many of the insights and directions in my "Theory of Universe and Mind" works from the early 2000s (published in "Sacred Computer" - "Свещеният сметач" - before "On Intelligence") and afterwards.

In addition: not following mainstream research, which makes minimal changes that lead to minimal progress on some benchmarks - progress that is assumed good enough by mainstream researchers, published in journals and at conferences etc. Rather, radical jumps are needed, and recently "even the godfathers of the field agreed"...

The building of deep structures; the play with the resolution of perception and control; coordinate spaces as a basis for AGI ("reference frames"); attention traversal across different scales ("time scales" - the resolution of perception and causation within the time dimension); introspection as a legitimate method for AGI research (these are my "behavintrospective" studies); and the view that there should be no separate training and inference stages as in current NNs - they are supposed to be parts of one process. See CogAlg.

One difference, though - he dismisses interdisciplinary research as helpful (although I think they actually do such research). "Human-centered AI" is disliked because it suggests the study of emotions and other human traits which are not needed for AI - "let's just study the brain", etc.

IMO interdisciplinary minds see and understand shortcuts more easily, while others could eventually find these paths only by laborious digging and wandering in seas of empirical data and brute-force search.

https://youtu.be/-EVqrDlAqYo?t=5141

~ 1:25 h

"The new steps should be orthogonal..." (no little changes, "1.1% progress", on standard benchmarks - see Todor's "What's wrong with Natural Language Processing" series)

1:25:41

The Billion dollar computing college of MIT, interdisciplinary, from this fall:
http://news.mit.edu/2018/mit-reshapes-itself-stephen-schwarzman-college-of-computing-1015

https://fortune.com/2018/10/16/mit-college-computing-artificial-intelligence-billion-dollars/


Well: 9-10 years after the course/research-direction program that I announced in late 2009 and presented in spring 2010 at Plovdiv University, with practically zero funding besides the opportunity to present it at my University (thanks to Mancho Manev and Plovdiv University).






Saturday, July 6, 2019


Shape Bias in CNNs for Better Results, Due to a Wrong Texture Bias by Default

In their introduction, the authors of the paper below explain that it has been a common belief in the CNN community that ImageNet-trained neural networks develop a "shape bias" and store "shape representations". They propose the contrary view - that CNNs are texture-biased - and prove it with experiments:

IMAGENET-TRAINED CNNS ARE BIASED TOWARDS TEXTURE; INCREASING SHAPE BIAS IMPROVES ACCURACY AND ROBUSTNESS

To me that texture bias has been obvious - and obviously wrong. CNNs recognize texture features and search for correlations between them; otherwise there wouldn't be adversarial hacks where changing a single pixel ruins recognition, they wouldn't need to be trained on so many examples, and they would recognize wireframe drawings/sketches as humans do, etc.
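To make the texture-vs-shape distinction concrete, here is a toy sketch (my illustration, not from the paper): two synthetic images share the same silhouette but differ in texture, so a crude texture statistic separates them while the shape cue does not. A shape-biased recognizer would treat them identically; a texture-biased one would not.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2   # same silhouette in both images

smooth = np.where(disk, 0.8, 0.0)                                       # flat texture
noisy = np.where(disk, 0.8 + 0.05 * rng.standard_normal((H, W)), 0.0)   # rough texture

def shape_cue(img, thresh=0.4):
    """Binary silhouette: which pixels belong to the object."""
    return img > thresh

def texture_cue(img, mask):
    """Standard deviation inside the object: a crude texture statistic."""
    return img[mask].std()

print(np.array_equal(shape_cue(smooth), shape_cue(noisy)))   # True: same shape
print(texture_cue(smooth, disk) < texture_cue(noisy, disk))  # True: different texture
```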

The "right" recognition would be robust if the system could do 3D-structure-and-light reconstruction ("reverse graphics"), at best incrementally; see:


CapsNet, capsules, vision as 3D-reconstruction and re-rendering and mainstream approval of ideas and insights of Boris Kazachenko and Todor Arnaudov, Sunday, December 31, 2017



Colour Optical Illusions are the Effect of the 3D-Reconstruction and Compensation of the Light Source Coordinates and Light Intensity in an Assumed 2D Projection of a 3D Scene, 1.1.2012  

2012, discussions at AGI List:
AGI Digest: Chairs, Caricatures and Object Recognition as 3D Reconstruction






Developmental Approach to Machine Learning, Dec 2018
https://artificial-mind.blogspot.com/2018/12/developmental-approach-to-machine.html



News: Mathematics, Rendering, Art, Drawing, Painting, Visual, Generalizing, Music, Analyzing, Tuesday, September 25, 2012


[Topology, Vector Transformations, Adjacency/Connectedness...]


https://artificial-mind.blogspot.com/2012/09/news-mathematics-rendering-art-drawing.html


"...Vector transformations

In another "unpublished paper" from a few months ago, which may eventually turn into a digest one day (it's a published email discussion), I explained and shared some elegant fundamental AGI operations/generalizations which are based on simple visual 3D transformations.

"Everything" is a bunch of vector transformations, and the core of general intelligence is the simplest representations of those "visual" representations, which are really simple/basic/general.

And "visual" in human terms actually means just:

Something that encompasses features and operations in 1D, 2D, 3D and 4D (video) vector (Euclidean) spaces, where the vectors in these dimensions usually have a dimensionality of up to 4 or 5, e.g. (Luma, R, G, B)

1D - luminance
2D - luminance + uniform 1D color space
3D/4D - luminance + split/"component" color space

+ Perspective projection, which is a vector transform that can be represented as a multiplication of matrices - that is, the initial sources of visual data have a higher dimensionality than the stored representation: 3D is projected into 2D (a drawback of the way of sensing).

Also, of course, there is topology: humans work fine with blended and deformed images - curved spaces and curves, not simple linear vectors. However, the topology is induced from the basic vector spaces; the simplest topological representation is just the adjacency of coordinates in a matrix.

The above may seem obvious, but the goal is namely to make things as explicit as possible...." 
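In the spirit of making things explicit, here is a minimal numeric sketch (my illustration) of the two points in the quote above: perspective projection as a single matrix multiplication (with an assumed focal length of 1), and matrix adjacency as the simplest topology.

```python
import numpy as np

# Perspective projection as one matrix multiply (assumed focal length f = 1).
f = 1.0
P = np.array([[f, 0, 0, 0],
              [0, f, 0, 0],
              [0, 0, 1, 0]], dtype=float)

point3d = np.array([2.0, 1.0, 4.0, 1.0])   # homogeneous (x, y, z, w)
proj = P @ point3d                         # -> (f*x, f*y, z)
x2d, y2d = proj[:2] / proj[2]              # perspective divide
print(x2d, y2d)                            # 0.5 0.25: 3D projected down to 2D

# The simplest topology: 4-adjacency of coordinates in a matrix.
def neighbours4(i, j, h, w):
    """In-bounds 4-neighbours of cell (i, j) in an h x w matrix."""
    return [(i + di, j + dj)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= i + di < h and 0 <= j + dj < w]

print(neighbours4(0, 0, 3, 3))             # a corner cell has only 2 neighbours
```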

+

Sunday, April 1, 2012


https://artificial-mind.blogspot.com/2012/04/jurgen-schmidhuber-talk-on-ted-creative.html
"Todor: And it takes many months to get to 3D vision, to increase resolution and to develop 3D reconstruction in the human brain. That adds ~86,400 "cycles" per day and 31,536,000 per year.
What computing power is needed?

I don't think you need millions of the most powerful GPUs and CPUs at the moment to beat human vision - we'll beat it pretty soon. A lot of the higher-level intelligence is, in my estimation, of very low complexity (behavior, decision making, language at the grammar/vocabulary level) and would need a tiny amount of MIPS, FLOPS and memory. It's the lowest levels that require vast computing power - 3D reconstruction from one or many static or moving 2D camera sources, transformations, rotations, trajectory computations etc. - and those problems are practically being solved and implemented...."

Wednesday, July 3, 2019


Cognitive Science's Failure to Become an Interdisciplinary Field - the Multi-Interdisciplinary Blindness

Discussion of mine regarding the paper:

Perspective | "What happened to cognitive science?", 10 June 2019


https://www.nature.com/articles/s41562-019-0626-2

Available on GitHub: https://github.com/rdgao/WH2CogSci/blob/master/nunezetal_final.pdf

https://www.facebook.com/groups/RealAGI/permalink/1230556097152988/

IMO a big share of the problem lies in the overall intellectual direction of researchers in academic circles (and in power-and-profit-driven societies, modern slavery): knowledge and world views are narrow, specialization is promoted, and the ones who obey and execute the instructions of their superiors climb the ladder and become leaders, then do the same. The creative, wide-minded and really original ones are not the leaders of the research.*

That is related to multi-interdisciplinary blindness, which in turn is related to insufficient working-memory capacity and faculties for understanding and representing inputs generally enough that one can encompass concepts from different domains and contexts and think of them together.

BK calls it too simply "depth of structure".



* In the past there were exceptions, such as Alan Kay
** That survey paper reminds me of my series "What's wrong with Natural Language Processing" from some 10 years ago, because to me NLP/Computational Linguistics should have been a part of AGI, not what they were.

** Sorry for the sick formatting; I had to write this in an external editor etc. - this one is annoying. But never mind for now.

See elaborate related discussions:

#1

Circa 2009-2010 - a series of 3 "perspective" articles



What's wrong with NLP, part I:

http://artificial-mind.blogspot.com/2009/02/whats-wrong-with-natural-language.html

Monday, March 23, 2009


http://artificial-mind.blogspot.com/2009/03/whats-wrong-with-natural-language.html



Note: NLP now shows impressive results in NLG (generation), BERT etc., with such "mindless" vector representations, using the current methods of machine learning - convolutions, "transformers" etc. However, it probably more or less emulates virtual sensory-motor interactions by traversing and comparing huge corpora: how different texts/segments (mappings of sensory records) map and relate to each other, and what is reasonable in what context. It is more advanced than the earlier simple frequency-based and inverse-document-frequency representations - the frequency/probability of a token in the current document compared to its average in the other documents, etc.
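That earlier frequency-based scheme is essentially TF-IDF. A minimal sketch (illustrative toy corpus, not tied to any particular library):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Toy TF-IDF: token frequency within a document, weighted by
    how rare the token is across the whole collection of documents."""
    n = len(docs)
    df = Counter()                      # document frequency per token
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        total = len(d)
        scores.append({t: (tf[t] / total) * math.log(n / df[t]) for t in tf})
    return scores

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
s = tf_idf(docs)
# "the" appears in every document -> idf = log(3/3) = 0, so it scores 0;
# "dog" appears in only one document -> it scores high in that document.
print(s[1]["the"], s[1]["dog"])
```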

#2 Friday, January 1, 2010

I will Create a Thinking Machine that will Self-Improve 

An Interview with Todor, "Obekty" magazine, issue November-December 2009

http://artificial-mind.blogspot.com/2010/01/i-will-create-thinking-machine-that.html   


- Where should researchers' efforts be focused in order to achieve Artificial General Intelligence (AGI)?

First of all, research should be led by interdisciplinary scientists who see the big picture. You need to have a grasp of Cognitive Science, Neuroscience, Mathematics, Computer Science, Philosophy etc. Also, the creation of an AGI is not just a scientific task; it is an enormous engineering enterprise – from the beginning you should think of the global architecture and of universal methods at the low level which would lead to an accumulation of intelligence during the operation of the system. Neuroscience gives us some clues; the neocortex is “the star” in this field. For example, it is known that the neurons are arranged in a sort of unified modules – cortical columns. They are built of 6 layers of neurons, and different layers have some specific types of neurons. All the neurons in one column are tightly connected vertically, between layers, and process a piece of sensory information together, as a whole. All types of sensory information – visual, auditory, touch etc. – are processed by the interaction between these unified modules, which are often called “the building blocks of intelligence”.

- If you believe that it is possible for us to build an AGI [since you do believe], why haven't we managed to do it yet? What are the obstacles?

I believe that the biggest obstacle today is time. There are different forecasts - 10, 20, 50 years - to enhance and refine current theoretical models before they actually run, or before computers get fast and powerful enough. I am optimistic that we can get there in less than 10 years, at least to basic models, and I'm sure that once we understand how to make it, the available computing power will be enough. One of the big obstacles in the past was maybe the research direction - top-down instead of bottom-up - but this was inevitable due to the limited computing
(...)


#3

Tuesday, August 27, 2013

Issues on the AGIRI AGI email list and the AGI community in general - an Analysis

https://artificial-mind.blogspot.com/2013/08/issues-on-agiri-agi-email-list-and-agi.html
"- Multi-intra-inter-domain blindness/insufficiency [see other posts from Todor on the list] - people claim they are working on understanding "general" intelligence, but they clearly do not display traits of general/multi-inter-disciplinary interests and skills.


[ Note: Cognitive science, psychology, AI, NLP/Computational Linguistics, Mathematics, Robotics … – sorry, that's not general! General is being adept, fluent and talented in music, dance, visual arts (all), acting, story-telling, and all kinds of arts; in sociology, philosophy; sports … (…) ... + all of the typical ones + as many as possible other hard sciences and soft sciences and languages, and that is supposed to come from fluency in learning and mastering anything. That's something typical researchers definitely lack, which impedes their thinking about general intelligence. ](...) "

Discussion:
http://artificial-mind.blogspot.com/2014_08_05_archive.html



Tuesday, August 5, 2014


#4

The Super Science of Philosophy and Some Confusions About it - continuation of the discussion on the "Strong Artificial Intelligence" thread at G+


Tuesday, July 2, 2019


SuperCogAlg & CogAlg Frame Blobs Visualisations | SuperCogAlg - an Algorithm for Artificial General Intelligence

More recent visualisations from June: the completed primary bottom-up segmentation in the C++ version. Now working on deeper structures. I'm looking for partners and cofounders.

More recent pictures from the work of the prototype of an algorithm for artificial general intelligence - incrementally self-building unsupervised machine learning. For now it looks like computer vision: outlining and separating parts ("clustering" and segmentation). "SuperCogAlg"* is in C++, unlike the system it branched from - CogAlg, which is in Python.

*Super... There is another name too, but I will announce it later - for now this one came to mind because of "Super Contra", from which the frame below is taken.












Illustration of the scanning, segmenting and merging process - the so-called blob formation, the first level of the 2D version of CogAlg.
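The blob formation in the caption can be sketched, very roughly, as a same-sign flood fill over a sign map. This is only my toy illustration, not the actual CogAlg/SuperCogAlg code, which scans line by line and also accumulates per-blob parameters while merging:

```python
import numpy as np
from collections import deque

def form_blobs(sign_map):
    """Group 4-connected same-sign pixels into labeled blobs (flood fill)."""
    h, w = sign_map.shape
    labels = -np.ones((h, w), dtype=int)   # -1 = not yet assigned
    blob_id = 0
    for i in range(h):
        for j in range(w):
            if labels[i, j] != -1:
                continue
            q = deque([(i, j)])            # seed a new blob here
            labels[i, j] = blob_id
            while q:
                y, x = q.popleft()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] == -1
                            and sign_map[ny, nx] == sign_map[y, x]):
                        labels[ny, nx] = blob_id
                        q.append((ny, nx))
            blob_id += 1
    return labels, blob_id

sign_map = np.array([[0, 0, 1],
                     [0, 1, 1],
                     [1, 1, 0]])
labels, n = form_blobs(sign_map)
print(n)  # 3 blobs: the top-left zeros, the connected ones, the isolated zero
```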



