Thursday, December 30, 2021


The Free Bulgarian Video Editing and Effects Program - Twenkid FX Studio Alpha 0.1, First Release



The video has Bulgarian audio. A general introduction and presentation of the program Twenkid FX Studio Alpha 0.1 - the December 2021 release. Twenkid Studio is perhaps the only YouTube channel that has its own software for editing and visual effects. It is perhaps the most compact system with such capabilities - the installation file is only 300 KB; the manual is 2 MB because of the many screenshots. Sequels with a detailed user guide, tutorials etc. will follow. For now: info, the manual (in English) and the download of the program: http://twenkid.com/fx https://github.com/Twenkid/Twenkid-FX-Studio/ If you like the channel, support it by subscribing, sharing, commenting and liking!


Wednesday, December 29, 2021


First Release of Twenkid FX Studio Alpha 0.1 - Free NLE Video Editor and Effects Software

Finally, a release of the software...
See more in the manual and in the intro video, more to be made.
http://twenkid.com/fx

Free Bulgarian software for video processing, video editing and visual effects, 2010-2021.
Version 0.1 alpha.


Friday, December 3, 2021


Likes on Facebook and Social Networks - What Do They Actually Show and Are They Objective? Part One


I read and explain the article "Disadvantages of Like/Dislike Voting in Web 2.0 and Social Media and Various Defects in Public Rating and Ranking Systems"

https://youtu.be/nWX4myd75M4

https://youtu.be/BEwPzDYAYsk


From the blog:

Disadvantages of Like/Dislike Voting in Web 2.0 and Social Media and Various Defects in Public Rating and Ranking Systems | In Bulgarian - Issues with Like-Dislike Voting Ranking Systems - a translation from the original work in English | Artificial Mind - Interdisciplinary Research Institute (artificial-mind.blogspot.com)

Likes on Facebook and Social Networks - What Do They Actually Show and Are They Objective? Part One

Part 1. I read and explain the article "Disadvantages of Like/Dislike Voting in Web 2.0 and Social Media and Various Defects in Public Rating and Ranking Systems".


0:00 Introduction

0:20 Subscribe, like, support

0:30 Announcement of a series on transhumanism and cosmism - a response to the accusations by Prof. Ivo Hristov, Ivan Spiridonov and others - Man and the Thinking Machine, or ...

1:40 A burnt-out Bulgarian cyborg IZOT

1:47 Start of the reading of the article - confused and murky intent and measures, ... erases details

2:12 ... Interdisciplinary - psychology, marketing, sociology, advertising, linguistics, psycholinguistics, choice/decision making, ... also philosophy

3:07 PageRank, ... stars from 1 to 5, ratings, ... YouTube - How exactly do viewers judge how many stars something deserves?

3:54 Thumbs up/down, like/dislike ...

(...)

Why and what do we "like"? What is the function of likes? What do they actually show? Part Two

Part 2. I read and explain the article "Disadvantages of Like/Dislike Voting in Web 2.0 and Social Media and Various Defects in Public Rating and Ranking Systems".


0:00 Introduction - Subscribe, like, support

0:30 Social media and networks, a summary of part one, and what is the purpose of likes?

1:25 The real purpose of likes

2:20 How is it judged what tips the scales towards a like?

3:00 Useful for society - how is it evaluated and by whom?

4:00 Distortions of judgment. Tendencies in choice: the easiest option with the least effort.

4:30 Decent and indecent, scandalous and banal.

5:10 Videos by famous people and by unknown ones - which gets liked?

5:45 Sellers and distribution, commerce

6:00 An example with AMD and Intel - Athlon, Athlon 64, Pentium III, Pentium IV ... Athlon FX processors ...

7:30 The difference in the scale of the competitors

Not everyone votes - some only like, others only dislike

Voting on all criteria, by everyone


(...)

What is the function of likes? Training for performing another action.


2:13 What is society, and what is useful for society? How is it determined?


Like-Dislike-2-Chast-29-11-2021_15-33min_900k_192K_25fps.mp4


Friday, November 12, 2021


AGI-21 Talks - Review/selected talks and workshops

In general, the old ideas are still circulating; I expected something more novel, deeper, or unexpected.

Selected lectures from the sessions:

Day 1:  https://www.youtube.com/watch?v=yky-9rZVZEQ

AGI-21 Conference Day 1 - Room 1 - Scaling up Neural-Symbolic and Integrative AGI Architectures

Temporal and Procedural Reasoning - 15:00 min
- Temporal logic
- Monoaction, diaction, polyaction plans (37:)
- Behavior tree 46: ...

Goertzel's talk 7:04 - General theory of general intelligence


Day 2: 3:20 h
3:22:55 Cognitive Science and the Path to AGI - Joscha Bach. Or around/after ~3:44 h, mostly 3:48 h - 4:03 h. These are mostly general points, but it doesn't hurt to revisit them and find something new.

Day 3

Bengio's lecture.
* 1:32 h
(though the discoveries/principles related to the "consciousness prior" have been well known since at least the early 2000s)

* Mikolov's lecture was superficial for me

* 5:04 h - Jonathan Warrell - an interdisciplinary guy who "started with music" - a good lecture on semantics and logic

...

* Modularity 6:47 h

* Sigma cognitive architecture 8:04 h 


Wednesday, November 10, 2021


Twenkid FX Studio Alpha release - Video Editor and Visual Effects Software Introduction - Part 1





My custom video editing and effects software prototype, which should have first been released 11 years ago after the initial R&D effort, possibly as open source, but (...) This editor was used to produce many videos on my YouTube channel, the most complex of which still looks impressive: the artistic, lyrical "muscle art" music video suite "Star Symphony in Chepelare". Most of it is VFX (cartoon-like), embracing a set of custom visual effects and color grading implemented from scratch; there is a complete director's version, a short version, an action version and a lyrical forest-only version.

https://youtu.be/eUtk5Xxb89A

#videoediting #software #prototypes #art #visualeffects #vfx #video #tutorial




Friday, September 24, 2021


Making of the Visual Effects of the "Dream in a Summer Rain" - Artistic Nature Short Film from the "Wild Plovdiv" series (2019)

The movie is a sweet impressionistic story about the life of a cat "pride" just before, during and just after a short summer storm: what they do, where and how they hide, how they evade the drops, how they fight and rest, etc.

* in Bulgarian: http://twenkid.blogspot.com/2020/09/18-visual-effects-in-dream-in-summer.html


The VFX was unnoticeable by a test viewer (only one, though) and blends well with the painting-like aesthetics of the shot, although it is far from perfect.

In short: a small, smart programmatic trick for removing an unwanted logo from a difficult, highly zoomed-in, unstable shot during hard rain.

In more detail:

* A shot with a big zoom, filmed with an unstable camera and a somewhat noisy picture during hard rain, where the individual drops are visible. What was the goal of the effect? To use a smart method, simple enough to develop, which automatically removes an unwanted supermarket logo that is constantly being crossed by the drops. The retouched shot shouldn't look unnatural, and the drops should be preserved, i.e. their dynamics and look and feel have to be kept.

The making of the shot shouldn't take too much time, because it was only one shot, less than 3 seconds long, and there were 11 other episodes from the "first run" of that series. Yet I didn't want to scrap the shot because, aside from the logo, the rest of the image was picturesque and interesting.

I coded the effect in Python with OpenCV, where technically the latter served only as a video and image I/O interface, because the processing is "manual": the image is accessed pixel by pixel, not with specific cv2 methods. As an idea it was inspired by the VFX I developed for my 2018 fantasy "muscle art" and nature music video suite "Star Symphony in Chepelare", most of which was VFX with a cartoon-like, painting look.

An alternative: if it were not raining and if the camera/shot were stable, the frame could be painted by hand once, by rotoscoping in GIMP or Photoshop, and then the painting multiplied over the subsequent frames. The scene wouldn't be so interesting, though.

Note: The masked logo could be shaded a bit lighter to match the shade of the supermarket façade; however, as painted it is also plausible, since it appears like a continuation of the metal column.

Code: https://github.com/Twenkid/Twenkid-FX-Studio/tree/master/Py/MaskRainyShakyLogo

Python, OpenCV. Purpose: automatically remove a logo in a shaky, rainy, noisy video while preserving the raindrops crossing the logo.
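The core of the trick can be sketched roughly as follows. This is a minimal illustrative sketch, not the actual repository code: the region, facade color and brightness threshold are assumptions. The idea is to repaint the logo area pixel by pixel while keeping the bright raindrop pixels intact:

```python
import numpy as np

def mask_logo(frame, region, facade_color, drop_thresh=180):
    """Repaint the logo area with the facade shade, but keep bright raindrops.

    frame: HxWx3 uint8 image (e.g. a frame read with cv2.VideoCapture);
    region: (y0, y1, x0, x1) bounding box of the logo;
    facade_color: color triple used as the replacement shade;
    drop_thresh: brightness above which a pixel counts as a raindrop.
    """
    out = frame.copy()
    y0, y1, x0, x1 = region
    for y in range(y0, y1):
        for x in range(x0, x1):
            brightness = int(out[y, x].astype(int).mean())
            if brightness < drop_thresh:      # not a bright drop: logo/facade
                out[y, x] = facade_color      # repaint pixel by pixel
    return out
```

Applied per frame, this hides the logo while keeping the dynamics of the drops crossing the masked area; the real shot would also need the facade shade sampled from the surrounding pixels.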

The whole movie: https://lnkd.in/dnV9Q9aF

#opencv #computervision #video #python #programming #animation #art #visualeffects #hacks #painting #naturelover #catsanddogs #animals #plovdiv #bulgaria #bulgarian #vfxartists #specialeffects

Saturday, September 18, 2021


GPT2-Medium Training from Scratch on Google Colab for Any Language - Tips & Tricks by Twenkid

My tips for training on Colab on a Tesla T4 16GB, based on the obstacles I had to overcome. I trained on a custom-built Bulgarian dataset. https://youtu.be/F-Xt-cK4L-g The code is based on Arshabhi Kayal's tutorial: https://towardsdatascience.com/train-gpt-2-in-your-own-language-fc6ad4d60171 However, his example was for local training on a modest Nvidia GeForce RTX 2060 (6 GB), for GPT2-Small (3 times smaller) and a fixed dataset. The code in the experiments in this video was extended and debugged for use in Colaboratory, which has its hurdles, and for gradual extension of the dataset after each training epoch, without retraining the tokenizer (see "Dataset Skip None" in the video). ...

Some important points and discoveries:
* Google Colab hurdles (the dataset should be sampled in parts; you can't run too-long epochs at once)
* The inputs/labels output of tokenization after changing the dataset should be filtered (Dataset Skip None)
* Etc.
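The "Dataset Skip None" fix can be sketched like this. This is an illustrative sketch, not the actual notebook code; the field names `input_ids`/`labels` follow the usual Hugging Face convention and are assumptions here. After extending the dataset, any sample whose tokenization came back empty is dropped before it reaches the trainer:

```python
# Hypothetical sketch: wrap the tokenized samples and skip the None entries
class SkipNoneDataset:
    """Keeps only samples where both inputs and labels were produced."""

    def __init__(self, samples):
        self.samples = [
            s for s in samples
            if s is not None
            and s.get("input_ids") is not None
            and s.get("labels") is not None
        ]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]
```

Without such filtering, a single None sample can crash the collation mid-epoch, which is especially painful on Colab where a long epoch cannot simply be rerun.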

...

Tips for training a GPT2-Medium model from scratch in Bulgarian or any other language on Google Colaboratory, by Todor Arnaudov - Tosh/Twenkid. To be continued. Errata: ~2:05 Tesla K80, not P100.
#gpt2 #transformers #tutorial

Tuesday, September 14, 2021


Smarty - The Smartest English-Bulgarian Dictionary: Free and Powerful | Smarty - the most powerful comprehension assistant

An interesting and detailed video presentation of the intelligent dictionary and comprehension assistant "Smarty", which was the most powerful application of its kind in the world when it was created - in the whole category of smart dictionaries/language assistants - and it should still be among the most powerful, although there was huge room for development. For the English-Bulgarian pair I have encountered only one modern application with even partially comparable interactivity, and only in one of all of "Smarty"'s categories, because it works with only one word at a time, has no phrases, etc. The other online dictionaries are of the simplest kind, as from 20 years ago.

Watch the video: https://youtu.be/QPWfpYwT_Ic


The most powerful intelligent English-Bulgarian dictionary "Smarty", of which I was the architect and developer in the computational linguistics research group in Wolverhampton, England, under the supervision of Prof. Ruslan Mitkov. The dictionary is free and you can use it on any Windows PC, even one with only 256 MB RAM and .NET 2.0 (C#). What is a smart dictionary or comprehension assistant? It is an application which does not perform full-text translation, so it cannot make glaring mistakes, and at the same time it is much more powerful and "smarter" than an ordinary electronic dictionary: it "understands" the language and supports and speeds up translation and foreign-language learning through more complex automatic text processing and a more interactive, richer user interface. Recognition of phrases and of parts of speech (nouns, adjectives, verbs, adverbs), search by word endings, an ontology/thesaurus - Wordnet and Balkanet, etc. Natural language processing, language databases, NLP, Computational Linguistics. Wolverhampton Research Institute in Information and Language Processing (RIILP). The video also covers the Xerox Locolex comprehension assistant, SA Dictionary, "Ezikotvorets" - a dictionary of the "heroic dialect" and computer Bulgarian, DZBE. (...) Lemmatization, normalization, so-called stemming - finding the root or base form of a word, e.g. when searching for or pointing at a plural noun or a conjugated verb: the program knows the conjugations of irregular verbs, tenses, plural formation, parts of speech, etc. Recognition of phrases via fuzzy matching against templates, shown without having to dig through the dictionary entry. (...) Quick look-up by pointing at a word in the text. A word-ending dictionary - searching for rhymes, composing verses. See also Reverso Context - many years (about 10) after "Smarty". Bulgarian and English lexicography, linguistics. Society for the Protection of the Bulgarian Language - DZBE.
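The word-ending (rhyme) search described above can be illustrated with a tiny sketch - my own minimal reconstruction of the idea, not Smarty's actual C# code: index every word by its reversed form, so all words sharing a suffix become neighbours in sorted order:

```python
import bisect

class EndingIndex:
    """Index words by their reversed form for fast suffix (rhyme) search."""

    def __init__(self, words):
        self.reversed_words = sorted(w[::-1] for w in words)

    def ends_with(self, suffix):
        key = suffix[::-1]
        i = bisect.bisect_left(self.reversed_words, key)
        matches = []
        while i < len(self.reversed_words) and self.reversed_words[i].startswith(key):
            matches.append(self.reversed_words[i][::-1])
            i += 1
        return matches
```

For example, `EndingIndex(["running", "singing", "cat", "ring"]).ends_with("ing")` collects the three "-ing" words, which is the basic operation behind searching for rhymes.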


Saturday, September 11, 2021


Who is the fittest AI researcher or developer in the world?* [FUN FACTS]

Watch the performance: https://youtu.be/e95ZdzVzSsc


Edit: An outdated video. 1-2 months later I was in better shape: conditioning, legs and performance, at about 28 reps per minute +-, depending on the technique. Funny (but not new for my training).



https://youtu.be/9yYVibUT32E


The video is from late August 2021; these are year-old shots for the project "Bulgarian Ninja", later named "Balkan Ninja" - a project for a fighting video game (in Bulgarian), ~10.2020.


Well: not the fittest developer in any domain, and not working out too hard either.

For the fittest developer, and also man, see the Belarusian guy Max: Максим Трухоновец. He seems the best/the beast & the G.O.A.T. Max is a Guinness record holder in many street workout disciplines.



Saturday, September 4, 2021


Body Art & Muscle Art using a 3D Depth Camera made of two Logitech C270 webcams | Body art with a 3D camera



https://youtu.be/pIGcdBaoZxU

Depth-camera body art / muscle art, done with two Logitech C270 cameras and OpenCV coding in Python. If anyone cares, the code is in my GitHub repository, linked in the video description, in the recent commits. Note that the current version of the code is experimental and not polished. The output could also be interpolated etc. to fill in the black discontinuities; however, as "art", they serve as contours.
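The principle behind building a depth camera from two ordinary webcams can be sketched with a naive block-matching routine. This is a pure-NumPy illustration of the idea only - the actual repository code uses OpenCV, and real stereo from two C270s would also need calibration and rectification:

```python
import numpy as np

def disparity_row(left, right, max_disp=16, block=5):
    """Naive SAD block matching for one rectified grayscale scanline pair.

    For each pixel in `left`, find the horizontal shift (disparity) whose
    window in `right` differs the least; larger disparity = closer object.
    """
    half = block // 2
    width = left.shape[0]
    disp = np.zeros(width, np.int32)
    for x in range(half, width - half):
        patch = left[x - half:x + half + 1].astype(np.int32)
        best_cost, best_d = None, 0
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right[x - d - half:x - d + half + 1].astype(np.int32)
            cost = np.abs(patch - cand).sum()   # sum of absolute differences
            if best_cost is None or cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

OpenCV packages the same idea (per block, over the whole image, with filtering) in `cv2.StereoBM_create`; the black discontinuities mentioned above correspond to regions where no window matches reliably.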

Author, coder and model: Todor.

To be continued...



Friday, August 20, 2021


Video: Artificial General Intelligence - A Conference on Self-Improving AI in Plovdiv: Self-Improving General Intelligence SIGI 2012-1


https://youtu.be/9a0IrhsLVgY

A video story about the first-of-its-kind conference on general AI in Bulgaria, held in Plovdiv in 2012, and information about the development of some of its participants.


I had the intention and desire for it to be regular, at least twice a year, probably partially online - hence the name "2012-1" - but back then it was not continued.


Now there may be a new attempt. Expect a continuation: also follow the Facebook group on Universal Artificial Mind.



Thursday, August 12, 2021


Google announces it is working on an AGI algorithm called Pathways

Scarce info on reddit. Thanks to E. for the link.

Posted by u/QuantumThinkology - 23 hours ago
Google is developing a nimble, multi-purpose AI that can perform millions of tasks. Called Pathways, Google's solution seeks to centralize disparate AI into one powerful, all-knowing algorithm, Jeff Dean, Google's AI chief, revealed.

https://qz.com/2042493/pathways-google-is-developing-a-superintelligent-multipurpose-ai/?utm_source=reddit.com

ETH AI Center - Todor's 2003 interdisciplinary AGI research strategy materialised at Switzerland's Federal Institute of Technology


Compare with the strategic suggestions in this 18+ years old essay: https://translate.google.com/translate?sl=auto&tl=en&u=https://artificial-mind.blogspot.com/2020/07/interdisciplinary-research-institute.html

In the 10.2020 message they mention 29 professors; as of 7.2021 there are 93.
https://ai.ethz.ch/about-us/faculty.html
20.10.2020 
https://ethz.ch/en/news-and-events/eth-news/news/2020/10/pr-new-centre-for-ai-research.html
Press release: ETH Zurich is opening a new research centre for artificial intelligence.
A core team of initially 29 professorships, a new executive director and a Fellowship programme aim to promote interdisciplinary research into this key technology. (...)
Core team with around 29 professorships The new research centre is made up of core members, associate members and AI fellows. The core members already comprise 29 professors from seven departments who specialise in fundamental AI topics such as machine learning, perception and natural language understanding. These members will also supervise the new fellows of the ETH AI Center, talented young AI scientists who have been recruited from around the world and awarded a scholarship. “This will enable us to train experts who will be able to implement interdisciplinary AI projects,” says Günther. The ETH AI Center will also maintain contact with associate members from all ETH departments, other institutions and the private sector in order to implement joint research projects. (...)
"
M.V.'s post on linkedin (bold: mine):

"...The density of AI activities here cannot be found anywhere else in the world..." -- Dr. Alex Ilic Executive Director, ETH AI Center

Sounds about right :)

Alex Ilic doing amazing job in creating the ETH AI center, a world leading science and research inter-disciplinary think-tank in the general AI space, with nearly 100 world-class professors at ETH from a wide range of disciplines, including statistics, mathematics, computer science, physics and many more. Exciting times!


Comment of mine from a few days ago:

Finally... Well, Stanford and MIT embarked on something like that in 2018, but, in fact, such an interdisciplinary institute, even "more interdisciplinary" in its philosophy, was proposed *18 years ago* by me in Plovdiv, Bulgaria, where also the world's first interdisciplinary course in AGI was offered in 2010 and 2011 at Plovdiv University by the same author. Note that the MIT's course was from 2018 and was just a compilation of lectures by various guests, without a coherent structure, unlike the Bulgarian one. By the way, the 2003 proposal explicitly mentioned the term "Strategy" for AGI (Thinking Machine), as well as code synthesis (automatic programming) as an important milestone. The work was an essay for an investments proposals competition, called: "How would I invest one million Euro with the greatest benefit for the development of the country?": https://artificial-mind.blogspot.com/2020/07/interdisciplinary-research-institute.html
An archived version of the original: https://www.oocities.org/todprog/ese/proekt.htm
 
More info: in the book "History, principles and pioneers of Artificial General Intelligence, or How an 18-year-old researcher from Plovdiv was 15 years ahead of MIT and Stanford in AGI".

Kind Regards


As you know, I've been preparing, mostly in mid-2020, a big book (initially in Bulgarian) which serves as yet another invitation for partners, investors and sponsors for the creation of that projected institute/research company. The title keeps evolving; the current one changed to just mentioning "by the author of the world's first interdisciplinary course in AGI", or on occasion, self-promotingly, "by the prodigy author of ...".

However, I am still unsure how exactly I should present it; probably starting with podcasts etc. Selling it commercially in its entirety seems unlikely, and it may need heavy editing and trimming, or splitting into many volumes. It is about 1030 A4 pages at the moment, in one file... :))






Saturday, August 7, 2021


Artificial General Intelligence and Transhumanism: What is Universal Artificial Mind - AGI?

Watch a new recording (with additional notes) of one of the introductory lectures from the world's first interdisciplinary course in Artificial General Intelligence - AGI.



https://youtu.be/r582MVKj81A

Slides etc.:

http://research.twenkid.com
http://artificial-mind.blogspot.com 


More to follow!


Contents: 0:00 Introduction
0:30 Why the new terms arise: AGI, Universal Artificial Mind
1:13 On the review of narrow AI - a reference to another episode
1:40 The course's world precedence: 8 years before MIT
2:42 The first Facebook group on AGI is Bulgarian: 5 years before the others
3:16 What is Universal Artificial Mind (UAM)?
3:28 On AI, from the interview for "Obekti" magazine, issue 5, 11/2009
4:20 Briefly on UAM - slide 3
4:52 What will the thinking machine look like? - from the "Obekti" interview
5:39 Some terms, briefly on UAM - Slide No. 3
6:11 Jürgen Schmidhuber's Gödel Machine
6:21 Figures in general AI
6:21 What attempts are being made to create UAM? - from the interview
7:33 Distinctions between general and weak/narrow AI and shortcomings of the latter - Slide No. 4
9:25 What still prevents the creation of a thinking machine? - from the interview
10:24 Technological singularity, Kurzweil, miniaturization
10:42 Singularity Institute - from the interview
10:50 Technological singularity - Moore's law - Slide 7
11:16 Forecasts for the planned transistor process sizes
11:34 What actually happened in the semiconductor industry
11:41 The end of miniaturization - and what comes next?
12:05 Quantum computers
12:13 Nanotechnology and molecular technologies
12:17 Cyborgs
12:22 It's the thinking machines' turn - beyond-body existence and the many possible forms of the machine mind
12:54 Thinking machines are...
13:23 The mind is a predictor of future perceptions based on past experience - from the interview
14:04 What if the machines decide to destroy us by rising against us? The real message of the film "Terminator 2" - who are actually the "bad guys"?
15:20 Ethics for robots - Asimov's three laws of robotics
15:45 Friendly AI - friendly-minded machines
16:28 Singularity Institute - Ray Kurzweil's institute on the singularity
16:59 What is a "soul"? - references to literature
17:49 What is consciousness?
19:42 Transhumanism or H+; superhumanity, beyond-body existence, cosmism
20:25 Mind uploading - and is it possible?
21:13 Paradoxes of the human brain: universal or specialized? Computational power.
22:25 Hybrids: universal mind + universal computer
23:12 Brain-Computer Interface (BCI) - a direct brain-machine connection
23:20 Literal simulation of the brain: IBM's Almaden Institute and Blue Brain
23:49 Closing words and an invitation to the next lectures and episodes
#изкуственинтелект #agi #artificialgeneralintelligence #ИИ #УИР #футурология #трансхуманизъм #космизъм #сингуларност #сингулярност #h+


Tuesday, July 20, 2021


Intel is approaching AGI - Cognitive AI

A recent talk by Intel reveals that they have a clue about the direction:

SL 5 Gadi Singer - Cognitive AI - Architecting the Next Generation of Machine Intelligence

Some commented that this is a new hype term; however, in the mid-2000s there was "Cognitive Computing" (it is used here in the blog, too). Hawkins/Numenta were part of that branch, so it is not so new; it is related to Cognitive Architectures, etc.

Friday, July 16, 2021


ASTRID - Cognitive Architecture by Mind Construct based on bootstrapping Common Sense knowledge and knowledge graphs

ASTRID, or "Analysis of Systemic Tagging Results in Intelligent Dynamics", is a Cognitive Architecture project of the Dutch AGI company "Mind Construct". They make bold claims about their prototype. Learn more: https://mindconstruct.com/website/astrid
and the paper: https://mindconstruct.com/files/selflearningsymbolicaicritcalappraisal.pdf

I agree with the claim of the ASTRID authors that "symbolic" systems can learn. The problem with the systems which don't is that their "symbols" are too narrow and lack the appropriate "connectedness"/scanning/exploring perspective, generality and potential to map to the others; in short, they are too "flat" and simple.

P.S. However see the article: Neural Networks are Also Symbolic - Conceptual Confusions in ANN-Symbolic Terminology: https://artificial-mind.blogspot.com/2019/04/neural-networks-are-also-symbolic.html

I don't like the misnomer "symbolic". More appropriate terms for "symbolic" AI are: logic-based, logical, conceptual (as opposed to [purely] "numerical"); discrete (or more discrete and more precisely addressable than the "subsymbolic"); "structured", "knowledge-based", "taxonomy-/ontology-based", more explicit, etc. (many other attributes are possible).
 
On the other hand, some methods which are called "subsymbolic" are more "numerical", but eventually, or at certain levels, they also become "symbolic" or logical, and usually they are mapped to some "symbolic" categorization (the last layer).

Both do and should join; and the "logical" and "discrete", at the lowest level and in its representation, is also "numerical" and "analog" to some extent: it is some digital operation in a computer, and when operating on visual, auditory and other raw sensory data it is numeric.





Sunday, July 11, 2021


On Free Will as an Ill-Posed Problem | Improperly-posed problem

A comment of mine on a post in the Real AGI group, referring to the article "The clockwork universe: is free will an illusion?"

Todor: Whether or not somebody or something has "free will" depends on how exactly "free will" is defined - both "free" and "will", and also "you". I think all that discussion and the "catastrophic" consequences are largely Anglo-Saxon-centered views, or ones belonging to "control freaks", and sound like sophisms. Of course one can never be in full control of "his" choices: you are formed by an endless amount of "external" forces; there are large portions of life where "you" is unconscious; the processes of your organism in complete detail depend on everything - it would be under your "full" control only if you were God. It's hard to define what "you" is, where exactly the boundary lies, and obviously what "you" can realize and enumerate or deliberately control is almost nothing of the bitrate that describes your body and all its processes at maximum resolution. The intentionally muscle-controllable trajectories amount to mere bits, while a zillion bits describe just one CELL of the body in any encoding. The body is also a tiny bit of the whole Universe, where a principle of superposition is in effect, etc.; everything is computed by interacting with everything else.

IMO that is not supposed to cause an existential catastrophe, unless one is prone to that due to "control-freak-ism" or something. Nothing follows from the lack of "complete control"; it's not the end of the world, unless one believed he was god and now finds that he wasn't.

"They argue that our choices are determined by forces beyond our ultimate control – perhaps even predetermined all the way back to the big bang – and that therefore nobody is ever wholly responsible for their actions."

This is not a new argument.

Ultimate control - the mania of some cultures and tyrants.

However responsibility as localisation of causal forces, given a resolution and method of factorisation, is another question.

An answer from the poster of the link:
A.T.: I don't want to get into the discussion of free will. If you don't think humans have free will, then you won't care that robots with AGI almost certainly don't have free will. Humans may or may not have free will, but robots cannot, as we know the "clockwork" of their operation. Even using a rand() function that uses pseudo-random numbers won't change that fact, even if the seed is altered. That can always be determined, so that the deterministic outcome is theoretically known. As I said, I thought it might be a post to pose a question that I have not seen posed this way before. (...)
...


(The following is posted only here)

Todor: You only believe that you know the "clockwork" of their operation (of a machine, AI, etc.). In fact you may say the same for anything, at a certain resolution. If randomness is the "free will" feature or component, then electrons and all physical particles in the quantum model have "free will", which is fine; however, then everything has that "free will", and defined like that the concept is meaningless, because it applies to everything and clarifies nothing.

The "theoretically known" part is true for everything as well: if you were God, if you could read the content of everything fast enough without changing it, etc., then you would know the "clockwork" of everything. "In theory" you could be; and as for computers and robots: they are part of the whole Universe and in interaction with it. If they interact and their behavior, operation, knowledge etc. are impacted by entities or parts with "free will" within the Universe, then they would also have that property, and their actual "body" extends to everything that impacts them.

Therefore one must first define exactly what "free will" is and what it is not. Whether or not anything has or lacks anything depends on the exact definition. Also, humans or whatever can have "free will" even if it's considered "deterministic" or predictable; will and free will, as I see it, are not about being non-deterministic, and "free" is not about being random (except in these confused beliefs).

For example, see the definition of Hegel and Engels/Marx, thus of the Dialectical Materialists: they are determinists; their definition of free will is to act in accordance with necessity, i.e. to understand what to do, to be conscious of the possibilities, the desired outcome and "the best" way to achieve it, etc. A lack of free will is when the agent "doesn't understand" (but that too must be precisely defined, otherwise it's generalities and sophisms); thus, if your choice is random and you can't explain it, you are also not free, but dependent on the "will" of the randomness or "the Fortune" (instead of on "your own").

Having or not having anything doesn't imply anything on its own and has no intrinsic ethical consequences by itself; the ethical consequences are of political and ideological origin. "Lack of God" doesn't mean that "everything is permitted" (the Dostoyevsky sophism); neither, if you consider that an ant or a PC or a tree "does not have free will", does it follow from that consideration alone that you have or don't have to do anything with it.

Similarly, the fact that other humans are supposed to have "a soul" or "free will", and that the agent "believes that", couldn't stop a murderer, a psychopath or a criminal, a warrior/general or a plain soldier, or any "evil one", etc. Respectively, if you like/love animals or even "inanimate objects" - plants, weapons, cars, computers, toys, books, precious memories - you may handle them with care and love, because that's what *you* feel; it's subjective.

The randomness (supposedly disconnected from everything) invoked for "free will" is actually dependent on the Universe as a whole, which alone "predicts" the exact values - so that kind of "freedom" is the most dependent one (on the whole).

"Freedom" in general as some kind of "independence" (or dependence) is within the decided/given framework and resolution.

Wednesday, July 7, 2021


Todor's Comments on the Article "AI Is Harder Than We Think: 4 Key Fallacies in AI Research" - no, AGI is actually simpler than it seems and than you think

Comment of mine on the article "AI Is Harder Than We Think: 4 Key Fallacies in AI Research" https://singularityhub.com/2021/05/06/to-advance-ai-we-need-to-better-understand-human-intelligence-and-address-these-4-fallacies/

Posted on Real AGI FB group

The suggested fallacies are:

1. Progress in narrow intelligence is progress towards general intelligence
2. What’s easy for humans should be easy for machines
3. Human language can describe machine intelligence
4. Intelligence is all in our heads

(See also the article)

The title reminded me of a conclusion of the "AGI Digest" letters series where after giving the arguments I noted that: "AGI is way simpler than it seems". See the message from 27.4.2012 in "General algorithms or General Programs", find the link to the paper here:

https://artificial-mind.blogspot.com/2017/12/capsnet-capsules-and-CogAlg-3D-reconstruction.html   https://artificial-mind.blogspot.com/2021/01/capsnet-we-can-do-it-with-3d-point-clouds.html.html

 

Summary

⁠In brief: I claim it is the opposite: AI is easier than it seems (if one doesn't understand it and confuses oneself, it is hard, of course). Embodiment is well known and lies in the reference frames and exploration-causation, in the stability of coordinates, shapes and actions, in repetitiveness etc., not in the specific "material" substrate of the body. The "easy for humans..." point is well known and banal, and the point against machines is funny: in fact humans also can't "apply their knowledge in new settings without training" (see the challenges in the article). IMO progress in "narrow" AI actually is progress towards AGI, and it was so even in the 2000s, as current "narrow AI" ML methods are pretty general and multi-modal, and they provide instruments for processes which were attached to "AGI" at least since the early 2000s, such as general prediction and creation, synthesis. Current "narrow AI" does Analysis and Synthesis, but not generally enough, in a big enough, "integrated enough", "engine-like-running" framework which connects all the branches, modalities and knowledge together; however, the branches and "strings" are getting closer. Practically, one can use as many "narrow" NNs as needed, with whatever glue code and other logic, in one system.

Discussion

1. "Progress in narrow intelligence is progress towards general intelligence" [the fallacy, per the article: it is not]

— IMO it actually is progress, because the methods of the "narrow" become more and more general, both in what they solve and in the ambitions of the authors of these solutions. After a problem or a domain is considered "solved" to one degree or another, the intelligent beings direct themselves to another one, or expand the range, or try to generalise and combine their solutions of the several problems solved so far, etc.

One of the introductory lectures in the first university course in AGI back in April 2010, which I authored, was called "Survey of the Classical and Narrow AI: Why it is Limited and Why it Failed [to achieve AGI]?": http://research.twenkid.com/agi/2010/Narrow_AI_Review_Why_Failed_MTR.pdf

 

 

While wrapping up the faults as I saw them, one of the final slides (and others in the lecture), matching one of the main messages of the course - hierarchical prediction and generalisation - suggested that the methods of advanced "narrow AI" actually converge to the ideas and methods of AGI. Even image and video compression, for example, share the core ideas of AGI as a general sensory-motor prediction engine, so MPEG, MPEG-2, H.264 - these algorithms in fact are "AI". "Motion compensation", the most basic comparison, is related to some of the primary processing in the AGI algorithm CogAlg; all "edge detections" etc. are something where any algorithm searching for shapes would start, or would reach in one way or another. Compression is finding matches ("patterns"), which is also "optimisation" - reducing space etc.
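The motion-compensation idea can be sketched in a few lines of plain Python: a minimal block-matching search, purely illustrative (a toy, not an actual MPEG codec):

```python
# A minimal sketch of block-matching motion estimation -- the core of
# MPEG-style "motion compensation". For a block of the current frame we
# search the previous frame for the displacement that predicts it best;
# storing only that displacement (plus a small residual) is what
# compresses video. Pure-Python lists, illustrative only.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, y, x, size):
    """Extract a size x size block with top-left corner (y, x)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def best_motion_vector(prev_frame, cur_frame, by, bx, size, search=2):
    """Exhaustive search: which displacement (dy, dx) into the previous
    frame best predicts the current block at (by, bx)?"""
    h, w = len(prev_frame), len(prev_frame[0])
    cur_block = get_block(cur_frame, by, bx, size)
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= h - size and 0 <= x <= w - size:
                err = sad(get_block(prev_frame, y, x, size), cur_block)
                if err < best[2]:
                    best = (dy, dx, err)
    return best

prev = [[0, 0, 0, 0],
        [0, 9, 9, 0],
        [0, 9, 9, 0],
        [0, 0, 0, 0]]
cur  = [[0, 0, 0, 0],
        [0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 0, 0]]
# The bright 2x2 patch moved one pixel to the right: the block at (1, 2)
# in the current frame is perfectly predicted by displacement (0, -1).
print(best_motion_vector(prev, cur, 1, 2, size=2))  # (0, -1, 0)
```

The search here is brute force over a small window; real codecs prune it heavily, but the "find the match, keep the displacement" principle is the same.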

Two of the concluding slides (translation follows): 




"The winner of the DARPA Urban Challenge in 2007 uses a hierarchical control system with multi-layer planning of the motions, a behavior generator, sensory perception, modeling of the world and mechatronics".

Points of a short summary, circa early 2010:

What's wrong with NLP? (from articles from 2009) [and "weak" AI]: 

* The systems are static, require a lot of manual work and intervention and do not scale 

* Specialized "tricks" instead of universal (general-purpose) systems 

* Work at a very high symbolic level and lack grounding on primary perceptions and interactions with the environment 

* The neural networks lack a holistic architecture, do not self-organize, and are chaotic and heavy. Overall: a good "physics" is lacking, one that would allow the creation of an "engine" which could be turned on and would then start working on its own. The systems are instruments, not engines.

Note, 7.2021: The point regarding the NNs, however, can be adjusted:

Many NNs can be stacked or connected with anything else in any kind of network or more complex system - we are not limited to using one, or to avoiding glue code or anything else. The NNs and transformers are actually "general" in what they do and are respectively applied to all sensory modalities, and also multi-modally. 

Frameworks that are complete or powerful enough for a complex simulated/real-world sensory-motor, multi-modal setting are still lacking, and these algorithms may not be the fastest at finding the correlations: they involve unnecessary brute-force search which can (and should) be reduced by cleverer algorithms. However, these models do find general correlations in the input.  
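The stacking point can be sketched as a toy: several "narrow" components combined into one system by ordinary program logic. The two "models" below are hypothetical stand-in functions, not real neural networks; only the glue-code composition pattern is the point:

```python
# A toy illustration of the "glue code" point: several "narrow" models
# combined into one system by ordinary program logic. The "models" here
# are hypothetical stand-ins (plain functions), not real NNs.

def vision_model(image):
    """Stand-in for a narrow vision NN: classify a grayscale image."""
    return "hot" if max(max(row) for row in image) > 200 else "cold"

def language_model(label):
    """Stand-in for a narrow language NN: verbalize a label."""
    return f"The object looks {label}."

def agent(image):
    """Glue code: pipe one narrow component into another and add
    ordinary control logic on top of both."""
    label = vision_model(image)
    sentence = language_model(label)
    action = "withdraw hand" if label == "hot" else "touch"
    return sentence, action

print(agent([[10, 250], [5, 0]]))  # ('The object looks hot.', 'withdraw hand')
```

In a real system each function would be a trained network behind the same kind of interface; nothing in principle limits how many such components the surrounding logic can connect.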

 2. "What’s easy for humans should be easy for machines"

— Isn't that banal? It is also vague (easy/hard). Actually some of the skills of 3- or 4-year-olds are achieved by 3 or 4 years of training in supervised settings: humans do not learn "unsupervised", except for basic vision and low-level physical and sensory stuff (language learning is supervised as well; reading and writing: even more so).

Test people who didn't attend school at all; check how good they are in logic, for example, in abstract thinking, in finding the essential features of objects or concepts etc. Even people with university degrees can be bad at that, especially BAs.

There are no machine learning models with current "narrow AI" technology which are trained for that long yet - a year or years with current compute. We don't know what they could achieve, even with today's resources.

On learning and generalising: "If, for example, they touch a pot on the stove and burn a finger, they’ll understand that the burn was caused by the pot being hot, not by it being round or silver. To humans this is basic common sense, but algorithms have a hard time making causal inferences, especially without a large dataset or in a different context than the one they were trained in."  

That's right about training if you use a very dumb RL algorithm (like the ones which played for 99999 hours in order to learn the basic games on Atari 2600); however, overall, the claimed "hardness" of a machine learning this is deeply wrong and unaware of what the actual solution could simply be:

"An algorithm" would have sensors for temperature which would detect "pain", caused by excessive heat/temperature, which happened at the moment when the coordinates of the sensor (the finger) matched coordinates within the plate of the stove. Also, it could have infrared sensors, or could detect the increase of the temperature before touching and notice that there is a gradient in the measurement. The images of the stove when the finger was away didn't cause pain; only the touch did. This is not hard "for an algorithm"; it's trivial.

4. Intelligence is all in our heads

— Hasn't that been clear for at least 20 years? (for me it was always clear) However, take into account that the embodiment can be "simulated", "virtual". The key in embodiment is the sensory matrices, the coordinates ("frames of reference" in Hawkins' terms), and the capability to systematically explore: to cause and perceive/study the world; the specific expressions of the sensory matrices and coordinates can vary.

3. Human language can describe machine intelligence
"Even “learning” is a misnomer, Mitchell says, because if a machine truly “learned” a new skill, it would be able to apply that skill in different settings" 

+ 1. "a non-language-related skill with no training would signal general intelligence"

— I challenge these "intellectuals": can you make a proper one-handed backhand with a tennis racket with "no training"? (Also, how long will you train, especially before delivering a proper over-the-head serve with good speed, or a backhand while you are facing back to the net, or a tweener - a between-the-legs shot, especially while running back to the baseline?)

You're not supposed to need explicit training, right? You did move your hands, arms, wrists, elbows, legs, feet... You've watched tennis at least once in the TV sports news, therefore you should be able to just go and play against Federer and be on par with him, right? If you can't do that even against a 10-year-old player, that means "you can't apply your knowledge in new settings"...

Can you even juggle 3 balls, by "applying your knowledge of physics from school and your sense of rhythm from listening to music and dance" - even the simplest trick?

Can you play the simplest songs on a piano, by applying your understanding of space and the motion of the hand, finding the correlations between the pressing of the keys and the sound of each of them etc. - can you do it, especially if you lack musical talent?

Well, "therefore you lack GI", given your own definitions... I'm sorry about that... 

(In fact the above is true for many humans; humans really lack "general intelligence" by some of the high-bar definitions which a machine is expected to meet before being "recognized".)

...

The slide, translated from Bulgarian (4.2010):

* What's wrong with natural language processing [and weak AI]?

● The systems are static, require a lot of manual intervention, and do not evolve or scale.
● Specialized "tricks" instead of universal systems.
● They work at a high symbolic level and have no grounding in primary perceptions and interactions with the environment.
● The neural networks have no holistic architecture, do not self-organize, and are chaotic and heavy. A good "physics" is lacking, one that would allow the creation of an "engine" which could be turned on and would start working on its own. Instruments, not engines.


Saturday, June 26, 2021


Professor Sergey Savelyev's Misconceptions About Artificial Intelligence: No, Physical Morphogenesis Is Not Needed for the Process of Thinking in a Machine

If you have landed here, you probably know of Professor Sergey Savelyev, a neurobiologist and evolutionist, author of books on the brain and human evolution: "The Poverty of the Brain", "The Morphology of Consciousness", "Cerebral Sorting" and others. He is a popular figure in radio broadcasts and on YouTube.

Note: This article was originally written in my somewhat "experimental" Russian*, probably with occasional "simplified" spelling - excuse the mistakes. I didn't want to write it in English or in Bulgarian, because I believe the professor is little known outside the Russian-speaking countries.

A comment on this and other broadcasts:
"Computing the Structure of the Human Brain", S. Savelyev, 5.11.2019 

https://youtu.be/53QqkRt2crI?t=1011   (~16-17 мин)


Todor Arnaudov:

Savelyev is cool, but he sometimes takes a very literal and naive view of concepts, and in order to always be "right" he mixes different layers of abstraction by sophistic methods, excuse me.

Neurons in the brain don't "understand" anything either - just like electronic machines, or programs, or anything else; at that level of abstraction they are only "cells", or only "sets of molecules", "atoms and electrons" - "where is the thought here?".

If we want, we can also say: the "brain" as a physical body "understands" nothing - it is only "matter". It merely "lives", "metabolizes", "exists", "ages" etc. What understands is the human, the mind, the consciousness and so on, and all of these are abstractions which we have invented: "this means that, therefore...".

Besides, it is debatable who actually "understands" what, and how exactly "understand" is defined as a precise concept. For Savelyev it is apparently "clear" whenever it concerns a human? Yet he often explains how dumb the majority of people are - "food, dominance, ...".

"Connection" is an abstract concept; no special "physical" connections need to be created. We also know that neurons are not connected physically (nor are molecules - there is always empty space). "Connection" is an abstraction: when cells activate "simultaneously", or in a chain within a certain time; when cells are located at a "minimal distance" from one another, etc. - different kinds of "correlations".

There are probabilistic algorithms, self-modifying algorithms, algorithms which change with the data (all learning machines) etc. - the data defines and changes the "connections" (correlations); there is no need to program all the connections literally, nor to "know" them.

When a system is sufficiently universal and flexible, when it covers all possible sensory and motor modalities and their content - all possible images, sounds, motions, all possible concepts etc. (a system for representing everything thinkable) - then it is not "narrow" and not "fixed": it changes with its environment and is as complex as its environment.

~ + 27:33 Problems of modeling the brain



Their definitions of artificial systems are... somewhat artificial themselves, made to justify his claims.

It is true that the memory of a fixed neural network has a constant size (tf_model.h5 ... 1.33 GB ..., 347M parameters, 32-bit FP), and such networks make no claim of modeling the brain - they only model and optimize functions. Nothing prevents a developer from connecting as many of them as desired into an arbitrarily bigger system, combined with other methods; besides, that size can be very big - trillions of parameters today: GPT-3 and others.

The capacity of the brain is also limited: there is no boundless number of neurons and connections. The computer has a much larger, potentially unlimited and faster memory - the Internet as a whole. Every bit can potentially change within a second, a milli-, micro-, nanosecond. Not so in the brain: how much time would be lost learning a few words of a foreign language? How much effort is needed to build new connections in the brain?

Energy consumption in machines is not necessarily "globally capped" either; it also adapts in modern systems. The decisions even of "dumb" neural networks are not linear: the professor should look up the "nonlinear element", which is obligatory for training a network:
"From the standpoint of mathematics, training neural networks is a multi-parameter problem of nonlinear optimization." (Wikipedia)

Combinatorics is what a human creator or researcher does as well, yet people often cannot understand that - usually those who do not create and do not know the creative process: checking what works and what doesn't; testing different variants; searching the space of possible developments; tracing the history etc. - different kinds of exhaustive search.

As for the "duality of consciousness" - yes, the presence of an "emotional", simpler, primary system of reward and control (~ reinforcement learning, basic needs etc.) is obvious both in behavior and introspectively; it is not the professor's discovery, excuse me.

One does not need to know anything about the brain and the limbic system (anatomically, physiologically) in order to know of the existence of primary drives and needs etc. They are needed for the development of an intelligent being "from scratch", through interaction with its environment, and for its general direction - the way a human (a child) develops; but this is not Savelyev's discovery either, excuse me: people knew it and wrote about it, including in the artificial intelligence literature. My own works discussed it when I was a teenager: it was obvious.


Author: Todor Arnaudov, author of the world's first university course in Artificial General Intelligence ("Universal Artificial Mind"), 2010.

See (in Bulgarian) https://artificial-mind.blogspot.com/2019/08/AGI-at-Plovdiv-2010-vs-MIT-2018.html etc. in this blog.


* E.g. "и" (possibly "ъй") in place of "ы", etc.


Tuesday, May 4, 2021


Tenstorrent and Cerebras - Novel Advanced Computer Architectures for Deep Learning/Neural Network Training

Interesting systems to check:

The Tenstorrent startup: https://www.tenstorrent.com/technology/

Their architecture offers more independent parallelism of the cores than the current GPUs/TPUs, or as they express it: "Fully programmable architecture that supports fine-grain conditional execution, dynamic sparsity handling, and an unprecedented ability to scale via tight integration of computation and networking"

Cerebras Systems: Building the World’s First Wafer-Scale Processor

https://youtu.be/LiJaHflemKU

An amazing hugely multi-core processor with 1.2 trillion/2.6 trillion transistors, with defect redundancy (the system can tolerate some defective cores, which are expected) and inter-chip buses, covering an entire silicon wafer. The first version, CS-1, claims 400,000 "AI-optimized" cores; the second, CS-2: 850,000 cores, with 18 GB and 40 GB respectively of on-chip memory, accessible in one clock cycle. The die size is 46,225 sq. mm, compared to 815 sq. mm and 21.1 billion transistors for the biggest-die GPU at the time (2020).

How The World's Largest AI/ML Training System Was Built (Cerebras)
NASJAQ, 18.07.2020

https://youtu.be/Fcob512SJz0


https://cerebras.net/product/


Monday, April 19, 2021


Toshko 2.075 - a free speech synthesizer, a talking program | Toshko 2.075 Bulgarian TTS engine

A fix for a small bug related to operation when there is no Internet connection.
See and download from: https://github.com/Twenkid/Toshko_2


Saturday, April 17, 2021


Is the new theory of the "Thousand Brains" by Jeff Hawkins a new one? The book "How an 18-year Old Prodigy Researcher Preceded MIT and Stanford by 15 years in AGI"

The second book by Hawkins was published in March: A Thousand Brains: A New Theory of Intelligence Hardcover – March 2, 2021 by Jeff Hawkins (Author), Richard Dawkins (Foreword) https://www.amazon.com/Thousand-Brains-New-Theory-Intelligence/dp/1541675819

It might really be new regarding the neocortex, but I didn't find it so new regarding AGI and epistemology. In brief, all these directions - working with universal/general "reference frames", 3D space, search for coordinates and their relations - follow directly and obviously from the sensorimotor school of thought, and they have been expressed in many places: for example, I'd say even in "Five basic principles of developmental robotics" by Stoychev (see the discussion in the blog), in my theory and publications which are older than that, and in Hawkins' own earlier book "On Intelligence" (2004). 

 For a more extensive review of why these ideas do not seem so new see my upcoming book "How an 18-year old prodigy researcher preceded MIT and Stanford by 15 years in AGI", originally in Bulgarian: "Как 18-годишен информатик от Пловдив изпревари MIT и Станфорд с 15 години във всеобщия изкуствен интелект, или Универсален изкуствен разум: Artificial General Intelligence".

It is a call for collaborators, cofounders, researchers, supporters and sponsors for the creation of an interdisciplinary institute, about which I wrote in my teenage essay 18 years ago, in 2003: "How would I invest one million Euro to achieve the best progress of my country" (see the links). MIT and Stanford created interdisciplinary institutes with somewhat similar headlines in 2018 (MIT claiming one BILLION $ in investments); the famous Bulgarian AI researcher Martin Vechev, leader of the startup "DeepCode" for code synthesis, together with the Bulgarian Academy of Sciences, is also about to create an academic AI institute and infrastructure in Bulgaria.

My book is a historical documentary collection of various works, publications and comments, full or excerpts, mainly mine and references to other contemporary researchers' and other articles about the topics, announcements about events etc., presented in chronological order, with added additional notes. 

The work has been in a somewhat "complete" state since August 2020, but I've been hesitating how exactly to present it. It seems unlikely to get it printed commercially: it's huge - in the current formatting, 1028 A4 pages in the full version. It could be condensed a bit due to some deliberate repetitions (texts given both in a shorter and in a longer chronology). However, I don't see interest for now.

The work on the book was on hold until last month, when I added a lot of material and edits, including notes on that new book by Hawkins and on other discoveries of "new" ideas in AGI which happen to repeat ones expressed in my early-2000s works - such as one 2021 Google Brain researcher's generalisation about what general intelligence is about (see in the book). I may post an article in the blog as well.



First title: "Is the new theory of the "Thousand Brains" by Jeff Hawkins a new one? - and an announcement about the yet unpublished book How an 18-year Old Prodigy Researcher Preceded MIT and Stanford by 15 years in AGI" - Edit: shortened.

Friday, February 26, 2021


Consciousness Prior and Causality - matches of Bengio's 2017-2018 example and ideas with Todor's "Theory of Universe and Mind" from 2003-2004

See especially my example with the thrown coin, whose trajectory and future we predict "with absolute precision" at the linguistic level - verb-noun, sentences - and humans believe that this proves their free will, though it is true only due to the very low bandwidth/bit rate; compare it to Bengio's example, where he throws a little piece of paper: "if I try to predict... it is very hard... but I could predict it is going to be on the floor". Etc. See point 6 in the 2004 work below, also section 14 in the 2003 Part 3.
My examples and definitions are broader and more philosophical; they are part of the emphasis in my Theory that the core of intelligence is prediction of the future (will, causality) and compression; on "compositionality" (see Bengio's talk as well); on the ranges of the Resolution of Causation and the Resolution of Perception in which a mind operates; and on there being degrees - in my terminology, virtual universes at different levels, "levels of abstraction/generalisation". The examples in my discussion are also about the notion of "free will", using the information bandwidth to show that the "free" component of our "conscious" will (causation power) is ridiculously low - just a few bits per second.

What B. calls a "consciousness prior" is a higher-level top-down direction/drive, reducing the search space (yes, "attention", as B. mentions): a high degree of compression and operation at low resolution, searching for matches and reducing the resolution of perception and causation to as low as allows a complete match/prediction at the maximum target resolution for that virtual universe, etc. These are the cognitive aspects; they do not require the transcendental ones of consciousness - qualia, subjective feeling etc.
For original sources in Bulgarian and other translations see below the comparison table.

From Deep Learning of Disentangled Representations to Higher-level Cognition

9.02.2018
Refers to "The Consciousness Prior", Bengio 2017: https://arxiv.org/abs/1709.08568

 https://youtu.be/Yr1mOzC93xs?t=2486


Bengio's presentation slide @ 38:29:

* Conscious thoughts are very low dimensional objects, compared to the full state of the (unconscious) brain.
* Yet they have unexpected predictive value or usefulness

- strong constraint or prior on the underlying representations

  * Thought: composition of few selected factors/concepts (key/value) at the highest level of abstraction  of our brain
  * Richer than but closely associated with short verbal expression such as a sentence or phrase, a rule or fact (link to classical symbolic AI & knowledge representation)





Yoshua Bengio, 53 years old, 1.2018

Turing Award, MILA leader, "AI godfather"
Talk at Microsoft Research :
"From Deep Learning of Disentangled Representations to Higher-level Cognition"


Todor Arnaudov, 18-19 years old, 2002-2004


"The Sacred Computer" AGI e-zine* (the original is in Bulgarian)


Regarding unsupervised ML models for speech not recognizing phonemes properly because, B. argues, their information content is very low compared to that of the raw audio:


37:06: "How is it that these models haven't been able to discover them and then see that there's like this really powerful part of the signal, which is explained by the dependencies between phonemes?

And the reason is, I think, simply that that part of the signal occupies very few bits in the total number of bits that is in the signal, right? So, the raw signal is 16 thousand real numbers per second. How many phonemes per second do you get? Well, I don't know, 10, right? or maybe 16.

So there's a factor of a thousand in terms of how many bits of information are carried by the word level, phoneme level information versus the acoustic level information."
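The arithmetic behind this "factor of a thousand" can be checked back-of-envelope. The concrete figures (16-bit samples, a ~40-phoneme inventory) are my assumed round numbers, not Bengio's:

```python
import math

# Rough assumptions: 16 kHz audio at 16 bits per sample, versus ~10
# phonemes per second drawn from an inventory of ~40 phonemes.
# All four figures are illustrative approximations.

sample_rate = 16_000                  # samples per second
bits_per_sample = 16
acoustic_bits_per_sec = sample_rate * bits_per_sample          # 256000

phonemes_per_sec = 10
phoneme_inventory = 40
phoneme_bits_per_sec = phonemes_per_sec * math.log2(phoneme_inventory)

ratio = acoustic_bits_per_sec / phoneme_bits_per_sec
print(f"~{ratio:.0f}x more bits at the acoustic level")  # on the order of thousands
```

With these assumptions the ratio comes out even larger than a thousand; the qualitative point - that the phoneme-level information occupies a vanishing fraction of the raw-signal bits - holds either way.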

On "Consciousness Priors"

"And the prior is that - the assumption about the world is that - there are many important things that can be said about the world which can be expressed in one sentence, which can be expressed by a low-dimensional statement, which refer to just a few variables. And often they are discrete, those we express in language; sometimes they're not - we can draw, or somehow use that to plan. But they are very, very low dimensional. And it's not obvious a priori that things about the world could be said that are true and low dimensional.
41:29

"If I try to predict the future here, there are many aspects of it that are hard for me to predict.

Like, where is it going to land exactly?

It's very, very hard to predict, right? It's a game. But, I could predict that it's going to be on the floor. It's one bit of information. And I can predict that with very, very high certainty and a lot of what we talk about are these kinds of statements. Like, if I drop the object, it's going to end on the floor, in this context. So, this is the assumption that we're trying to encapsulate in machine learning terms with this consciousness prior idea.


42:10"
"Theory of Universe and Mind - Part 4", 2004

(...)

6. The Resolution of Causality-Control (RC) describes the capability of a Causality-Control unit to output data from its memory (its universe) to the memory of the mother universe in such a way that the changes in the mother universe are as close as possible to the smallest possible changes in the mother universe, and closest to the expected ones.

The Resolution of perception (RP) shows what features from the mother universe are perceived (distinguished) by the evaluating unit, which is a subordinated universe.

When a person decides to throw a coin and executes that action, she thinks/assumes that (...) she has done "what she wanted to do".

The Resolution of Causality-Control and of the Perception in that case is described by verbs, nouns, adjectives, prepositions and other parts of speech of the language of the beings possessing general intelligence. That language, also called "natural language", describes the way the human mind perceives the world, and it is limited by the narrow information bandwidth accessible to humans.

The linguistic description gives the human mind a sense of freedom to do "whatever she wants", namely due to its low resolution and the low criterion for the precision of the execution of "what she wants".

For example, the Resolution of Causality-Control and Perception in the example above is verb-noun. (...)

However the resolution in the mother universe, where the human mind is defined, is way higher, because the Universe is not built of coins and humans, whose interaction could be described with an insignificantly low number of linguistic elements such as:

I throw a coin on the floor.
I throw a coin on the table.
I throw a coin behind the sofa.
I throw a coin through the window.
I throw a coin in the toilet.
I throw a coin in the corridor.


27.8.2002, a letter in "Theory of Universe and Mind - Part 2": "A human can output merely several tens of bits per second [consciously]..."



Theory of Universe and Mind, Part 3, published 8.2003, "The Sacred Computer" #25: (...)

[See also the definitions of a "Control-Causality unit" etc. - the quote would become too big.]
14. "We", whatever we are, control a very little part of ourselves. Say, we order our hand to throw a coin, and that consists of a sequence of simple instructions sent to the muscles of our hand; the muscles consist of a huge number of particles, of which the control unit (the human; the matter we are conscious of) has no knowledge, and "we" ["the consciousness"], as that control unit, cannot apply our power upon them individually; the resolution of our power is limited.

The muscles flex, and that way they pull the bones and the whole fingers. Therefore, in fact the parts of the body which we [believe] we "control" [cause wanted changes to, with a predicted and wanted target state and precision] do a big portion of their job on their own, i.e. "they know their job", and we - our consciousness - have only a superficial image/representation (представа) of that "job".

For example, the instruction with which we order the finger to flex is described by, say, a few tens of bits. The evaluation, of course, depends on the way we measure it.

We could condense the description down to, e.g.: which hand (1 bit) + which finger (2.3 bits) + to flex or to extend (1 bit) + force (I don't know how many degrees) + time of application of the force.

The conscious information could be counted in bits on the fingers of one's hands, while in order to flex the finger in the Universe, in the Main Memory, the whole information which describes the finger and the devices connected to it in the hand, whose motion pulls the finger, should be translocated - muscles, tendons; the blood vessels which feed it etc. - all that should go particle by particle... I have no idea how many bits it would take to define, atom by atom, just a finger...
(...)

51. The more sophisticated a device (entity) becomes, the more its capabilities for prediction of the future grow, and the more it evades unpredictable and random states, i.e. states characterized by a lack of desired information. The more sophisticated an entity becomes, the more it employs the past, the memories, in order to build its behavior in the future, because it discovers that the past has patterns, therefore the future is predictable.



Compare "closely associated with short verbal expression such as a sentence or phrase" [slide] with "For example, the Resolution of Causality-Control and Perception in the example above is verb-noun ..." etc.


Regarding the throwing etc.: the full prediction is not only where it will land exactly, but the entire trajectory of the object; in both cases there are resolutions, at best all steps at the highest possible resolution of causation and perception for the "mother Universe", i.e. that would involve the Planck-constant scale etc.


http://eim.twenkid.com/old/eim18/predopredelenost2.htm
http://eim.twenkid.com/old/3/25/pred-3.htm
http://eim.twenkid.com/old/4/29/pred4.htm
http://research.twenkid.com/agi/2010/en/
http://research.twenkid.com/agi/2010/en/Todor_Arnaudov_Theory_of_Universe_and_Mind_3.pdf
http://research.twenkid.com/agi/2010/en/Todor_Arnaudov_Theory_of_Universe_and_Mind_4.pdf
http://research.twenkid.com/agi/2010/en/Todor_Arnaudov_Theory_of_Hierarchical_Universal_Simulators_of_universes_Eng_MTR_3.pdf


The "Sacred Computer" works also have copies in the GeoCities mirror oocities.org etc., and possibly in archive.org as well.

Friday, January 15, 2021


Against the Denigration of Artificial Intelligence and Superhumanity (Transhumanism) by Social Scientists and Other Propagandists

A comment by Todor in response to an interview with Prof. Ivo Hristov, who has used similar rhetoric on other occasions as well, like Olga Chetverikova, Ivan Spirdonov and others.


These are the Anglo-Saxon and Western-European interpretations of superhumanity (over-corporeality, cosmism), or "transhumanism". The Bulgarian superhumanity and the Universal Artificial Mind have no such spirit and no such goals.

A common logical error that many people make in connection with "human nature", "humanity" etc. is that these are misleading and foggy concepts. Last year I published a book that examines the phenomenon in detail, in part the incorrect formation of the concepts, including the use of "transhumanism" and "artificial intelligence" to name things that are actually something else and are entirely human:

THE MISLEADING CONCEPTS AND
AN ANALYSIS OF THEIR TRUE MEANING:

Transhumanism, Civilization, Democracy, Humane, Humanism, Dehumanization, Social Distance, Political Correctness, Fake News, Eurointegration, Globalization, European Values, Liberalism and others

Issue 33 of the "Sacred Computer" magazine: http://eim.twenkid.com

"Human nature" (without qualifications) and "humane" cover both the Nazi atrocities and attempts to create a "higher race" (after all, "for the good of humanity" [according to them]), and the massacres in every war (committed also by ordinary people [i.e. they are, or can be, a mass phenomenon at the individual level too]), as well as the care for defenseless animals and the love of nature [of children, animals, the defenseless] etc.

I.e. it is a meaningless and misleading concept. Without a precise definition at every use, like "democracy" and "civilization", [it] serves a propaganda purpose or is repeated from somewhere. Are the people of the third world happy to be "part of humanity" (do they understand that concept at all, and how)? There have always been "different humanities"; even in its biological form there are huge differences between individuals, in both mental and physical abilities: from the pygmies in Africa, from mediocre or mentally retarded people or people with disabilities, to the super-gifted; from the sickly to the centenarians, etc.

The neurobiologist Sergey Savelyev, for example, often explains that the differences in the brain between individuals (in the distribution of the various functional zones) are greater than the differences between species in the animal world; some zones differ in size up to 40 times, others several times, and some are present only in geniuses.


The word "humanity" is used precisely by those evil transhumanists from the Anglo-Saxon world who, with every kind of technology, have had one and the same approach and goal: conquest and domination over the rest (an ape drive). Artificial intelligence has nothing to do with that, and in the right hands it can stop them.


Science and medicine have always tried to "change man"; medicine by definition aims to prolong human life, and in it, as in all his deeds, "man plays God". This is no discovery or new "devilish" desire of "transhumanism": man has always wanted to live forever as the same person. That is also the goal of Christianity, it is called "Salvation" and "Eternal life", and it is no different from the idea of Superhumanity [over-corporeality].


By "super" in Eastern Orthodox thinking is understood: "To whom much is given, of him much will be required". Superhumanity does not necessarily mean super-enslavement; it depends in whose hands the "super" is. If it is in the hands of those who have always acted that way, it will be enslavement. If it is in other hands, it will be more love.

In every creative deed of his, not only the grandest ones, "man plays God". The different organization of social life is not "artificial intelligence"; it is precisely an organization of that life, carried out under the leadership and by the hands of humans (a kind of primate) and for the satisfaction of their "human" (ape) needs and goals, one of which is their ape will to power over the rest (look at the chimpanzees), one of their primal drives.


All atrocities [so far] have been committed by human beings (Homo sapiens); all psychopathic laws and massacres have been carried out by human beings, for their "lofty" goals.


[Does it not follow from this that] People, their elite, society, states, public organizations such as churches as a whole, are "monsters" and beasts by some measures according to their own definitions of these, [since] they violate in the most cynical way their own "moral values" and kill, lie, steal, etc.

So if part of this "human" (ape) elite wants, out of psychopathic motives, to exterminate part of the others, there would be nothing new in that, and [their decision] would not be related to the technologies but precisely to "human nature"... [since they are still humans.]

(Early full title: Against the denigration of artificial intelligence and superhumanity (transhumanism) by social scientists and other propagandists, for relaying an Anglo-Saxon worldview and values through the use of confused misleading concepts, and because of certain applications of the technologies)