Hello! I'm Todor, a.k.a. Tosh and Twenkid - a Universal man and author of the world's first interdisciplinary university course in Artificial General Intelligence, a whopping 8 years before the famous MIT course of the now-celebrity podcaster Lex Fridman. I am a researcher, developer and entrepreneur in AGI, where I was a child prodigy and visionary as early as my teenage years in the early 2000s - beyond the expected computer science, and also linguistics/writing - in the fields of Transhumanism, Digital Physics (the Universe as a Computer / a Discrete Universe), Philosophy of AI, Mind and Universe as simulators of virtual universes, and the tight connection and mapping between the principles underlying the Universe as a whole and its systems, and Mind/General Intelligence. My works, the Theory of Universe and Mind, were published in one of the first e-zines on these topics, "The Sacred Computer", which I created myself. I keep encountering my discoveries, generalisations, ideas and directions repeated and re-expressed as fresh or interesting by many top-level researchers, up to now, 2023 (one of many is the Free Energy Principle/Active Inference line of research). I started to discover the matches in 2007, with Jeff Hawkins's "On Intelligence"; many others came later. See and read more in About and in the links, where you can find the original writings as well. I've been working on a huge collection book, currently titled "Artificial General Intelligence and Transhumanism: History, Theory and Pioneers", now above 1600 pages and growing, which explains, demonstrates and points out the matches with the academic and other research published after those early publications - research which indirectly serves as a delayed "peer review" - or as a call for you to join me in my quest for AGI. Check also my project, the AGI infrastructure called "Vsy" or "Jack of All Trades", and the other projects on Github.
Welcome to my "Universal Universe": "Artificial Mind" or "Sacred Computer". I am always looking for partners and collaborators, interesting projects and new fields and things to study, explore and create. Join me or invite me!

Monday, November 27, 2023


On Understanding and Calculation, Quantitative and Qualitative Reasoning: Where Calculation Begins, Comprehension Ceases - Part II

 Comment on:

"Dr. Jeffrey Funk • 2nd•  Technology Consultant

Nobel Laureate Richard Feynman understood the difference between knowing and understanding, which is explained in this five-minute video. Knowing is being able to do calculations that agree with experiments. Understanding is being able to explain the underlying phenomena.

 

As Feynman describes, the Mayans knew positions of the moon and could predict eclipses, but they didn’t understand the reasons for their correct calculations. That understanding did not come until Newton and others explained gravity and its impact on rotating bodies. And the lack of understanding allowed the Mayans to falsely attribute things to gods, and not to physical laws.

 

Many data scientists and other proponents of AI point to knowing, being able to do calculations. Their #algorithms can predict the prices of homes and infections in hospitals, and match applicants with jobs or a camera’s input with a standard image. But they do not understand the why of their calculations or predictions. And when the calculations and predictions are wrong, they don’t know why. And unless they also have an understanding, which some call explainable #AI, the systems may always perform badly. Achieving high precision predictions or matching two things will always require understanding, not just knowing."

https://www.linkedin.com/posts/dr-jeffrey-funk-a979435_algorithms-ai-technology-activity-7133763690717224960-fMbK?utm_source=share&utm_medium=member_desktop

....

 

Todor Arnaudov - Tosh/Twenkid:

This is very similar to Arthur Schopenhauer in the early 19th century: "When calculation begins, comprehension ceases.", from OTFFR ("On the Fourfold Root of the Principle of Sufficient Reason"), his PhD thesis from more than 200 years ago. A more modern term is "grounding" or "sensory-motor grounding".


" To calculate therefore, is not to understand, and,

in itself, calculation conveys no comprehension of things.

Calculation deals exclusively with abstract conceptions of

magnitudes, whose mutual relations it determines. By it

we never attain the slightest comprehension of a physical

process, for this requires intuitive comprehension of

space-relations, by means of which causes take effect."



See more in Part I from 2014: "Where calculations begin, comprehension ceases" - on understanding and superintelligent AGI, Todor's comment in "Strong Artificial Intelligence" at Google+: https://artificial-mind.blogspot.com/2014/08/where-calculations-begin-comprehension.html


Chomsky's views on DL and LLMs are similar as well; see the MLST Youtube channel's episode "Ghost in the Machine" (2023) etc., or even the classic 2011 debate with Peter Norvig, "Norvig vs Chomsky", "The Norvig - Chomsky debate": https://norvig.com/chomsky.html


However "explanation" and "understanding" are not explained in Feynman's wordplay either. What is to explain and explain to whom? If the receptor is not "appropriate" to your explanations, you will always fail. (See example videos on Youtube of "Explaining a concept at different levels: a child, a teenager, a college student, a graduate...", such as: Theoretical Physicist Brian Greene Explains Time in 5 Levels of Difficulty | WIRED: https://www.youtube.com/watch?v=TAhbFRMURtg ... The calculation models also "explain" and they eventually map to something different than the plain numbers or "abstract quantities" (unless the evaluator only reads the numbers), but they are too "shallow" (for a decided measurement) or the "reader" doesn't... understand them, she can't make some "other" predictions or make some new conclusions - all that she has expected that she should be able to do "if she did understood", "if it was well explained" - and she can't connect the evidence on her own - as she expected she should be able to do. Even if you "just" calculate the trajectories of the planets, without knowing "the reason", it eventually maps to objects known as "planets" and there are some additional conclusions such as predicting that particular stellar bodies will not collide etc. DL models may "say": "this is so, because: see this chain of vectors, 0.34, 0.23, 0.55, 0.343 ... at level 0.34, 0.344, 0.33 ... it is bigger than ... etc. deconvolve here, convolve here with this data point (they lose the records/path of thought of learning, IMO a better cognitive system remembers more details about the path and the mapping and should/could be able to find it).


The distinction is not so sharp, though, because the "calculation" (quantitative-reasoning) lovers may object that the other side's "explanations", if they lack a calculation/math part, are "just qualitative", and qualitative-only theories, positions and statements are supposed to have lower "resolution"/explicitness and usually are not, or may not be, practical, or can't be applied in specific scientific methods. They are just "methodological", but not "methods", as explained in a Bulgarian philosophical article by Sava Petrov*.


There should be *both* qualitative and quantitative models and representations, and reasoning is also a kind of "calculation", but the mappings to the "real" data are supposed to be preserved somewhere in order to connect to the "real" world (and to "explain" to the "user" in sensory-input-compatible modalities).


As the aforementioned physicist Richard Feynman also explains in a BBC documentary, when he digs into the answer to "Why is ice slippery?", the "Why" questions can go deeper and deeper. There could be different depths and precisions of "understanding" (or of mapping to something else). Also, causality in general can go wider and wider in time, space and resolution, up to the whole Universe. There is a limit, e.g. in Karl Friston's FEP/AIF it is the "Markov blankets", which are taken as "impenetrable". There is a cut, a resolution of causality-control where the limit is set, and a model/expectations/causes which are accepted as "good enough"; somebody, the observer-evaluator, decides and accepts that "this is explained" or "understood", while when there is less depth, fewer steps, a smaller range etc., it is considered "not explained". See "Theory of Universe and Mind" by Todor Arnaudov, early works 2001-2004. One particular work with a lot of topics: "Analysis of the meaning of a sentence, based on the knowledge base of an operational thinking machine. Reflections about the meaning and artificial intelligence (...)", 2004, translated into English in 2010 in 4 parts:
https://artificial-mind.blogspot.com/2010/01/semantic-analysis-of-sentence.html
https://artificial-mind.blogspot.com/2010/02/causes-and-reasons-for-any-particular.html
https://artificial-mind.blogspot.com/2010/02/motivation-is-dependent-on-local-and.html
https://artificial-mind.blogspot.com/2010/02/intelligence-search-for-biggest.html
TOUM continues with "Universe and Mind 6" (not published yet).

Compare "Analysis.." and TOUM with the later published theories and ideas, practical work as well (in RL, lately Active Inference, see the post about "Genius" platform from Verses AI), which confirm the claims and reasoning of the former, see:

https://github.com/Twenkid/Theory-of-Universe-and-Mind/



Saturday, November 25, 2023


Genius by Verses AI - Intelligence as a Service for Multiagent Systems with Free Energy Principle/Active inference framework and the Bulgarian Blueprint announcement

Verses AI - the company that develops multiagent systems, inspired by and implementing the Free Energy Principle/Active Inference by Karl Friston, releases a platform called "Genius".


https://www.youtube.com/watch?v=mIUcU5c-vEs

FEP/AIF and the related work by colleagues and students of K.F. actually repeat, expand in volume and mathematical notation (and map to actual physics, neuroscience and other technical evidence from natural science, medical/neuroscience and physics data), prove and continue the work on the main principles and ideas of my own Theory of Universe and Mind, whose classical pieces were published between late 2001 and early 2004. You can see that more clearly in the FEP/AIF literature, in K.F.'s podcast appearances (e.g. MLST) and in the Active Inference Institute channel. The matches spring from the core of the principles: minimization of the prediction error at all levels of scale, and the ubiquitous multiscale, multilevel nested simulation. In FEP/AIF the units are "Markov blankets"; in TOUM they are called causality-control units, virtual universes, subuniverses, submachines, which run in a hierarchical universal simulator of virtual universes - which is what both Universe and Mind are (or, more strictly, could be represented and encoded as). For specific comparisons and matches to other works as well, see some already published notes: https://github.com/Twenkid/Theory-of-Universe-and-Mind/
An article from when I first heard about FEP, in late 2018: "Ultimate AI, Free Energy Principle and Predictive Coding vs Todor and CogAlg - Discussion in Montreal.AI forum and Artificial Mind": https://artificial-mind.blogspot.com/2018/12/ultimate-ai-free-energy-principle-and.html
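To make the core principle concrete, here is a tiny toy illustration of my own (it is not code from the FEP papers or from the TOUM texts): a single unit holds a belief about a hidden cause and updates it only to reduce its prediction error; stacking such units over levels and scales gives the hierarchical, nested picture discussed above.

```python
# Toy sketch of prediction-error minimization (illustrative, not from FEP/TOUM sources).
import numpy as np

rng = np.random.default_rng(0)
true_cause = 3.0                                   # hidden state of the "world"
observations = true_cause + 0.1 * rng.standard_normal(200)

mu = 0.0                                           # the unit's internal estimate (belief)
learning_rate = 0.05
for y in observations:
    error = y - mu                                 # prediction error at the sensory level
    mu += learning_rate * error                    # update the belief to reduce the error

print(f"estimated cause ~ {mu:.2f} (true value {true_cause})")
```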

Look forward to "Universe and Mind 6" and to the book about the Prophets of the Thinking Machines, and check the Github page for comparisons.

* PS: Funnily, their demo features a drone - a field where I recently worked "professionally" in a start-up, though, given the management there, I decided it was better to pursue it as a hobby. See the EZ drone brain experiments series; I don't have time to focus on it lately, but it will continue with a ROS line of experiments.

* According to Yahoo Finance etc., they were valued at about $100M (as of about 12.11); there is info about a funding round of $3M CAD.
** A few days later it jumped to $132M, now $139M:
https://finance.yahoo.com/quote/VERS.NE/key-statistics/
So my theory is on the rise... Go, go, go!

"The Sacred Computer" is lacking that luxury, yet, LOL, or any luxury, more than twenty years after I clearly expressed the direction about what to do in order to build generally intelligent machines. Thinking out loud, on the second thought though, a "consolation" is that if one aims to be young forever, the passing of the years shouldn't matter much. I guess: so far, so good with the goal and keeping within the "setpoint" in my "Forever Young" program.



I'm looking for partners as always:

https://github.com/Twenkid/Theory-of-Universe-and-Mind/

https://github.com/Twenkid/Vsy-Jack-Of-All-Trades-AGI-Bulgarian-Internet-Archive-And-Search-Engine

*** There's an American guy called Bryan Johnson (https://blueprint.bryanjohnson.com/ *), who has similar goals and calls himself "a professional rejuvenation athlete". There are similarities, but some big differences as well: for instance, I am "natty" and my "blueprint", which I am still improving, is very cheap. My diet is diverse, consists of ordinary food from the supermarket and the grocery store, and is not strictly "clean" or "healthy" (by many standards), nor only low-glycemic-index food. He admits he takes about 100 pills a day, including testosterone replacement (technically he's "on juice", "illegal"), besides eating a very special low-glycemic diet, applying many other special procedures etc.; I only take cheap magnesium, and sometimes I skip even that (e.g. yesterday).

* Now that I revisit his site, it "welcomes" the visitor with a big ad for his brand of olive oil:
"Extra Virgin Olive Oil is more powerful than resveratrol, NR, cold plunge, sauna and your favorite podcast".


Well, the "Bulgarian Blueprint" is not published yet and needs to be analyzed, 
 that's a topic for another conversation or videos. Also it is possible that it may work with people with similar genetics, as well as Johnson's program may work for his similar fellow Americans or Anglosaxon-like-origin people. 

One of the differences is that he claims he aims at being "18 again", while I guess I've always been much younger than that in many aspects: in some I'm more like in my early teens, and in others, e.g. endless curiosity, learning and development, I am maybe still at 11, or 6, or ... See the definition of "Twenkid", which I coined back in 2008, a whopping 15 years ago, as a continuation of another term, "Yunak", coined another 7 years earlier: http://artificial-mind.blogspot.bg/2008/04/twenkid.html

Stay tuned, join, collaborate, like, subscribe, comment, share and donate! (and laugh, of course)

Ring Dips Progression Workout #3 Towards Muscle Up






Edit: A little one about the "20 years" paragraph, 26.11.2023

Sunday, November 12, 2023


EZ Drone Experiments #2 - Flying in a warehouse with a depth camera


https://youtu.be/60w4a93LjX0

Gazebo Garden, Ardupilot, Python, Linux (Ubuntu 22.04 in WSL2)
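For readers who want to reproduce the setup, the Python side typically talks to the ArduPilot SITL vehicle over MAVLink; here is a minimal, hedged sketch with pymavlink (an assumption about the stack, not the actual code behind the video; the UDP endpoint is the common SITL/MAVProxy default).

```python
# Minimal pymavlink sketch: connect to ArduPilot SITL, arm and take off (illustrative only).
from pymavlink import mavutil

master = mavutil.mavlink_connection('udp:127.0.0.1:14550')
master.wait_heartbeat()                      # wait until the autopilot sends a heartbeat
print(f"Heartbeat from system {master.target_system}, component {master.target_component}")

master.set_mode('GUIDED')                    # GUIDED mode accepts takeoff/position commands
master.arducopter_arm()
master.motors_armed_wait()

# Take off to 2 m: MAV_CMD_NAV_TAKEOFF, param 7 is the target altitude in metres.
master.mav.command_long_send(
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_CMD_NAV_TAKEOFF,
    0, 0, 0, 0, 0, 0, 0, 2.0)
```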


Monday, November 6, 2023


Autonomous Drone Brain Experiments: E.Z. 0.001 for Vsy/"Jack of All Trades" AGI infrastructure

https://youtu.be/STm_WAlUJaI


Early experiments by "The Sacred Computer" (Свещеният Сметач): to be continued. What E.Z. stands for will be disclosed later. Vsy "Jack of All Trades" is a project for an AGI infrastructure.
https://github.com/Twenkid/Vsy-Jack-Of-All-Trades-AGI-Bulgarian-Internet-Archive-And-Search-Engine

I've been studying the drone and robotics simulation domain actively, mostly during the summer. Currently I'm busier with more fundamental and abstract research*, but I am continuing the work on drone and robot simulations as a practical side project, and it is supposed eventually to turn into an ML playground.
Stay tuned for updates; I may report more details during the SIGI virtual conference, which will be either around the end of 2023 or in 2024. I am looking for partners in this "adventure" (as in all the others).
Made with Gazebo, Python, Linux (in Windows WSL2). Join my adventure or invite me for a joint project: https://github.com/twenkid http://artificial-mind.blogspot.com http://research.twenkid.com


Saturday, October 28, 2023


Contributing to a Robotics Startup, Universe and Mind 6 and Theory of Universe and Mind and Calisthenics - update 10.2023

Hi guys, visitors of my "Universal man's and AI/AGI journey"! A lot of things are going on, but I've been too busy to blog about them.

One activity was contributing to a drone-related robotics "garage startup". It was a hardcore, intensive "exercise", but I left after I helped secure a first investment from an individual investor and happy early customers. A stupid moment to quit, right? Well, unfortunately many things were not quite right from the beginning, to put it delicately; they were quite wrong and not as they were supposed to be - besides my neglecting my own research and projects.

I've been preparing a huge book about my pioneering work and comparisons with still "new" top research which is replicating the theory, structure, claims and reasoning from my early 2000s writings, collectively called "Theory of Universe and Mind": https://github.com/Twenkid/Theory-of-Universe-and-Mind (for example, the core claims and ideas of the "Free Energy Principle/Active Inference" by Karl Friston and his students such as Maxwell Ramstead, and other related work from that school by Andy Clark etc.). More about that in the TOUM link and in the book.

I worked on a new major piece of the theory, called "Universe and Mind 6", which also discusses some related works. It gained a lot of volume quickly during the spring and could have been published then, but it went on hold due to my busyness with the start-up, and because a few more ideas kept popping up which I didn't have time to elaborate, while I kept discovering new related theories etc.

...

The experience with that company seeded a new plant/thread in the "Jack of All Trades" project: autonomous vehicles and navigation for drones and mobile robots, in simulations and in the real world. I am open to collaboration in this domain - see more info about my current skills in my Linkedin account or contact me. I may publish a demo later.



A "mini-conference" is in prepratation: SIGI-2023 (or 2024), the second mini-"conference":
https://github.com/Twenkid/SIGI-2023-1/

Etc.

In the meantime, lately I've been improving my strength with ring dips and kettlebells.


Check my Youtube channel, which currently has a hit getting a whopping ~250 views per day, LOL.

Cheers while hanging on the rings, LOL.






Saturday, July 8, 2023


SIGI 2023 - Second "Conference" of the "Society" of Multidisciplinary and Interdisciplinary AGI/SIGI Researchers - Invitation

 https://github.com/Twenkid/SIGI-2023-1

The details will depend on the partners and participants. Check the original 2012 "conference", which funnily took place at a hotel called "Intel Coop", LOL (almost Intel Corp.).


As for my participation, the event will probably be related to my enormous book "The Prophets of the Thinking Machines ..." (it will soon surpass 1500 pages), which summarizes and proves how my teenage "Theory of Universe and Mind" was decades ahead of the current "groundbreaking" interdisciplinary theories which discover or interpret the Universe as agential/an agent, as all about prediction; see the repo:

https://github.com/Twenkid/Theory-of-Universe-and-Mind 

Perhaps I will also present the project "Jack of All Trades", or Vsy, an AGI infrastructure for which I'm looking for partners.

There will be a chapter on robotics and drones, as at SIGI-2012, where Svetlin and Daniel presented ROS and OpenCV, and where our spiritual leader from a distance was Dr. Peter Kormushev, morally supporting us. :)

Etc. 

Let's see who will join.








Tuesday, June 20, 2023


Connor Leahy is Rediscovering the Wheel in AGI 21 Years Later - How Were AI and AGI Defined? What is AI/AGI?





Todor - an AGI prodigy and author of the world's first university course in Artificial General Intelligence (Plovdiv, 2010, 2011) - challenges and comments on the opinions of the AI Alignment and AI expert and modern AI prodigy Connor Leahy from a debate on the Machine Learning Street Talk channel. The summary was GPT4-generated and posted by Tim Scarfe in the Discord channel and in the Youtube video:    • Debate On AGI: Ex...   What is intelligence? What is Artificial General Intelligence? How would it be created? The Theory of Universe and Mind, hierarchical prediction, solving complex problems ... TO BE CONTINUED... Other links:
Lectures from the AGI courses, 2010, 2011: http://research.twenkid.com/agi/2010/
The syllabus and info about the AGI course in 2010, and a video about it: http://artificial-mind.blogspot.com/2...
One of the early AGI blogs (2007-...), continuing one of the oldest AGI and transhumanism e-zines, "The Sacred Computer", where the Theory of Universe and Mind was published between 2001-2004: http://artificial-mind.blogspot.com
"The Sacred Computer" and links to the historic publications (below, in Bulgarian): http://eim.twenkid.com
"The old version, issues 1-31 (2000-2005)" ("Старата версия, бр.1-31"): http://eim.twenkid.com/old/
Etc. - additional links from these.

Monday, May 29, 2023


Genius Beauty in Todor's Boy's Room Part I - Bing Image Creator DALL-E

 Check some of my and DALL-E’s masterpieces. :) 


https://youtu.be/u2M_FKGtF94

To be continued


Wednesday, May 10, 2023


2023 Update: Training GPT2-MEDIUM etc. from scratch and unlimited-length generation with hidden direction of the prompt for any generative model

 https://youtu.be/_CPDnyUjjWg

Training GPT2-MEDIUM from scratch on Colab. UPDATE 6–5–2023: float32 fix (no mixed precision, which caused an error) and chained generation of unlimited length with directed prompt injection (see the other video on Todor’s Youtube channel).
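For orientation, training "from scratch" means initializing GPT2-Medium's architecture with random weights rather than loading the pretrained checkpoint; a rough sketch with the Hugging Face Trainer might look like the following (my own assumptions about the setup, not the notebook's exact code; the corpus file name is hypothetical, and fp16 is simply switched off, which is how I read the "float32 fix").

```python
# Hedged sketch: GPT2-Medium trained from random initialization, plain float32 (fp16 disabled).
from transformers import (GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast,
                          Trainer, TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token

config = GPT2Config.from_pretrained("gpt2-medium")   # architecture/config only
model = GPT2LMHeadModel(config)                      # random weights, i.e. "from scratch"

dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]  # hypothetical file
dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                      batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt2-medium-scratch",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    fp16=False,                                      # no mixed precision, plain float32
)
trainer = Trainer(model=model, args=args, train_dataset=dataset,
                  data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```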

An update to my Youtube tutorial and notebook from 2021, after I discovered the code needed a little tweak to run again. I also added the code for the unlimited-length generation with hidden prompt injection.

The same idea can be applied to any generative model, if you have control over it and can inject a prompt into the context which is later hidden from the output.
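Here is a minimal sketch of the chained, "unlimited-length" generation with a hidden directing prompt (my own illustration of the idea with Hugging Face transformers; the steering text, lengths and sampling settings are arbitrary): the model always sees the hidden prompt plus the tail of the text so far, but only the newly generated continuation is appended to the visible output, so the steering text never appears in it.

```python
# Illustrative sketch of chained generation with a hidden directing prompt (GPT-2).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.eval()

hidden_prompt = "Write in the style of a cheerful travel diary. "   # steering text, never shown
visible_text = "Day one in Plovdiv:"

for _ in range(3):                                   # each pass extends the visible text
    context = hidden_prompt + visible_text[-1500:]   # hidden prompt + tail of the visible text
    ids = tok.encode(context, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(ids, max_new_tokens=60, do_sample=True, top_p=0.95,
                             pad_token_id=tok.eos_token_id)
    new_text = tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
    visible_text += new_text                         # only the continuation is kept

print(visible_text)                                  # the hidden prompt does not appear here
```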

GPT2, Colaboratory
#gpt2 #colab #python #promptengineering


Monday, April 17, 2023


The hardware and resources inequality in AI/AGI: an old story now rediscovered by the worried mainstream — 2013 & 2009 articles vs a 2023 paper

I've started to publish on Medium as well - I should have done so long ago, as it has a community and "social life", but better late than never. I may republish some of the articles from here there, in order to hopefully extend the reach to the appropriate audience.


“Montreal.AI, 23 h: Choose Your Weapon: Survival Strategies for Depressed AI Academics, Julian Togelius, Georgios N. Yannakakis: https://arxiv.org/abs/2304.06035
#ArtificialIntelligence #DeepLearning #MachineLearning”

While it is true that even 8 or 10 years ago even regular programmers could have the GPU power (the well-paid ones who owned their time and aimed at the right target; but usually the ones who make money lack the vision and buy GPUs/hardware for games, and the ones who had vision and intelligence had no money), the “inequality of opportunities” is of course not a new phenomenon, including in AI. I wrote about it in 2013, and it was valid for the pioneer AGI researchers, one of whom was I, publishing substantial works since 2001, aged 17, and authoring the world’s first university courses in AGI in 2010 and 2011, with theories and a course program that still stand and are only confirmed and elaborated by more and more researchers and publications. The inequality phenomenon held for the AGI researchers versus the well-“fed”, well-funded, high-profile and famous academics who “rolled their eyes” when they heard about AGI (ask Hassabis, Legg; and Altman, even about 2010 and the early 2010s at MIT - Altman refers to 2015, when they founded OpenAI). It held versus any researchers from the Academia (with students working for them, “free” laboratories etc.), and of course versus the industry.

https://artificial-mind.blogspot.com/2013/08/issues-on-agiri-agi-email-list-and-agi.html

A part of the conclusion of this work:


“… — WORKABLE THEORIES and IMPLEMENTATIONS


Some people try to work on workable theories and implementations, but this list is a home of the poorest and the most lonely ones in the AGI community, even though some of them were some of the pioneers of the new wave of that community, long before the “institutionalized” researchers took it as “prestigious”.


The list’s researchers poorness impedes their opportunities/motivation for concentrated work/producing academic-style materials — many believe the mainstream academic system (including many aspects of the peer-reviewed journals etc.) has intrinsic corruptions and have left it for “political” reasons.

Moreover, even if they do know how or have potential to develop working machines, this is a big effort that may take a lot of time before they could have a complete system — coded and running. If they haven’t produced visible results already, that doesn’t imply they wouldn’t do after years of collection of critical mass, as long as they could work.
Besides they are supposed to be 10, 100 or 1000 times more capable than the normally funded and organized ones from the academic/industrial competition. Current ones can’t afford visiting appropriate conferences or travel around research centers and are alienated.
They should have much broader knowledge and skills, acquire new knowledge and skills in a shorter time and work much faster, because:
 — they can’t afford truly focussed work — too much other troubles, too much sub-problems they should solve alone, a lot of wasted time in attempts to find partners or develop some “booster-funding” technologies, plenty of frustration due to the isolation and helplessness against all the problems [including the dumb financial etc. ones] they have to solve [implement] alone (or give up)
 — they do not have students, partners or “slaves” to give the dirty job to [or barely have, but it’s hard to motivate anyone without funding]

Overall, they should shoot 100 or 1000 targets with one bullet, or they “die out” [in the race]
Welcome to the list of the losers… :))
However some of these “losers”, due to the extreme requirements they face, may really be 50 or 100 times more productive or knowledgeable and non-conventional than the “ordinary” funded and supported competition, and may have guts and balls that the others lack.

Otherwise they should have given up, be part of the existing institutes — “institutionalized” — or from the “AI”. But they are not from those institutes, because when they proclaimed that “AI was wrong” they were outsiders already, heading towards new directions.

Furthermore, those brave ones are supposed to believe and find a way to make thinking machine possible on cheap, old and slow hardware, otherwise they should have another reason to give up to the supercomputer owners and the rich institutionalized researchers…”

The same about NLP:

“What’s wrong with NLP? Part 2”, 3/2009

https://artificial-mind.blogspot.com/2009/03/whats-wrong-with-natural-language.html


One other option for the academics, who are pretty wealthy but complain that OpenAI, DeepMind etc. are wealthier:


Invent something that’s really innovative, different and more efficient. Everybody prefers to just pour in more hardware, make a little change and engrave her name for “new contributions” (what about the credit for the hardware designers and producers?). It was similar in the 2000s with NLP: change one bit of some algorithm, produce an increase of 0.1% on some measure/benchmark and there you are: “a new NLP model”, “moving the SOTA”. Why not build a new paradigm from the ground up? But yes, you can’t, because the default is that if you try, you won’t be accepted until you beat the competition, and, as explained above, in order to do that and be accepted, you have to be 1000 times more efficient than them while working on your own with no resources. :)


Wednesday, April 5, 2023


Memory of the Visionary Research Directions from this blog's second post (2007), and a comment on the visual transformers and their representation

Looking back at the second post in this blog (after the first one, which was a placeholder)...
https://artificial-mind.blogspot.com/2007/11/research-directions.html 


Research Directions

Target research directions so far:


Research Directions
    • Artificial General Intelligence
    • Artificial Mind
    • Artificial Life
    • Cognitive Computing
    • Cognitive Science
    • Computational Linguistics
    • Data Mining
    • Computer Vision
    • Image Processing
    • Sound Processing

Main direction:

Understanding the processes of learning, thinking, imagination, problem solving, decision making and development of evolving, thinking and creative machines.

Sub directions:

  • Perceptions, mind states, thoughts, memories, imagination, desires, intentions etc. representation, simulation and generation.
  • Natural language understanding.
  • Natural language generation.
  • World-knowledge representation, world-physics and human behaviour simulation for NLU, NLG and for perceptions, thoughts etc. simulation.
  • Machine imagination and creative machines. Creative writing by machines. Dreaming machines.
  • Machine learning, based on world-knowledge representations and simulations evolved from the input.
  • Building world-knowledge and language competences by semi-supervised machine learning, using the web as world-knowledge feeder and language teacher.
  • Differential intelligence researches.
  • Didactics methods for measuring general intelligence of machines.
  • First language acquisition by humans. Modeling language skills development.
  • First language acquisition by machines, which learn their knowledge, "corpora" and grammars like children do - by reading, analyzing and building new knowledge step by step, with optional support of supervising knowledge given by human "teachers" or taken by the machine from ordinary textbooks and interaction with people on the Internet.
  • Conversation agents. "Chat bots", "Virtual bloggers" and "Virtual forumers" which do NLU, "imagine" what the conversation is about, have intentions and express thoughts about the topics, aiming to keep real conversation.
  • Intelligent Desktop and Network Search Engines, Intelligent Personal Organizers, Document and Notes Classifiers and Virtual Assistants.
Other directions:

Sound Processing:
  • Speech Modeling
  • Speech Synthesis
  • Synthesis of Singing
  • Speech Mimicry (extracting voice features from input speech, then applying them in a speech model and synthesizing speech with the same voice as in the example).
Image Processing:
  • Advanced preprocessed image formats, assisting computer vision.
  • Memory and heuristics based generation of photo realistic images, without complete 3D-modeling and rendering.
  • Memory and heuristics based generation of 3D-models from single or multiple images.
  • Computer Vision - Image/object recognition, categorization, generation, combination. Bots and robots moving in virtual 3D worlds, a real world, or in hybrid 2D-3D world simulations as in Quest games, which perceive the world through vision systems.
...

Regarding image processing, lately I've been playing with Bing Image Creator (DALL-E). I'll show pictures from my plays with it later; here is a comment of mine from an AGI chat two days ago:

Todor: (...) Also, "meaningfully selective" is questionable; from some points of view transformers are amazingly meaningfully selective, much better than humans in text-to-image, or "concept-to-image".

The generative models are better than humans at analysis and synthesis, especially with images. Human synthesis capabilities with images are, for most humans, almost entirely lacking, while DALL-E and MidJourney produce amazing and aesthetically pleasing photorealistic images, which apparently are rendered by a process that is isomorphic to a classic rendering system with implicit 3D models and a designer who places them in a reasonable composition, with proper materials, lights and ray tracing or global illumination. Most humans struggle to draw even stick figures or legible handwriting: however good and robust their perceptual features are, they are incapable of reconstructing the output. Average humans are capable only of superficial recognition and of evaluating photorealism, if they have whole images in front of their eyes for inspection; the artistically gifted and trained could recreate some of the general principles of lighting, but painstakingly slowly and usually with references, photos etc., and they would struggle when light interacts with reflective and transparent elements in the scene, if they lack references.

I.e. these transformers are far superior to even super-talented humans in that aspect of analysis and then synthesis of the "causes and effects" in the world of their inputs; they have better articulated and mapped internal models of the visual physics of light and objects.

As for directly adjusting parameters of objects, like textures pixel-by-pixel or the tiniest 3D-model detail: humans also need explicit 3D models and 3D editors for that, such as Blender or 3D Studio Max, which, besides the structures of the brain, have explicit meshes, triangles, materials etc. defined, and they adjust these details iteratively in a slow process. Given such general representations and directions, the generative models are excellent.

Also, I remember an old note of mine: there could be different approaches to AGI, but at their latest stages and levels they are supposed to get more and more isomorphic and to converge, because they are supposed to work with and represent similar cognitive structures, and they start with similar ones.
