Friday, January 1, 2010


I will Create a Thinking Machine that will Self-Improve (an Interview with Todor): Dreamers and adventurers make the great discoveries. The sceptics' job is to deny their visions, and eventually not to believe their eyes

An interview for the “Obekti” magazine, November/December 2009.
See and download the original Bulgarian version:

[A note about the title: the editor's choice for the word "Self-Improve" in Bulgarian, literally translated, would read as "Will Self-Complicate" or "Will Make Itself More Complex", because I explained that the system would create ever more complex models of its sensory inputs [and motor outputs - intentions]: the "Seed" AI would accumulate complexity and make itself more complex.]

Todor Arnaudov's Bio:

Todor is 25, born in the city of Plovdiv, Bulgaria. He holds an MS in Software Engineering and a BS in Computer Science from the University of Plovdiv (highest average grades), and was an intern at RIILP, Wolverhampton, UK, where he studied Natural Language Processing. Todor started to play with computers as a boy; his first experiments with computer graphics and digital signal processing date back to the late 1990s on his Pravetz-8M (an Apple][e clone), for which he developed a communication system for disk transfer between the Pravetz-8M and a PC, based on sound frequency modulation-demodulation. Todor is the author of a speech synthesizer (“Glas”) and of “Smarty”, a context-sensitive English-Bulgarian dictionary that was presented at the LREC 2008 and IMCSIT 2008 conferences. He has also worked as a software developer and as a verification engineer at a semiconductor start-up.

Todor's biggest scientific thrill, though, is Artificial General Intelligence; at the moment he's an independent researcher, aiming to found a private research company. He's also an artist, a writer and an independent filmmaker, and is searching for ways to fund his research through show business. At the “Researchers' Night” at Sofia's Technical University, he presented ideas from his Theory of Intelligence, which he created as a teenager.

Todor Arnaudov: I will create a thinking machine that will self-improve. Dreamers and adventurers make the great discoveries. The sceptics' job is to deny their visions, and eventually not to believe their eyes.

- Artificial Intelligence, or AI, is a wide field. Would you explain to the readers for example what is the difference between “Weak AI” and “Strong AI”?

AI is the science of systems that solve complex problems which are assumed to require human intelligence. Weak AI solves specific tasks such as image and speech recognition, machine translation, or self-driving cars. Strong AI is much more ambitious: its aim is to answer the general question – What is Intelligence? – and to create universal systems capable of reaching and surpassing humans in all cognitive aspects. Strong AI is also called Artificial General Intelligence or Universal AI.

- Did a particular event push you to start working on the concept of a Thinking Machine?

Yes, the movie “Terminator 2”, when I was 7. The concept of thinking machines excited me. As a teenager I had an inspiration – I wrote some SF and philosophical prose about AI and developed my own general philosophy and theory of the principles of Mind (intelligence) and the Universe. Yes, it was weird... I realized that AI is a universal science and strategically the most important task, because solving it would accelerate any possible research.

- Do you have colleagues in Bulgaria in this field?

Maybe yes, maybe no... Boicho Kokinov and Moris Grinberg are doing Cognitive Science at NBU, Sofia; they work on the cognitive architecture DUAL. A research laboratory called “Sphere” is doing a sort of intelligence research, but the material I've read is quite abstract. During my presentation at the Researchers' Night at the Technical University of Sofia, I met Yordan Yankov from the Center for Research of Global Systems; Yordan is working on his own theory of intelligence, and he mentioned a special logic system, something related to Quantum Logic and Hegel's dialectic, if I'm not mistaken. Maybe you know about “Kibertron” – an intelligent humanoid robot project. They claim that they have a model of “natural intelligence”, but they require 5 million euros in order to implement it.

- Where should researchers' efforts be focused in order to achieve Artificial General Intelligence (AGI)?

First of all, research should be led by interdisciplinary scientists who see the big picture. You need to have a grasp of Cognitive Science, Neuroscience, Mathematics, Computer Science, Philosophy etc. Also, the creation of an AGI is not just a scientific task; it is an enormous engineering enterprise – from the beginning you should think of the global architecture and of universal low-level methods which would lead to accumulation of intelligence during the operation of the system. Neuroscience gives us some clues, and the neocortex is “the star” in this field. For example, it's known that the neurons are arranged in a kind of unified module – the cortical column. Columns are built of six layers of neurons, and different layers contain specific types of neurons. All the neurons in one column are tightly connected vertically, between layers, and process a piece of sensory information together, as a whole. All types of sensory information – visual, auditory, touch etc. – are processed by the interaction between such unified modules, which are often called “the building blocks of intelligence”.

- If you believe that it's possible for us to build an AGI, why haven't we managed to do it yet? What are the obstacles?

I believe that the biggest obstacle today is time. There are different forecasts – 10-20 years to enhance and refine current theoretical models before they actually run, or before computers get fast and powerful enough. I am optimistic that we can get there in less than 10 years, at least to basic models, and I'm sure that once we understand how to build it, the available computing power will be enough. One of the big obstacles in the past was perhaps the research direction – top-down instead of bottom-up – but this was inevitable due to the limited computing power. For example, Natural Language Processing is about language modeling; language is a reduced end result of many different and complex cognitive processes. NLP starts from that reduced end result and aims to get back to the cognitive processes. However, the text – the output of language – does not contain all the information contained in the thought that created it.

On the other hand, many Strong AI researchers now share the position that a “Seed AI” should be designed: a system that processes the most basic sensory inputs – vision, audition etc. The Seed AI is supposed to build and rebuild ever more complex internal representations – models of the world (actually, models of its perceptions, feelings and its own desires and needs). Eventually, these models should evolve into models of its own language, or models of human natural language. Another shared principle is that intelligence is the ability to predict future perceptions based on experience (you have probably heard of Bayesian inference and Hidden Markov Models), and that the development of intelligence is the improvement of the scope and precision of its predictions.

Also, in order to create the effect of evolution and self-improvement, and to avoid an intractable combinatorial explosion, the predictions should be hierarchical. The predictions at an upper level are based on sequences of predictions (models) from the lower level. A similar structure is seen in living organisms – atoms, molecules, cellular organelles, cells, tissues, organs, systems, organism. Evolution and intelligence test which elements work (predict) correctly. Elements that turn out to work – to predict – are fixed; they are kept in the genotype/memory and are then used as building blocks of more complex models at a higher level of the hierarchy.
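For illustration only, the idea of "fixing" working predictions as building blocks of a higher level can be sketched as a toy program – my own minimal illustration with invented symbols, not any of the mentioned systems. A first-order Markov predictor learns which symbol follows which at the lowest level; its reliable predictions are fixed into chunks, which a second predictor then treats as single symbols:

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    """First-order predictor: learns which symbol tends to follow which."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, prev, nxt):
        self.counts[prev][nxt] += 1

    def predict(self, prev):
        """Return the most frequently observed successor of prev, or None."""
        if not self.counts[prev]:
            return None
        return self.counts[prev].most_common(1)[0][0]

def train(predictor, sequence):
    for prev, nxt in zip(sequence, sequence[1:]):
        predictor.observe(prev, nxt)

# Level 0 learns transitions between raw "sensory" symbols.
level0 = MarkovPredictor()
train(level0, list("abcabcabc"))

# Predictions that worked are "fixed" into chunks ("ab", "bc", "ca"),
# which the higher level then treats as single symbols.
chunks = [s + level0.predict(s) for s in list(level0.counts) if level0.predict(s)]
level1 = MarkovPredictor()
train(level1, chunks)

print(level0.predict("a"))   # most likely successor of "a" is "b"
print(level1.predict("ab"))  # most likely successor of the chunk "ab" is "bc"
```

The same principle would repeat upward: each level compresses reliable lower-level sequences into units, so the combinatorics at any single level stay tractable.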

- What exactly is done in the field? Globally, in Bulgaria?

As of yet, few researchers and organizations are confident enough to state officially that AGI is their goal, but their number is progressively increasing. Jeff Hawkins is probably the most popular figure in the field; he's the author of the famous book “On Intelligence”, explaining his theory of intelligence. Jeff is a founder of a neuroscience institute focused on the neocortex, and his company Numenta is working on a new computer architecture inspired by the neocortex – hierarchical temporal memory, implementing the so-called memory-prediction framework. Another important figure in AGI is Ben Goertzel, the author of numerous books about intelligence. Ben is trying to build an AGI at his company Novamente and plans to use the virtual worlds of massively multiplayer games to teach it. Boris Kazachenko investigates intelligence as a universal algorithm for generalization, and cognition as a part of the meta-evolution of the Universe; he's developing a theory of intelligence. If you want to join the AGI research community, you should also consider the work of Juergen Schmidhuber, Marcus Hutter, Tomaso Poggio and Hugo de Garis. The Singularity Institute organizes a world conference each year about the so-called “Technological Singularity”, including the advent of universal artificial intelligence and its future effect on humanity.

I can't tell what my colleagues in Bulgaria are doing in the field; as for myself, right now I'm warming up – clarifying my own ideas from the past and studying others' theories. Afterward, I'll continue with improving and refining my theory of intelligence, and I plan to start experiments with a simple seed AI. My ambition is to found a research company, like Hawkins and Goertzel, but I don't have partners and capital yet – I'm searching for them.

- What would these experiments look like?

I will create intelligent agents and will watch their development in virtual worlds. Such an agent would have a “brain”, where I'll implement ideas from my own and others' theories, as well as part of the human brain architecture – cortex and old brain. The cortex has several main types of “zones”, functional units – sensory, motor (linked with “will”) and associative (connections/dependencies between different zones). The old brain is responsible for the emotions and for the feeling of satisfaction/dissatisfaction of the basic instincts and needs. The agent would have sensors and feelings – vision, hearing, touch, pain, hunger, pleasure and others – and a virtual body, which will allow it to interact with the virtual reality, to feed itself, to avoid trouble etc. Just after its “birth”, the agent would be controlled entirely by the old brain and would act mostly chaotically, driven only by the basic instincts, such as pulling away from hot or cold places, or attraction to the smell of food. The cortex will constantly watch and record the agent's sensory inputs and motor commands and will search for patterns that link them. The cortex's goal is to find the patterns that better satisfy the agent's basic needs. If the simple experiments are successful, I will make the virtual worlds and the virtual body more dynamic and will fill them with a greater variety of stimuli and patterns. That is supposed to lead to the emergence of more complex behavior. Eventually the virtual world is supposed to be replaced by real inputs – from cameras, microphones etc.
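A toy sketch of that loop, just to convey the general idea (the sensations, rewards and actions here are invented for the example; this is not the planned implementation): a reflexive "old brain", a satisfaction signal, and a "cortex" that records sensory-motor history and takes over once it has seen which actions paid off.

```python
import random

random.seed(0)  # reproducible toy run

def old_brain(sensation):
    """Reflexes: flee discomfort, approach food smell, otherwise act chaotically."""
    if sensation == "hot":
        return "move_away"
    if sensation == "food_smell":
        return "approach"
    return random.choice(["move_away", "approach", "wait"])

def reward(sensation, action):
    """Satisfaction signal from the basic needs."""
    if sensation == "food_smell" and action == "approach":
        return 1.0
    if sensation == "hot" and action != "move_away":
        return -1.0
    return 0.0

class Cortex:
    """Records (sensation, action, reward) triples and prefers what paid off."""
    def __init__(self):
        self.history = []

    def record(self, sensation, action, r):
        self.history.append((sensation, action, r))

    def best_action(self, sensation):
        outcomes = {}
        for s, a, r in self.history:
            if s == sensation:
                outcomes.setdefault(a, []).append(r)
        if not outcomes:
            return None  # no experience yet: defer to the old brain
        return max(outcomes, key=lambda a: sum(outcomes[a]) / len(outcomes[a]))

cortex = Cortex()
for step in range(200):
    sensation = random.choice(["hot", "food_smell", "neutral"])
    # The cortex acts when it has experience; otherwise the old brain reflexes rule.
    action = cortex.best_action(sensation) or old_brain(sensation)
    cortex.record(sensation, action, reward(sensation, action))

print(cortex.best_action("food_smell"))  # after experience: "approach"
```

In the real experiments the patterns would of course be far richer than a lookup of average rewards – the point of the sketch is only the division of labor: chaotic, instinct-driven behavior at "birth", with the cortex gradually extracting the regularities that satisfy the basic needs.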

- Many people believe that an AI should know everything in order to convince them that it is intelligent. However, raw knowledge is not the most important aspect, is it? What do you think the Artificial General Intelligence machine would look like, and the Ultimate AI?

The most important capabilities of an artificial general intelligent machine are self-improvement, learning and universality. A system that interacts with people and its environment, and, like a baby, develops from a helpless state to the mental level of, say, a 2-year-old toddler, is much closer to my vision of a Thinking Machine than current robots and specialized, narrow AI tools such as speech recognition, image recognition, search engines etc.

The Ultimate AGI is capable of self-improving even its own most basic algorithms and is ever reorganizing itself in order to work better and better, up to the ultimate limits. Humans have a similar mechanism, called neuroplasticity, which however declines after the very early years.

- There are many ethical issues around the creation of intelligent machines. Don't you think we will need to separate machines into “good” and “bad” in the future?

I believe that the thinking machines will be the most similar creations to ourselves that we have ever encountered, because intelligence is our most special quality – our bodies are not so special. It is true that machines could do evil things, like in the movies, if they go out of control or fall into the hands of “bad guys”; unfortunately this is true of all big inventions. Robots would create new and complex cases for the lawyers as well.

- What would you tell all the sceptics who deny that AGI can ever be created?

I wish them good health. Dreamers and adventurers make the great discoveries. The job of the sceptics is to deny, but afterwards not to believe their eyes.

Twenkid Research
