Monday, December 16, 2013


Demo of Toshko 2.055 alpha - a Bulgarian speech synthesizer | Toshko 2.055 Alpha Demo





More info: http://twenkid.com/software/toshko2/

Saturday, December 14, 2013


"Toshko 2" - requests for the product | Toshko 2 preorder form (pre-announcement)


This form doesn't work yet, but if I manage to steal enough time from the other 101 directions I'm working in, it may go live as early as next week or the week after. And keep your fingers crossed for some sales - or just go ahead and make them happen yourselves!... :)


More info: http://twenkid.com/software/toshko2/

Tuesday, December 10, 2013


Entropica's Equation of Intelligence - a Discussion and Criticism | "CaaS" Intelligent Operating Systems | Intelligent Operating Systems and Entropica - Comments

Entropica: F = T ∇S_τ (the "causal entropic force") ... [Bulgarian - see below]

"higher freedom of action"

http://www.kurzweilai.net/ted-an-equation-for-intelligence-by-alex-wissner-gross
A publication on that topic appeared earlier this year. Now that I have thought about it again (thanks to S. for sharing the link to the talk), at first it sounds related to (connected with, a by-effect of) generalizations such as:

Mind [general intelligence, versatile intelligence] aims at predicting the future with ever higher resolution of perception and causality/control, in an ever larger span, covering all possible dimensions (possibilities for expansion and increase of resolution, such as time, space; also subdividing into subspaces, new dimensions).

[Find my "Teenage Theory of Universe and Mind" first published in the early 2000s. See also: http://artificial-mind.blogspot.com/2014/03/whats-going-on-everything-predictions.html ]

Of course, resolution and span are trade-offs** given fixed cognitive resources. A larger span/area/time can be predicted (controlled, manipulated) only at a lower absolute resolution than a smaller area. But there is an aim at increasing the resolution, compared to previous states (capabilities), and at increasing the cognitive capacity as well.
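A minimal sketch of this trade-off, assuming a fixed memory budget and treating "coverage" as simply span times resolution (the numbers and the cost model here are illustrative assumptions, not from the talk):

```python
# Trade-off: with a fixed number of memory cells, a predictor can cover
# a wide span at coarse resolution, or a narrow span at fine resolution,
# but not both at once.
MEMORY_CELLS = 100  # fixed cognitive resource (assumed)

def coverage_cost(span, resolution):
    """Cells needed to model `span` units at `resolution` samples per unit."""
    return span * resolution

# Wide span, coarse detail: fits the budget.
assert coverage_cost(span=100, resolution=1) <= MEMORY_CELLS
# Narrow span, fine detail: also fits.
assert coverage_cost(span=10, resolution=10) <= MEMORY_CELLS
# Wide span AND fine detail: exceeds the fixed resources.
assert coverage_cost(span=100, resolution=10) > MEMORY_CELLS
```

Increasing the cognitive capacity (MEMORY_CELLS) shifts the whole frontier outward, which is the "progress" direction discussed above.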

In one word: progress.

Others from my school of thought have similar assumptions, in one way or another.

Prediction over larger ranges is achieved by generalization, which is compression - a reduction of the computational/memory requirements (in some regard, that is also a trade-off).
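A toy illustration of generalization as compression, under the assumption of noiseless linear data (the data and the model are my own example, chosen only to make the point concrete):

```python
# Generalization as compression: instead of memorizing every observed
# (x, y) pair, fit a rule; the rule is far smaller, and it also predicts
# beyond the observed range - prediction over a larger range.
observations = [(x, 2 * x + 1) for x in range(1000)]  # 1000 stored pairs

# "Compress" by recovering a linear rule y = a*x + b from two points
# (sufficient here because the data is exactly linear).
(x0, y0), (x1, y1) = observations[0], observations[1]
a = (y1 - y0) / (x1 - x0)
b = y0 - a * x0

def predict(x):
    return a * x + b

# Two numbers replace 2000, and the model extrapolates past the span
# that was actually memorized:
assert predict(5000) == 2 * 5000 + 1
```

The trade-off mentioned above shows up as the fitting work itself and as the loss of any detail the model class cannot represent.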

Edit: 30/12/2013:

It is also achieved by search and exploration, sometimes without compression. For example, some technologies are neither more general nor more sophisticated; they have to be found by doing research: systematic or random exploration of different possibilities, in order to bring a particular one into consideration and check/verify it. The research could be empirical, theoretical, or both. The limited capacity of the focus/working memory/processing (in any sense) requires that a "spotlight" scans the environment - any space, with any kind of coordinates.


  • Some problems can be solved only empirically, by an extensive sequence of probes, because:
    • the internal design is hidden
    • the observable behavior doesn't tell enough
    • the observable behavior tells too much and the observer can't digest it
    • it's a "piecewise" defined function, discontinuous and too complex etc.
    • some emergent models require simulation


  • Sometimes the spotlight may fail to find something that is otherwise adjacent.


  • More advanced understanding and technologies open new horizons for exploration. The latter could not be explored before the preceding technological advances, which help in two directions:
    • "hiding the complexity" accumulated so far, that is:
      • reducing the number of possibilities for action - the number of cases of what to do - by ignoring many possibilities as useless (rules about what not to do); what to research, what to skip, etc. (dead-end technologies, inefficient directions; the best practices, the most optimal approaches with given tools - once found, they are no longer searched for in the free space, but taken as trajectories)
      • the technology (the lower levels of processing) now does the job automatically, and cognition works in a different domain, using different concepts
    • allowing the focus to switch to something else.


  • "Control" (causality, government) in my terms is the ability to predict what's gonna happen, both passively and actively (to cause the future, to be aware of what you are going to do). The increase in resolution and span also allows the achievement of higher freedom of action.

    E.g.:
    • Perception: 4K or 8K video cameras and displays; higher-resolution clocks; nano- or femtosecond photography and video
    • Causality/Control: miniaturization in electronics; nanotechnologies; operation on single atoms
    • Span: telescopes seeing further away; detectors of single elementary particles

    I also agree that intelligence is trivial - see my discussions on "multi-domain blindness" here and on the AGIRI list. Elementary activities and skills are labeled "hard" or "difficult" by those who don't understand them; most people have an intrinsic cognitive inability to learn, and thus to understand, many otherwise elementary cognitive skills. In addition, the general limitations of the human speed of data output make expressing the knowledge (and turning it into a thinking machine) slow. There must be an efficient boot process to allow the machine to speed up the process (something I've been working on, but that's another topic).

     However, the increase of possibilities ("the freedom of future action") without additional constraints sounds overfitted to me.

    I think it is rather an increase of the freedom of actions from a particular class, given particular constraints - not of all possible actions. Most actions are useless, and some are the reverse of the previous action. It is true that the system should try to keep going (not stall) if it is supposed to progress... (See also a recent comment of mine on the AGIRI list.)

      The decrease of the possibilities of action is related to the danger of reaching [non-desirable] static stability, where the system would "stall" and get blocked.

    That reminds me of the dialectical materialists, with their "eternal motion" (dynamics) and their view of everything as "a form of motion". Recently somebody sent me, out of any context, a quote from Engels, roughly: "the presence of contradiction is a sign of truth, and the lack of contradiction is a sign of confusion". In my opinion, what Engels and the dialectical materialists meant could be quite different from the superficial literal interpretation.

    A "truth" to them is that "everything is in motion"; contradiction is a sign not of "truth" but of the presence of a potential for change. If there are no contradictions and the thinker doesn't search for any, then there won't be a potential for change - "no freedom of actions" - and the system has stalled. Of course, if there is no "motion" (dynamics), the state of affairs won't change.

    The happening of an event is a "change"; therefore, if there is no "motion" (dynamics, "freedom of actions"), there will be no change and no progress. If there is motion but no "freedom", i.e. the possibilities are considered "too limited", then there will be repetition of what is assumed as "past", "already given" - that is "no freedom".
    However, when intelligence reaches states which work "right" - which support a given prediction and causality control at a given level of abstraction, resolution, field or whatever - these states do freeze, as long as they still work (better than the other actions possible for the system)*.
    [Edit+: 11/12/2013, 1:26 - In fact, many intellectual goals require finding one single right solution and eliminating the freedom of possibilities down to only one, given particular resolution limitations - such as a given correct set of physical or chemical laws for the available observations. Further discoveries improve the resolution and the span of application, but at the lower resolution and in the old settings the old theories are still right; otherwise they wouldn't have become valid in their time.
    The standard example: Newtonian mechanics is precise for "slow" speeds and low masses, and in it time is practically absolute - it appears so, given that the resolution is low enough compared to the speed. People believed that it was absolute, because the observations suggested so.]
    Also, one part of human and animal behavior which impacts our intelligence is the one that reaches for what I call "physical rewards".
    The goals of the "physical" - or perhaps a better term is "lower emotional" - reward system are fixed; they are given "pleasures", and a system going after such rewards doesn't aim at progress. It finds a place of pleasure, it stays there, and the static stability in this domain is fine for it - it is even desirable.
    A system driven by such rewards explores the world only in order to find coordinates with static stability - from plenty of food for an animal, to a secure job for the general population. A "secure job" can be regarded as higher freedom of action (steady income; you can do things using social resources), but it is lower freedom of other actions: conformism, being obedient from such-and-such in the morning to such-and-such every single day, being fixed at a particular position, etc.
    Also, that "freedom" of action, if the actions are generalized, is not more: what most people do with their money is buy things. If an ordinary man were given a million dollars, he wouldn't become smarter, start playing the piano, directing movies, or designing new computer systems (first studying engineering, mathematics...).
    He would just go and buy an expensive car and a new house; he may also go to the Bahamas. Cognitively, it is not more freedom - unless the variety of input (the views of the Bahamas, more people getting interested in his person, etc.), the length of the trajectory that he can cross (he can afford a lot of travelling), the amount of goods that he can buy (and his ability to cause others to serve him), etc. are considered the amount of "freedom of action". It is an increase of the freedom of action, but it is not an increase of the cognitive freedom of action; it is not cognitive progress.
    Dopamine levels and novelty seeking
    This is related to the so-called reward system in the brain, and to the theoretical "lower emotional/physical" reward system in my theory. The release of dopamine conditions - associates - certain states of mind at the moment of release with "good", "pleasure", "desirable", and the agent starts aiming at those states.
    Playing computer games, gambling, (part of) sexual activities. Dopamine also promotes so-called novelty seeking and getting quickly bored, and in clinically significant extents it reaches manic disorder. Excessive novelty seeking is also related to Attention Deficit Disorder (ADD), and the aim of "increasing the amount of freedom of action", if taken alone as the only goal, ruins the lives of those who suffer from this condition.
    In order to have steady cognitive progress, there must be generalization and stabilization at certain points, which decrease the freedom of actions and simplify life.
    In more concrete terms from Entropica - for example, in the example clips with the balls and the balance keeping, the quantity that is maximized is the capability of the actuators to impact the target entities in a predictable (intentional, rational) way.

     In case the object falls down and the agent is not capable of picking it up, in this particular setting it will reach a static state - the poor agent won't have anything more to do. Also, if it starts from a state where the object is on the balancing bar, and all that this agent can do is "balance", move right-left, etc., then the freedom here is the amount of force that it applies, determining whether the ball will fall or stay.

    If the agent is able to do many iterations in a genetic-algorithm manner, it will discover that if it doesn't balance right, the ball will fall down, stop, and it won't "feel" it anymore, so it won't have anything more to do. If its aim is to do something, and it has to have "a ball on the bar" etc. in order to act, then the next time the agent will try to keep the ball up, in order not to "lose its job" and get "bored".

    However, if the agent is able to pick the ball up, it could throw it up and start balancing again (more freedoms). But if it could do this, throwing the ball around (like in the game of Arkanoid, for example) looks like even more "freedom of action" - the space of the coordinates of the ball's trajectory would be bigger, and it wouldn't matter which particular actions the balancing agent has chosen. It would be enough to kick the ball around, and that would rather turn into "randomness", "lack of order", "lack of rationality"... (While the "rationality" would be: kick the ball around, that's fun - and it is rational, if that is what the agent wants to accomplish.)

     As is shown in the examples, there are always constraints; it is not just "more freedom": the big ball/disk that pushes a small ball towards the other one, etc., all in a limited space. If there are no constraints, randomness is the "biggest freedom". Freedom in intelligence and in creativity requires constraints.
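The balancing discussion can be sketched as a toy agent that greedily picks whichever action keeps the most distinct future states reachable, so the absorbing "ball dropped" state is avoided. This is my own illustrative reduction of the causal-entropic idea, not Entropica's actual algorithm; the one-dimensional world, the horizon, and the state counts are all assumptions made for the sketch:

```python
# Toy causal-entropy-style agent on a 1-D "balance" world.
# States: ball position 0..4 on a bar; FALLEN = ball dropped (absorbing).
# Actions nudge the ball left/stay/right; the agent prefers the action
# after which the largest number of distinct states stays reachable.

FALLEN = -1

def step(state, action):
    if state == FALLEN:
        return FALLEN              # absorbing: nothing left to do
    nxt = state + action
    return nxt if 0 <= nxt <= 4 else FALLEN

def reachable(state, horizon):
    """All distinct states reachable from `state` within `horizon` steps."""
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {step(s, a) for s in frontier for a in (-1, 0, 1)}
        seen |= frontier
    return seen

def choose(state, horizon=3):
    """Greedy: pick the action maximizing the count of reachable futures."""
    return max((-1, 0, 1), key=lambda a: len(reachable(step(state, a), horizon)))

# At the bar's edge (position 0), stepping off would collapse the future
# to the single FALLEN state, so the agent moves back toward the middle.
assert choose(0) == 1
assert step(FALLEN, 1) == FALLEN   # once dropped, the freedom of action is gone
```

Note that the constraints (the bar's ends, the absorbing state) are exactly what make the action choice non-trivial; without them, every action would keep the same number of futures open and the greedy criterion would degenerate into randomness, as argued above.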

    Without constraints, randomness is sufficiently "inventive". At the same time, however, it may seem not really "free", because there is no "intentional" (non-observable, "hidden variables") part. (Well, some particles do seem to possess variables "hidden" from the observers, i.e. they are unpredictable from the possible observations alone - regarding quantum mechanics.) The author, Alex Wissner-Gross, quotes Feynman at the end of the talk regarding the most basic physical laws: objects close to each other repel, and ones far from each other attract. In my opinion, the above is also related to the dialectical materialists' view of "motion".

     Right - in order to have never-ending progress, there should be some basic forces that cause cyclic or iterative changes and adjustments. If the world were fixed, it wouldn't have evolved. (That is true, but it is sort of obvious.)

    Also, as demonstrated above, the "intelligent", evolved parts actually do "freeze" and keep stable - such as living cells or scientific knowledge - as long as they work better and are "stronger", have a bigger impact, than the more "free" version.

    * See Vladimir Turchin's work on the so-called "Metasystem Transition" and Boris Kazachenko's "Meta-evolution".
    ** Edit: 11-12-2013, 3:49 - In terms of physics, the trade-off "precision vs scope" (see B. Kazachenko) - a big span with low detail vs a small span with high detail - seems related to the trade-off in quantum uncertainty: http://en.wikipedia.org/wiki/Uncertainty_principle And to all kinds of trade-offs...

     Regarding the "CaaS" - Cognition as a Service -  http://gigaom.com/2013/12/07/why-cognition-as-a-service-is-the-next-operating-system-battlefield/

    Yeah... that's one of my research goals, too, I published these intents in early 2008... http://artificial-mind.blogspot.com/2008/04/research-directions-and-goals-feb-2008.html

     I'm working actively on my intelligent low-level infrastructure and the research accelerator, along with hundreds of theoretical and practical ideas that I collect and stack for implementation, but there's a lot of "pre-intelligent" hard work. It is partially starting to speed up some daily activities. It's a hard life doing Herculean feats on your own, but a warrior is a warrior even if he's alone; it's a time for energetic work. Once I have that infrastructure, it will speed up the rest like a rocket. Details - later.

     Entropica and intelligent operating systems - comments (shorter than the English version). In the clip's examples with the balls and the balancing: what increases (or is preserved) is the actuator's ability to affect the target entities in a way it can predict. If the object falls and the agent cannot pick it up, it will then reach a static position and have nothing more to do.

    But if it can pick it up, there is no such limitation: when the ball falls and there is nothing else to do, it will simply lift it and toss it up to balance again, for example. Increasing the possibilities makes sense only under given constraints and with purposeful influence; otherwise, the most "possibilities" belong to particles drifting freely in space. But they have no real "possibility" - they lack a nonlinear internal goal module and are in fact constrained. http://gigaom.com/2013/12/07/why-cognition-as-a-service-is-the-next-operating-system-battlefield/ As for the first link - indeed, see "Intelligent Operating System" below: http://artificial-mind.blogspot.com/2008/04/research-directions-and-goals-feb-2008.html

    The Entropica article appeared earlier this year.

    Now that I think about it, it is not that far from my generalizations along the lines of:

    The mind strives to predict (its) future with ever higher resolution of perception and control, over an ever larger span - in time and space.

    "Control" (government) in my terms is precisely the ability to predict what will happen, passively or actively (to cause it), and the increasing resolution constitutes those "more possibilities".

    I have also long claimed that universal intelligence is super elementary: people are stupid and limited (multi-domain blind); all fields of knowledge and the arts are super elementary.

    Increasing the possibilities for action, however, is only part of the picture; it sounds to me like an overgeneralization. One could speak of increasing the possibilities for [influence] of a particular class, not of all actions. And the decrease of possibilities is actually linked to the danger of reaching a static equilibrium, in which the system "stalls". This is reminiscent of the dialectical materialists and their "eternal motion" (dynamics). Naturally, if there is no "motion" (dynamics), things will not change; the happening of events is "change", therefore if there is no "motion" (dynamics, possibilities), there will be no "change" and no progress.



    // Thanks to S. for the links.