Monday, April 17, 2023


The hardware and resources inequality in AI/AGI: an old story now rediscovered by the worried mainstream — 2013 and 2009 articles vs. a 2023 paper

I have started publishing on Medium as well - I should have done it long ago, as it has a community and "social life", but better late than never. I may republish some of the articles from here there, in order to hopefully extend the reach to the appropriate audience.



Montreal.AI: “Choose Your Weapon: Survival Strategies for Depressed AI Academics”, Julian Togelius, Georgios N. Yannakakis: https://arxiv.org/abs/2304.06035
#ArtificialIntelligence #DeepLearning #MachineLearning

While it is true that even 8 or 10 years ago even regular programmers could have the GPU power (the well-paid ones who owned their time and aimed at the right target; but usually the ones who made money lacked the vision and bought GPUs/hardware for games, while the ones who had the vision and intelligence had no money), the “inequality of opportunities” is of course not a new phenomenon, including in AI. I wrote about it in 2013, and it was valid for the pioneer AGI researchers, one of whom was I: publishing substantial works since 2001, aged 17, and authoring the world’s first university courses in AGI in 2010 and 2011, with theories and a course program that still stand and are only confirmed and elaborated by more and more researchers and publications. The inequality phenomenon was valid for the AGI researchers versus the well-fed, well-funded, high-profile and famous academics who “rolled their eyes” when they heard about AGI (ask Hassabis and Legg; and Altman, even about 2010 and the early 2010s at MIT, though Altman refers to 2015, when they founded OpenAI). It was valid versus any researchers from academia (with students working for them, “free” laboratories etc.), and of course versus the industry.

https://artificial-mind.blogspot.com/2013/08/issues-on-agiri-agi-email-list-and-agi.html

A part of the conclusion of this work:


“… — WORKABLE THEORIES and IMPLEMENTATIONS


Some people try to work on workable theories and implementations, but this list is a home of the poorest and loneliest ones in the AGI community, even though some of them were among the pioneers of the new wave of that community, long before the “institutionalized” researchers took it as “prestigious”.


The list’s researchers’ poverty impedes their opportunities and motivation for concentrated work and for producing academic-style materials — many believe the mainstream academic system (including many aspects of the peer-reviewed journals etc.) has intrinsic corruptions and have left it for “political” reasons.

Moreover, even if they do know how, or have the potential, to develop working machines, it is a big effort that may take a lot of time before they could have a complete system — coded and running. If they haven’t produced visible results already, that doesn’t imply they wouldn’t do so after years of accumulating critical mass, as long as they could keep working.
Besides, they are supposed to be 10, 100 or 1000 times more capable than the normally funded and organized ones from the academic/industrial competition. The current ones can’t afford to visit the appropriate conferences or to travel around research centers, and they are alienated.
They should have much broader knowledge and skills, acquire new knowledge and skills in a shorter time and work much faster, because:
 — they can’t afford truly focused work — too many other troubles, too many sub-problems they should solve alone, a lot of time wasted in attempts to find partners or to develop some “booster-funding” technologies, plenty of frustration due to the isolation and helplessness against all the problems [including the dumb financial etc. ones] they have to solve [implement] alone (or give up)
 — they do not have students, partners or “slaves” to hand the dirty job to [or barely have any, and it’s hard to motivate anyone without funding]

Overall, they should shoot 100 or 1000 targets with one bullet, or they “die out” [in the race].
Welcome to the list of the losers… :))
However, some of these “losers”, due to the extreme requirements they face, may really be 50 or 100 times more productive, more knowledgeable and more non-conventional than the “ordinary” funded and supported competition, and may have guts and balls that the others lack.

Otherwise they would have given up and become part of the existing institutes — “institutionalized” — or of the “AI” establishment. But they are not from those institutes, because when they proclaimed that “AI was wrong” they were already outsiders, heading in new directions.

Furthermore, those brave ones are supposed to believe in, and to find, a way to make the thinking machine possible on cheap, old and slow hardware; otherwise they would have yet another reason to give up to the supercomputer owners and the rich institutionalized researchers…”

The same goes for NLP:

“What’s wrong with NLP? Part 2”, 3/2009

https://artificial-mind.blogspot.com/2009/03/whats-wrong-with-natural-language.html


There is one other option for the academics, who are pretty wealthy themselves but complain that OpenAI, DeepMind etc. are wealthier:


Invent something that’s really innovative, different and more efficient. Everybody prefers to just pour in more hardware, make a little change and engrave their name for “new contributions” (what about the credit for the hardware designers and producers?). It was similar in the 2000s with NLP: change one bit of some algorithm, produce an increase of 0.1% on some measure/benchmark, and there you are: “a new NLP model”, “moving the SOTA”. Why not build a new paradigm from the ground up? But yes, you can’t, because the default is that if you try, you won’t be accepted until you beat the competition, and, as explained above, in order to do that and be accepted you have to be 1000 times more efficient than them, while working on your own with no resources. :)
