The Orion's Arm Universe Project Forums





Google AI Crushes Starcraft
#1
All hail our future AI overlords! (Figure I might as well start practicing my groveling early. :P )

LINK

What I think is most interesting about this is the speed with which the AI was able to learn (centuries of practice in weeks), and the approach of starting out with multiple AIs and picking out the best at the end.

Both of these are things humans have no ability to do at all, and they point toward how different AI might turn out to be, both on general principles and in how we might come to think about making use of it, even at a (in OA terms) sub-turing level.
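As a rough back-of-envelope illustration of that "centuries in weeks" claim (the figures here are the approximate ones DeepMind reported for AlphaStar: around 200 in-game years of experience per agent, accumulated over roughly two weeks of wall-clock training), the effective speedup over human practice works out to:

```python
# Back-of-envelope: effective speedup from massively parallel self-play.
# Figures are the roughly reported AlphaStar numbers, not exact values.
in_game_years = 200      # experience accumulated per agent
training_days = 14       # wall-clock training time

in_game_days = in_game_years * 365
speedup = in_game_days / training_days
print(f"Effective speedup: ~{speedup:,.0f}x real time")
```

So each agent was effectively practicing around five thousand times faster than any human could, which is the point: the advantage isn't just skill, it's the rate of accumulating experience.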

Todd
Reply
#2
It's interesting how many people are now arguing about the "unfairness" of the match because AlphaStar could instantly focus its attention on any part of the map it had visited before. In the last game they put a restriction on it that forced it to use the camera view the same way a human would, and it lost. Many people now seem to think this loss happened because of that restriction. However, I think it lost because the agent they used for this game simply wasn't trained enough and so succumbed to a "trick play" from its human opponent, not because of the camera restriction. At least that's the feeling I got when I watched analyses by various commentators on YouTube.

Makes me wonder what Vernor Vinge would have to say about all this. He's the one who said in 1993 that "30 years from now" AI would outperform humans in all intellectual areas. But maybe humans don't even need an AGI in order to create a "good world"? Perhaps sub-turing AIs based on AlphaFold or AlphaStar will be enough to find cures for all diseases, including aging. It's an open question.
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
Reply
#3
(01-27-2019, 07:31 AM)chris0033547 Wrote: It's interesting how many people are now arguing about the "unfairness" of the match because AlphaStar could instantly focus its attention on any part of the map it had visited before. In the last game they put a restriction on it that forced it to use the camera view the same way a human would, and it lost. Many people now seem to think this loss happened because of that restriction. However, I think it lost because the agent they used for this game simply wasn't trained enough and so succumbed to a "trick play" from its human opponent, not because of the camera restriction. At least that's the feeling I got when I watched analyses by various commentators on YouTube.

While I can see the advantage of being able to instantly focus anywhere on the map (or see everywhere at once, or whatever it was doing), I'm not sure 'unfair' really applies when we consider the larger picture. In the real world, machines are often equipped with abilities humans lack, for reasons ranging from strength of materials to speed of processing to shape. An AI designed to (for example) play against gamers for their enjoyment, or to train them, might logically be limited to human levels of performance. But an AI designed to do the best job possible (rather than the best a human could do) would be built to take advantage of everything it could possibly do, within whatever limits are imposed by cost, practicality, or what has been thought of.

That said, I agree that the lack of training for the AI was probably a factor, and something that a few more weeks of training could almost certainly resolve. That those few weeks would amount to centuries of additional experience is still just as mind-blowing.

(01-27-2019, 07:31 AM)chris0033547 Wrote: Makes me wonder what Vernor Vinge would have to say about all this. He's the one who said in 1993 that "30 years from now" AI would outperform humans in all intellectual areas. But maybe humans don't even need an AGI in order to create a "good world"? Perhaps sub-turing AIs based on AlphaFold or AlphaStar will be enough to find cures for all diseases, including aging. It's an open question.

I suspect Vinge would say that this is a case in point. While some tasks are certainly harder than others, as time goes by we seem to be finding that more and more tasks we used to consider solely the purview of humans can be done by AI. For many areas, I suspect we are getting close to the point where the limits on a given AI's abilities will have more to do with the amount of time and effort humans have put into teaching it than with any inherent limitation in the AI itself.

As far as the need (or not) for AGI goes, I think this is an excellent point. Most considerations of this sort of thing (SF or otherwise) seem to focus either on 'mundane' uses of AI like self-driving cars or 'robot servers' (usually without much thought for just how profound an impact such tech would actually have), or on superhuman instances (up to and including the transapients of OA). Very few really consider the vast middle ground of tasks that might be done with specialized AI, or with AI of less than human intelligence. Put another way: if our best AIs are no smarter than a bug (and maybe less so), then what happens when we can make AIs as smart as a clever breed of dog? Or a chimpanzee? Not for doing the things dogs and chimps do, but for doing things we want done that require a degree of intelligence, yet not full human sophonce.

While OA makes some mention of this (vots, for example), it's not an area we've really explored in depth, particularly in terms of the early timeline and more specialized sub-turing minds. Lots and lots of potential 'crunchy goodness' to explore here if anyone is so inclined, I suspect.

Todd
Reply
#4
I highly recommend this video on the subject (and the channel in general) if you're into AI:

[embedded video]

(01-28-2019, 01:01 PM)Drashner1 Wrote: While I can see the advantage of being able to instantly focus anywhere on the map (or see everywhere at once, or whatever it was doing), I'm not sure 'unfair' really applies when we consider the larger picture. In the real world, machines are often equipped with abilities humans lack, for reasons ranging from strength of materials to speed of processing to shape. An AI designed to (for example) play against gamers for their enjoyment, or to train them, might logically be limited to human levels of performance. But an AI designed to do the best job possible (rather than the best a human could do) would be built to take advantage of everything it could possibly do, within whatever limits are imposed by cost, practicality, or what has been thought of.

That said, I agree that the lack of training for the AI was probably a factor, and something that a few more weeks of training could almost certainly resolve. That those few weeks would amount to centuries of additional experience is still just as mind-blowing.

It's certainly true that machines aren't going to have similar limits, but in this case there's a very, very cool reason the designers want to keep the AI's practical abilities in line with humans'. Training a neural network this way is akin to evolution: random modifications to the neural weightings are made across a number of similar networks, the networks are tested, and the best go on to form the next generation. In this case they don't just want a network that can beat humans, because if that were the only selective factor the AI would make 1,000 clicks per second and win by brute force, which isn't very useful for most other purposes. But by limiting the AI's practical abilities, a strong selective pressure for strategic skill is created. If the AI can't rely on "strength", it has to get smarter.
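The mutate-test-select cycle described above can be sketched as a toy evolutionary search. To be clear, this is only an illustration of that loop, not AlphaStar's actual training setup (which uses reinforcement learning plus a population-based "league"); the fitness function and all names here are invented for the example:

```python
import random

def evaluate(weights, target):
    # Toy fitness: negative squared error between a candidate's
    # weights and some ideal target vector (higher is better).
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve(target, pop_size=20, generations=100, mutation=0.1, seed=0):
    """Mutate a population of weight vectors, test them all,
    and let the best go on to form the next generation."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in target] for _ in range(pop_size)]
    for _ in range(generations):
        # Random modifications to the "neural weightings" of each candidate.
        mutants = [[w + rng.gauss(0, mutation) for w in cand] for cand in population]
        # Test parents and mutants together; the best survive.
        scored = sorted(population + mutants,
                        key=lambda c: evaluate(c, target), reverse=True)
        population = scored[:pop_size]
    return population[0]

best = evolve([0.5, -0.3, 0.8])
```

Note how the selection pressure is entirely set by the fitness function: if `evaluate` only rewarded raw wins, the loop would happily converge on "brute force" solutions, which is exactly why the designers constrain what the agent is physically allowed to do.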

That's basically what has happened in this case: as the video above shows, things like the number of actions per minute the AI is allowed to take have been limited, making the AI smarter with its clicks. The point-of-view issue is something they seem likely to train around. There was an interesting AMA on Reddit with the designers where this came up; one practical advantage the global view gave the AI was that it could instantly hop between three different fronts to micro-manage its forces. That isn't necessarily very smart, it's just very "strong". I'm sure they're already investigating training methods to get the AI to navigate its point of view like a human, forcing it to be smarter in its strategy.
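The actions-per-minute cap can be pictured as a simple sliding-window rate limiter. This is a hypothetical sketch of the general technique, not DeepMind's implementation; the class name and parameters are invented for illustration:

```python
from collections import deque

class APMLimiter:
    """Allow at most `max_actions` actions within any `window` seconds.
    Actions attempted beyond the cap are simply rejected."""

    def __init__(self, max_actions, window=60.0):
        self.max_actions = max_actions
        self.window = window
        self.timestamps = deque()  # times of recently permitted actions

    def try_act(self, now):
        # Forget actions that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False

limiter = APMLimiter(max_actions=3, window=60.0)
results = [limiter.try_act(t) for t in [0, 1, 2, 3, 61]]
# results -> [True, True, True, False, True]
```

Under a cap like this, an agent that wants to win must make each permitted action count, which is the selective pressure toward strategy rather than raw click speed described above.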
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
Reply

