
The AI Gaming Revolution

August 27, 2019


The machines have won. Computers can now defeat humans at pretty
much any game we’ve ever invented. And all because of some clever tricks we’ve
come up with for programming artificial intelligence, or AI. The simplest definition of AI is a computer
program designed to solve a problem. Most programs, including probably all of the
ones letting you watch this video right now, don’t solve problems. Instead, they execute instructions that they were given by human programmers. They don’t try to come up with their own
solutions for what they should do to accomplish a task. AIs do try to come up with their own solutions. The smarter an AI is, the more complicated
the problem it can solve. Since the dawn of computer programming, we’ve
been teaching AIs how to play games. Things like checkers, and chess, and recently,
the Chinese board game Go. We do that because games are a great way to
measure how smart an AI actually is. Playing, and winning, a game requires problem
solving. And the ability to solve problems is one of
the benchmarks of intelligence. It helps that the problems are very clearly
defined, for both the human audience and the computer program. There are no ambiguous results: either the
AI can play checkers, or it can’t. This makes games the perfect lab environment
for creating new kinds of AI, which is why the history of AI is often the history of
AIs playing games. The first game an AI ever played and won against a human opponent was checkers, via a program written in the 1950s by American computer scientist Arthur Samuel for the IBM 704 computer. This was a machine that you had to program by feeding magnetic tape into a big drum. Checkers is a simple game. But the IBM 704 was also a pretty simple machine. It couldn’t run the outcome of every possible
move it could make by trial and error in order to find the best one. At least, not in a reasonable amount of time. If it could, that would be solving the problem of winning a game of checkers with brute force. The brute force approach involves crunching a lot of numbers: the computer plays out every possible game that could take place after every possible move, then picks the move with the best guaranteed outcome. That’s not very creative, but it’s definitely a valid way to solve the problem. And we’ll come back to it in a few minutes.
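Here’s what that looks like as a minimal Python sketch. The game-specific hooks (legal_moves(), apply(), and winner()) are hypothetical placeholders, not anything from a real checkers engine; players are +1 and -1, and winner() is assumed to return +1, -1, 0 for a draw, or None while the game is still going:

    # Brute-force game search: play out every possible continuation,
    # then pick the move whose eventual outcome is best for the mover,
    # assuming both sides play as well as possible.
    def best_move(state, player):
        def value(state, to_move):
            w = winner(state)
            if w is not None:
                return w * player                 # score from `player`'s side
            outcomes = [value(apply(state, m), -to_move)
                        for m in legal_moves(state, to_move)]
            # Each side picks whatever is best for itself.
            return max(outcomes) if to_move == player else min(outcomes)
        return max(legal_moves(state, player),
                   key=lambda m: value(apply(state, m), -player))

Even for checkers, the tree of every possible continuation is astronomically large, which is exactly the problem.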
The problem is, the brute force approach uses a lot of computing resources to run all those numbers. Those resources just weren’t available back in the 1950s. So the first game-playing AI was made possible thanks to something called heuristics. And every AI since then has used them. A heuristic is basically a rule of thumb. It may not always be exactly right … but it’s almost always mostly right. In computer science, a heuristic is an algorithm that limits brute force searching by selecting solutions that may not be the best … but are good enough. So a checkers algorithm might say: okay, you found a move that lets you capture an opponent’s piece. You can stop now! Just go with that move.
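As code, that rule of thumb could be as simple as this toy sketch, where legal_moves() and is_capture() are made-up helpers standing in for real checkers logic:

    # Heuristic move choice: don't search every line of play.
    # If any move captures an opponent's piece, take the first one
    # found and stop looking. Not always optimal, but good enough.
    def choose_move(board, player):
        moves = legal_moves(board, player)        # hypothetical helper
        for move in moves:
            if is_capture(move):                  # rule of thumb: captures are good
                return move                       # stop searching right here
        return moves[0]                           # otherwise, any legal move will do

Compare that to the brute force sketch above: no recursion, no game tree, almost no work.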
Simple heuristic programming like that was enough to conquer checkers. The next game an AI faced was poker. In the 1970s, computer scientist Donald Waterman wrote a program that could play draw poker — the kind where you’re dealt five cards and can replace some of them, typically up to three. He did it by developing something called a production system, another thing that you’ll now find in AIs everywhere. Production systems use pre-programmed rules to categorize symbols, like those on a card. Waterman’s system sorted cards as more or less valuable depending on what other cards it already had in its hand. Like, on its own, a four of clubs isn’t much to write home about, but it has a lot more value if you already have the four of diamonds and the four of spades. The system could then calculate how good its hand was, and whether it should stay in or fold, by comparing the value of its hand to its preprogrammed measures of what a ‘good’ or ‘bad’ hand was.
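A production system like that fits in a few lines of Python. The rules and the measure of a ‘good’ hand below are purely illustrative; they are not Waterman’s actual rules:

    from collections import Counter

    # A tiny production system: pre-programmed rules that categorize
    # the symbols in a hand, checked from most to least valuable.
    # A hand is a list of card ranks, e.g. [4, 4, 4, 9, 12].
    RULES = [
        ("three of a kind", lambda counts: 3 in counts.values()),
        ("two pair",        lambda counts: list(counts.values()).count(2) == 2),
        ("one pair",        lambda counts: 2 in counts.values()),
        ("high card",       lambda counts: True),     # catch-all, fires last
    ]

    def categorize(hand):
        counts = Counter(hand)
        for name, rule in RULES:
            if rule(counts):                          # first matching rule fires
                return name

    def should_stay(hand):
        # Compare the hand to a preprogrammed measure of "good enough."
        return categorize(hand) in ("three of a kind", "two pair")

A lone four of clubs only rates ‘high card’ here, but add the four of diamonds and the four of spades and the same card is suddenly part of ‘three of a kind’: value that comes from context, encoded as rules.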
Heuristics and production systems: algorithms that apply rules of thumb, and programs that apply complex, comparative systems of rules. Put those together, and creating AIs that could play basic board games became a walk in the park. Chess is not a basic board game, though. It’s a grown-up board game, and winning
it was going to take some grown-up technology. Chess programs themselves date back to the 1950s, but the dedicated chess machines that would eventually challenge the world’s best were built in the 1980s, at Carnegie Mellon University. The most successful of these early machines was Deep Thought, which could calculate the outcomes of 700,000 positions per second. And Deep Thought actually managed to defeat a chess grandmaster, in 1988. But there’s a big difference between just some grandmaster and the greatest chess player in the world. That man, through the late 80s and the 90s, was Garry Kasparov. Deep Thought was not on Kasparov’s level. Beating Kasparov meant getting stronger, and faster. Like, a lot. Upgrading Deep Thought involved a few improvements. Number one: more memory and more multiprocessors. That’s raw computing power. Deep Blue, Deep Thought’s successor, was
simply a more powerful machine. Number two: better software. When you’re dealing with millions of search results, all being compared to each other, slowdown is a big problem. So Deep Blue’s software was streamlined for parallel processing. It was also taught to factor in some of the more nuanced measures of a favorable chess position. Better heuristics, in other words.
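To give a flavor of what those ‘more nuanced measures’ might look like, here is a toy position evaluator: material count plus a small mobility bonus. This is not Deep Blue’s actual evaluation function, which weighed thousands of hand-tuned features; the board format here is just an illustrative list of piece letters, uppercase for White and lowercase for Black:

    # Toy chess heuristic: material plus mobility. Positive favors White.
    PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

    def evaluate(pieces, white_moves, black_moves):
        material = sum(PIECE_VALUES[p.upper()] * (+1 if p.isupper() else -1)
                       for p in pieces if p.upper() in PIECE_VALUES)
        mobility = 0.1 * (len(white_moves) - len(black_moves))  # room to maneuver
        return material + mobility

A search program uses a score like this to rank the millions of positions it can’t afford to play out to the end of the game.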
The search speed for the first version of Deep Blue was about 50 to 100 million chess positions per second. And when it was matched up against Garry Kasparov in 1996 … it lost. Pretty decisively, scoring two points to Kasparov’s four. Calculating the outcomes of 100 million chess positions per second was not enough to beat the human world champion of chess. So team Deep Blue more than doubled the number of chips in the system, and improved the software so that each chip was about 25% more effective. The version of Deep Blue that played a rematch against Kasparov in 1997 could calculate over 300 million chess positions per second. And then it won. Deep Blue was an incredible feat of computer
programming. When it defeated Garry Kasparov, it was among the most complex AIs in the world. But it mostly won through brute force. It crunched the numbers for every possible move it or its opponent could make, then picked the one its evaluation rated best. If it didn’t win, its programmers upgraded
it so it could crunch even more numbers. That approach was not going to work with the
game of Go. We talked about Go here on SciShow when Google’s
AlphaGo program defeated world Go champion Lee Sedol, in March of 2016. But let’s go over the factors that made creating
a Go program such a daunting task. If you grew up in the West, you might not be familiar with Go. It’s a Chinese game that’s existed largely unchanged for thousands of years. It’s sometimes described as ‘Eastern chess,’ but it’s much more complicated than chess, especially for a computer. First of all, a Go board is bigger than a chess board. Go is played on a 19×19 grid, while a chess board is 8×8. And even that undersells the complexity of Go, because you don’t play the stones — ‘stones’ are what the pieces are called in Go — inside the grid squares. You play them on the intersections of the lines, including the edges and corners, so the board has 361 playable points, compared with the 64 squares of a chess board. Bottom line: there are more possible board configurations in a game of Go (roughly 10^170 legal positions) than there are atoms in the observable universe (roughly 10^80).
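You can sanity-check that claim in a couple of lines of Python. Every one of the 361 points can be empty, black, or white, which gives a simple upper bound; the exact count of legal positions, about 2.1 * 10^170, was later computed by Tromp and Farnebäck:

    # Upper bound on Go board configurations: 3 states per point.
    points = 19 * 19                  # 361 playable intersections
    configurations = 3 ** points      # about 1.7 * 10**172
    atoms = 10 ** 80                  # rough estimate for the observable universe
    print(configurations > atoms)     # True, by over 90 orders of magnitude
    print(len(str(configurations)))   # 173 digits

Most of those raw configurations are illegal (stones with no liberties can’t stay on the board), but even the legal count dwarfs the number of atoms.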
Secondly, no Go stone is inherently more valuable than any other stone on the board. This is different from chess, where a queen, for example, is much more valuable than a pawn. That kind of relationship is something you can program an AI to understand. You can feed it into a production system. But a stone in Go gets its value from its position on the board relative to the positions of all the other stones on the board. The objective in Go is to use your stones to surround more territory than your opponent, so the value of any one move is often subjective. Even high-level Go players sometimes have a hard time explaining how they know a good move from a bad one. And you know what computers are really bad at? Being subjective. Also, calculating positions that run into the trillions of trillions.
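One part of Go is easy to compute, and it shows exactly why the rest is hard. A stone group’s immediate fate depends only on its neighbors: this sketch (assuming a board stored as a 19×19 list of lists holding 'B', 'W', or '.') flood-fills a group and counts its liberties, the empty points next to it:

    # Find the group of connected same-colored stones at (row, col)
    # and count its liberties. A group with zero liberties is captured.
    def liberties(board, row, col):
        color = board[row][col]                   # 'B' or 'W'
        group, libs, stack = set(), set(), [(row, col)]
        while stack:
            r, c = stack.pop()
            if (r, c) in group:
                continue
            group.add((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < 19 and 0 <= nc < 19:
                    if board[nr][nc] == '.':
                        libs.add((nr, nc))        # empty neighbor = liberty
                    elif board[nr][nc] == color:
                        stack.append((nr, nc))    # same color: part of the group
        return len(libs)

Counting liberties is trivial. Deciding whether a group is worth saving is the subjective part, and no rule this simple can answer it.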
The Deep Blue brute force approach was not going to get anywhere with Go. So AlphaGo is not a brute force program. It uses deep neural networks: the same kind of technology that’s used by facial recognition software. Instead of calculating stone positions piece by piece, it looks for patterns on the board. Just like facial recognition programs will search an image for things that might be eyes, noses, and mouths, AlphaGo looks for patterns of stones that might offer strong or weak tactical opportunities. But how does it know what makes something a strong or weak opportunity? I mean, we said the value of any specific position is subjective, right?
Here’s where you need to know how deep neural networks work. A deep neural network is made up of layers of simple processing units, called ‘neurons,’ all stacked up on top of each other and running in parallel. This allows the network to analyse the same problem from multiple different angles at the same time. Loosely speaking, you can picture each layer judging the same board by different criteria. One layer might look at the picture of the Go board and pick out all of the legal moves. The next might examine the board for areas that aren’t under anyone’s control yet. The layer beneath that could be keeping track of how long it’s been since either player has made a move in any particular region of the board, which tells the program which areas are currently being fought over, and which ones might be safe to ignore for a while. The layer beneath /that/ might be comparing the patterns of white and black stones to patterns it has seen before. And so on, and so on. That picture is a simplification: in reality, AlphaGo’s networks take in 48 different features of the board position, and each layer learns its own, more abstract criteria from millions of example positions. Each layer passes what it finds up the stack to the next, so if an early layer turns up something promising, the layers above it focus their judgment there. When the stack as a whole rates a move highly, AlphaGo plays a stone. By using a deep neural network in this way, the program can actually mimic human intuition and creativity.
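For the curious, here is the ‘stack of layers’ idea at its absolute simplest: a toy fully-connected network in Python with NumPy. The real AlphaGo used deep convolutional networks trained on millions of positions (plus a tree search on top); the sizes and random weights here are purely illustrative:

    import numpy as np

    POINTS = 19 * 19                              # one input per intersection

    # Each layer is a weight matrix plus a nonlinearity; stacking them
    # lets later layers judge the board by more abstract criteria.
    rng = np.random.default_rng(0)
    sizes = [POINTS, 128, 128, POINTS]
    weights = [rng.standard_normal((m, n)) * 0.1
               for m, n in zip(sizes, sizes[1:])]

    def move_probabilities(board):
        """board: 361 values (+1 black, -1 white, 0 empty).
        Returns a probability of playing at each of the 361 points."""
        x = board
        for w in weights[:-1]:
            x = np.maximum(0.0, x @ w)            # ReLU: pass findings upward
        logits = x @ weights[-1]
        exp = np.exp(logits - logits.max())       # softmax over the board
        return exp / exp.sum()

    probs = move_probabilities(np.zeros(POINTS))
    print(probs.argmax(), round(probs.max(), 5))  # untrained, so ~uniform

With random weights the network’s opinions are worthless; training against millions of games is what turns this same structure into something that behaves like intuition.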
AlphaGo defeated Lee Sedol 4 games to 1. And Sedol is the Garry Kasparov of the Go world. AlphaGo is only going to get smarter. So when it comes to classic board games, there aren’t really any challenges left for game-playing AIs. Go was it: arguably the most complex board game people have ever devised. Though I kind of would like to see it play Arkham Horror. But we made AlphaGo. We made Deep Blue. These programs are manifestations of our intelligence and curiosity. And if we can create an AI that can beat Lee Sedol at one of the most complicated and intuitive games ever played, then who knows what else we can do. Thanks for watching this episode of SciShow,
which was brought to you by our patrons on Patreon. If you want to help support this show, you
can go to patreon.com/scishow. And if you just want to keep getting smarter
with us, you can go to youtube.com/scishow and subscribe!


  1. AI winning 1vs1 in Dota 2 vs the best …. is also extremely impressive. Then we would surely see top AI in other games. Of course we need to fear a computer AI that is programmed to think by itself and has the power to adapt to anything…. but for now… let's hope AI development will give us better games!

  2. It's funny how computers are becoming good at things most of us can't do, like beating world chess grandmasters, yet are so clumsy at things pretty much all of us can do, like running and walking.

  3. Brute force does not use probability.
    Tho I am somewhat impressed: unlike in most videos, the narrator gets the definition of heuristic correct. Much of what he says in the video, however, is incorrect.

  5. You state: "The first chess machines were built in the 1980s, at Carnegie Mellon University."

    Unless you're using some deeply idiosyncratic meaning for "chess machine", this is simply untrue. The first real chess program on a computer was running by 1957 and by the '70s consumer chess machines were available at Sears (and elsewhere, of course). Computer chess was moderately mature by the 1980s, not in its infancy.

  6. My mate Dave never lost to a computer in chess, so I think the software still needs a lot of work.

  7. AI in games has been one of the least developed parts for ages. Modern gamers whine like babies, so they make gameplay playable by babies.

  8. Starcraft is far more complicated than chess or Go.

    First of all, it’s mostly a game of logistics and production trade-offs. Rather than conquering the board, the objective is twofold: produce a lot of good stuff and secure sources of revenue, and mess up the opponent’s production capability.

    Second, the fundamental decision in Starcraft (”should I attack?”) requires you to forecast an almost chaotic system. The number of units your opponent will have as a function of time changes as a function of the number of units you have, and vice versa. This results in a dynamic system with feedback, which means that forecasting a battle is comparable to forecasting the weather or stock prices: not only do you have to mind the strength, numbers and positions of the units, you also have to factor in some randomness (weapon cooldown, for example). The only good thing is that computers don’t produce irrational numbers.

    And third, the number of possible board configurations of Go looks impressive but is pretty cute when compared to Starcraft. Starcraft is played on a 255*255 grid and you make multiple moves at the same time (not only during micromanagement but also during production, especially with multiple bases). You can’t use the same techniques as for Go and chess to play Starcraft; DeepMind has worked on some minigames but is not even close to actually beating a professional human.

    I’m optimistic and think Facebook (who recently hired two of the best Starcraft AI programmers and researchers in the world) will give DeepMind a run for its money, and that one of them will beat professional humans within five years. I also think genetic algorithms will play a huge role.

  9. 'Go' sounds more like a splice of Reversi/Othello and checkers, not chess; but I'm a bit ignorant since I've never heard of 'Go'.

  10. Actually A.I. cannot beat us in any game. Let me give you an example: I bet I could beat the AlphaGo setup in a game of chess or backgammon.
    These are incredible but highly specialized machine setups.

  11. Maybe it is possible, already now or in the future, to let AI invent a much more difficult game than Go, and then let two different AIs play against one another. Or against humans.

  12. Hello! I'm very interested in where you got the description of the layers that AlphaGo uses to analyze the game from multiple angles. Can you give me a clue? I already checked your listed sources.

  13. There is a game more complicated than Go. It is called “World in Flames.” Check it out; no chance a computer can beat a human!

  14. I think it should be noted that while AI can win at Go, it can't win at every game, mostly because some games are really hard to observe and think about, like GTA 5.

  15. "There are more combinations for a GO board than there are Atoms in the universe" That sentence made me lose respect in your channel's fact checking skills.

  16. I'll be impressed when an AI can beat a story driven open world adventure game with no predefined info provided to it by the programmer. Only the screen input to guide it.

    That would prove to be an interesting challenge because the goals are generally extrapolated from text or audio. What defines "better" is very hard to extrapolate in those games without having a lot of prior knowledge.

  17. If our creation can outsmart us, then what happens? How about when they're finally smart enough to be conscious about what's really happening?

  18. AI is only good at games with complete information, like chess. It fails at anything that requires intuition and improvisation… it's not good at languages, music or art creation…

  19. Wait. Waitwaitwait. The chess-playing machine was made in the '80s…and called DEEP THOUGHT?! HAHA I love it. Proves that even the most professional scientists are still pop-culture-meming nerds!
    (also 42, by the way.)

  20. I was sure Hank was going to say that AlphaGo was something we made and look at it, imagine what an AI would make to accomplish something.

  21. I watched a documentary on the Go AI and ended up questioning my existence after Lee lost pretty badly. AI still won't be able to win in Starcraft though.

  22. It would be incredible if you could update this with Elon Musk's AI use in DOTA 1v1 and now 5v5! Thank you!

  23. There's still plenty of room to improve AI. Let's see an AI that can learn to be as good as Lee Sedol at Go after the AI only simulates or analyzes the same number of games that Lee Sedol has had the chance to play and analyze in his lifetime.

  24. The explanation of a deep neural network here is very bad. It's not that each level checks for a specific feature. Each layer checks for something abstract that can't really be understood by humans. With AlphaZero for instance, there is zero human intervention to tell each layer what to check for. During training, the decision of what to check for is randomly made, then mitigated or reinforced through whether that game was won or not. It's hard to explain, but the idea that each layer is some neat and easy to explain feature of the board is dead wrong

  25. How come there is no mention of Gerald Tesauro's TD-Gammon Backgammon AI player based on Neural-Net temporal difference learning? I doubt that SciShow is not aware of the importance of TD-Gammon in the revolution of AI gaming and am puzzled by this omission.

  26. The only real problem with Deep Thought turned out to be that its games generally last seven and a half million years, and if you want a rematch, you have to build an even bigger computer.

  27. RTS will be an easy playground for AI, but the really good test for them will be turn-based strategy games, e.g. the Civilization games. TBS is like chess but much, much more complex (I call it the modern-day chess, hehe). For Civ 6, for example, an AI will need a huge amount of additional resources to beat a regular human, and so far there is no chance for any AI to beat an expert human with the same resources. The game is too complex (like life), so an AI has almost 0% chance with the same resources, even if it calculates a centillion moves per second. Agree???

  28. Create an AI that creates an AI that learns and beats Go and Chess.

    All that these AIs did was very narrow. They might beat chess and Go, but the same algorithm couldn't invent a game like Go, build a boat, solve the financial problems of a company, or build an instrument, etc., at least not without human interaction.

    That's why these are all not true AIs in human terms of intelligence. Humans can figure other things out, educate themselves, develop training programs, be creative in any given field, and so on.

  29. Update: DeepMind's AlphaStar defeated top StarCraft 2 players 5-0. It only lost one game, in an exhibition match where the AI had to control the camera.

  30. The alien in Alien: Isolation is actually a handicapped AI that for real is trying to find and kill you. It gets clues and has rules so that it often has an idea of which area you're at but it doesn't have enough clues to figure out where you are and so it searches around.

  31. Could you guys cover Jeremy England's work. I don't know the symbols he uses so I don't understand the math. I think he really has earned a place on this show

  32. HTML, a.k.a. webpages, uses a language that isn't strict, meaning that the program reading this "mumbo jumbo" has to be smart about handling things. It can't learn, so it's not a true A.I.; it's just making assumptions as a fixed "state machine": it will make predictions and render the webpage for you, even if it wasn't necessarily written correctly. Think of this like how people sound when they speak a different language; it's probably not perfect, but usually they can get the message through.
