Artificial Intelligence Uses Less Than Two Minutes…

@prostheticknowl: Research from @gvucenter uses Machine Learning to clone game design from gameplay footage

This story is already doing the rounds but is still very interesting – Machine Learning research from Georgia Tech manages to clone game design from a video recording.

The top GIF shows the reconstructed clone; the bottom GIF is the original video recording.

Georgia Institute of Technology researchers have developed a new approach using an artificial intelligence to learn a complete game engine, the basic software of a game that governs everything from character movement to rendering graphics.
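To make the quoted claim concrete: a "game engine" in this sense is, at its core, a loop that updates the game state and renders it once per frame. The toy Python sketch below is purely illustrative and is not the researchers' code; every name and number in it is invented.

    # Toy illustration of what a minimal 2D "game engine" boils down to:
    # a per-frame loop that updates the game state and then renders it.
    # All names and values here are invented for illustration.

    def update(state):
        """Advance the world by one frame: apply velocity and simple gravity."""
        x, y, vx, vy = state
        vy = vy - 1 if y > 0 else 0          # gravity only while airborne
        return (x + vx, max(y + vy, 0), vx, vy)

    def render(state):
        """Stand-in for the graphics side: just report the character position."""
        x, y, _, _ = state
        print(f"character at ({x}, {y})")

    state = (0, 5, 1, 0)                     # x, y, horizontal speed, vertical speed
    for _ in range(10):                      # ten frames of the game loop
        state = update(state)
        render(state)

Learning an engine from video means recovering something like update() without ever seeing its code, only the frames it produces.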

Their AI system watches less than two minutes of gameplay video and then builds its own model of how the game operates by studying the frames and making predictions of future events, such as what path a character will choose or how enemies might react.
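A heavily simplified sketch of that idea, not the authors' actual system: treat the engine as a set of rules mapping one frame's sprite positions to the next, and keep whichever rules reduce the error between predicted and observed frames. The Sprite class, the example rule, and the greedy search below are invented stand-ins for illustration.

    # Minimal sketch (not the researchers' code) of learning a forward model
    # from video: score candidate rule sets by how well they predict the next
    # frame, and greedily keep the rules that reduce prediction error.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Sprite:
        name: str    # e.g. "player", "enemy", "block" -- hypothetical labels
        x: int
        y: int

    def gravity_rule(frame):
        """Toy rule: sprites with nothing beneath them fall one unit per frame."""
        occupied = {(s.x, s.y) for s in frame}
        return [
            s if (s.x, s.y - 1) in occupied or s.y == 0
            else Sprite(s.name, s.x, s.y - 1)
            for s in frame
        ]

    def prediction_error(rules, frames):
        """Total mismatch between predicted and observed next frames."""
        error = 0
        for current, actual_next in zip(frames, frames[1:]):
            predicted = current
            for rule in rules:
                predicted = rule(predicted)
            error += len(set(predicted) ^ set(actual_next))
        return error

    def learn_engine(candidate_rules, frames):
        """Greedy search: repeatedly add the rule that most reduces error."""
        engine, best = [], prediction_error([], frames)
        improved = True
        while improved:
            improved = False
            for rule in candidate_rules:
                if rule in engine:
                    continue
                score = prediction_error(engine + [rule], frames)
                if score < best:
                    engine, best, improved = engine + [rule], score, True
        return engine

Rather than hand-written functions like these, the real system derives its candidate rules from information it extracts from the video frames themselves.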

To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single “speedrunner” video, where a player heads straight for the goal. This made “the training problem for the AI as difficult as possible.”

Their current work uses Super Mario Bros., and they've started replicating the experiments with Mega Man and Sonic the Hedgehog as well. The same team previously used AI and Mario Bros. gameplay video to create unique game level designs.
