New AI Generates Videogame Graphics In Realtime


Researchers from Google and Tel Aviv University have announced GameNGen, a new AI model that uses techniques from Stable Diffusion to render the 1993 classic DOOM in real time.

Many video games are rendered using traditional animation techniques: still frames of the scene are drawn and displayed at 60 frames per second (fps), which gives the appearance of movement, much like the way cartoons are made. Game engines give video game developers the ability to animate those frames by writing code that moves players, fires projectiles, and more.
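To see what that means in practice, here's a minimal sketch of a traditional game loop in Python (all the names are illustrative, not from any real engine): read the player's input, update the world, draw a frame, and repeat about 60 times a second.

```python
# A minimal, illustrative game loop: update state from input, then render,
# holding roughly 60 frames per second. Names are hypothetical.
import time

TARGET_FPS = 60
FRAME_TIME = 1.0 / TARGET_FPS

def game_loop(state, read_input, update, render):
    while not state.get("quit"):
        start = time.monotonic()
        action = read_input()      # e.g. "move forward", "fire"
        update(state, action)      # move players, projectiles, enemies
        render(state)              # draw the current frame to the screen
        # Sleep off any leftover time so the loop holds ~60 fps.
        elapsed = time.monotonic() - start
        if elapsed < FRAME_TIME:
            time.sleep(FRAME_TIME - elapsed)
```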

GameNGen is a neural network that functions like a game engine, similar to the Half-Life or Quake engines, except it doesn't use traditional rendering techniques: the rooms, players, and other obstacles are generated by the model in real time.
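Here's a rough sketch of that idea (an illustration only, not GameNGen's actual code, with `model.predict` as a hypothetical stand-in for the diffusion model): instead of drawing geometry, the loop asks the model to imagine the next frame from the recent frames and the player's actions.

```python
# Illustrative neural-rendering loop: a generative model predicts each new
# frame from recent frames and actions. `model.predict` is a hypothetical
# stand-in for a diffusion model; CONTEXT_LEN is an assumed context size.
from collections import deque

CONTEXT_LEN = 32

def play(model, read_input, display, first_frame):
    frames = deque([first_frame], maxlen=CONTEXT_LEN)
    actions = deque(maxlen=CONTEXT_LEN)
    while True:
        actions.append(read_input())                # player input this tick
        frame = model.predict(list(frames), list(actions))
        display(frame)                              # show the generated pixels
        frames.append(frame)                        # feed it back as context
```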

Is It Good?

According to GameNGen's creators, human raters shown short clips picked wrong about 40% of the time when asked to tell DOOM rendered by GameNGen from the real thing. Still not perfect, but not bad either.

This new technique is called neural rendering. Nvidia CEO Jensen Huang predicted this technology was 5 to 10 years away, and here we are seeing it today.

How Did They Do It?

Google and Tel Aviv University researchers trained GameNGen by letting an AI agent play the game itself and using the recorded gameplay as training data.

The research team trained the agent to play at all difficulty levels, using reinforcement learning in which the player was rewarded for actions like picking up power-ups and killing enemies. This could open up a new world of possibilities, from rendering complete scenes to entirely new types of games. A new kind of choose-your-own-adventure, rendered in real time, comes to mind.
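As a toy illustration of that kind of reward signal (the state fields and weights here are made up, not taken from the paper):

```python
# Toy reward function in the spirit of the training described above.
# Field names and weights are hypothetical, not from the GameNGen paper.
def reward(prev_state, state):
    r = 0.0
    r += 10.0 * (state["kills"] - prev_state["kills"])  # reward killing enemies
    r += 5.0 * (state["items"] - prev_state["items"])   # reward power-up pickups
    return r
```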

The GameNGen paper is available on GitHub. Check it out.
