Crazy New AI Learned To Rewrite Doom!

❤️ Check out Lambda here and sign up for their GPU Cloud:

πŸ“ The paper "Diffusion Models Are Real-Time Game Engines" is available here:

πŸ“ My paper on simulations that look almost like reality is available for free here:

Or this is the orig. Nature Physics link with clickable citations:

πŸ™ We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here:

My research:
X/Twitter:
Thumbnail design: Felícia Zsolnai-Fehér –

  • @MubashirAR says:

    From now on people will say, “But can it make Doom?” 😅

  • @Reallyidktbh says:

    I’m waiting until AI can make maps for GMod

  • @dasman2060 says:

    Doom has been run on a calculator and on a pregnancy test, and now it is running on a neural network. What’s the next step, Doom in reality?

  • @azumanguy says:

    So does it only simulate the image output, or did it actually rewrite Doom as code that can be run?

  • @ElCastorMalo says:

    3-6 more papers down the line, something like this could be interesting for creating remakes. Instead of creating a 1:1 copy no one needs, such an AI could possibly be instructed to recreate any game with better textures, SFX, modern UX, etc.
    Porting games to other platforms could also be an interesting use-case…
    Or flooding the market with crappy asset-flips… 😱

    • @GarethStack says:

      This is already a product – google ‘Nvidia RTX Remix’. It’s currently being used to remake Half-Life 2.

    • @ElCastorMalo says:

      @GarethStack True. Very impressive stuff, but it still needs quite a bit of manual labor and does not work for very old games or platforms like consoles.
      Imagine just giving some requirements on what to improve and which platforms to support, pressing a button, and getting a finished product hours or days later…

    • @gwen9939 says:

      Why remakes? This could foster completely new ideas in game design, because it’s being generated on the fly. Besides, lots of old games are still completely fine to play as is, including DOOM; the sprite work is part of what makes those games what they are. The genuinely good remakes that have come out aren’t good simply because they have better textures and graphics; they’re good because artists and game designers with real talent and knowledge re-imagined those games in new and interesting ways. That’s not something an AI will ever be able to do without human assistance.

      Also, in this paper the neural network was trained on tons of footage of DOOM gameplay, so with the current technique it’s really only able to recreate DOOM as is. It’s another leap to start generating novel ideas, which is what new material for remakes would require.
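
A rough sketch of the kind of training data that implies: (past frames, action, next frame) triples harvested from recorded gameplay, so the model can only ever imitate the game it watched. Shapes, names, and the context length below are illustrative guesses, not the paper's actual pipeline.

```python
# Sketch: turn recorded gameplay into supervised training pairs of
# (past frames, action) -> next frame. Array shapes and the context
# length are illustrative guesses, not the paper's actual setup.
import numpy as np

CONTEXT = 4   # how many past frames the model gets to see

def make_training_pairs(frames, actions):
    """frames: (T, H, W, C) gameplay video; actions: (T,) input per frame."""
    pairs = []
    for t in range(CONTEXT, len(frames)):
        history = frames[t - CONTEXT:t]   # what the player saw
        action = actions[t - 1]           # what the player pressed
        target = frames[t]                # what the game showed next
        pairs.append((history, action, target))
    return pairs

# Tiny fake "episode", just to show the shapes involved.
T, H, W, C = 12, 64, 64, 3
frames = np.random.rand(T, H, W, C).astype(np.float32)
actions = np.random.randint(0, 5, size=T)
pairs = make_training_pairs(frames, actions)
print(len(pairs), pairs[0][0].shape, pairs[0][2].shape)   # 8 (4, 64, 64, 3) (64, 64, 3)
```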

    • @dielo4496 says:

      @gwen9939 thanks, ChatGPT

  • @KeiNovak says:

    We can use it to recreate and remaster games where the source code has been lost or is not cost-effective to modify or rework.

    • @tarumath319 says:

      Imagine an AI recreating a Denuvo game lol

    • @ALoot says:

      How would that work? As I understand it, the AI isn’t generating any logic or code; it’s more of a live video generator that reacts to user input.
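
A minimal toy sketch of that idea: a next-frame generator driven by recent frames and the player's input, with no game logic ever executed. Every name here (TinyDenoiser and friends) is hypothetical and the model is a stub, not the paper's actual system.

```python
# Toy sketch of a "live video generator that reacts to user input":
# the game loop asks a model for the next frame given recent frames and
# the pressed key, and never executes any game logic. TinyDenoiser is a
# stub standing in for a diffusion model; all names here are hypothetical.
import numpy as np

H, W, CONTEXT = 64, 64, 4
ACTIONS = {"idle": 0, "forward": 1, "turn_left": 2, "turn_right": 3, "fire": 4}

class TinyDenoiser:
    """Stand-in for the real model: maps (frame history, action) to a frame."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def next_frame(self, history, action):
        # A real diffusion model would run several denoising steps here;
        # we just perturb the last frame so the loop has something to emit.
        noise = self.rng.normal(0.0, 0.05, size=(H, W))
        return np.clip(history[-1] + 0.01 * action + noise, 0.0, 1.0)

def play(steps=10):
    model = TinyDenoiser()
    history = np.zeros((CONTEXT, H, W))            # start from blank frames
    for t in range(steps):
        action = ACTIONS["forward"]                # would come from the keyboard
        frame = model.next_frame(history, action)  # no game code is executed
        history = np.concatenate([history[1:], frame[None]], axis=0)
        print(f"step {t}: mean brightness {frame.mean():.3f}")

play()
```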

    • @ossian882 says:

      @ALoot If it is good enough (which it is nowhere near right now), then that wouldn’t matter. If it had some footage and pretraining on other, similar games, it could make up something very plausible that would feel like the real thing.

    • @ejun251 says:

      “Stop Killing Games” movement will also help.

    • @gwen9939 says:

      @ejun251 True, but some games have already lost their source code. It’s a big reason why a lot of older console games never saw a port to PC, even though it would literally be free money for the companies who own them. Recreating those games from gameplay footage could eventually restore them, assuming this eventually translates to actual code being generated.

  • @tonyHern865 says:

    Imagine what the future will look like 10 years from now… I can’t.

  • @clivah1499 says:

    Man, it’s going to be fun discussing simulations with an AI that can build its own sandbox simulations…

  • @metakron says:

    Amazingly, this AI has the potential to become a game engine with instant real-time video game generation. Now anyone can create games.

  • @grassgrow030 says:

    This is astonishing. Imagine this technology after maturing for a couple of years… Grand Theft Auto 7s map could be infinitely large: just extend the map using AI once the human-created map ends!

    • @npc4416 says:

      Imagine harder: imagine we could predict the future IRL if this generalizes from games to real life.

    • @VertegrezNox says:

      This thing can’t make more than 3 seconds of gameplay at a time… requires 10,000 server farms to run Doom… and is 1,000 years away from being able to make a game with “unlimited” content (one that doesn’t resemble MissingNo. smoking a crack pipe with Mr. Magoo…)

    • @npc4416 says:

      @VertegrezNox Two more papers down the line.

    • @JorgetePanete says:

      7’s*

    • @twavee says:

      @VertegrezNox This runs on a single TPU at 20 fps.

      I’d say this particular technique is about 20 years away from being practical on its own. If we want something more practical sooner, both the algorithms and the hardware will need tweaking; that is likely less than 10 years away.

  • @pullahuru9168 says:

    Soon you’ll get a video feed of the real world and then realize it’s a neural model that you can move around in with keyboard keys.

  • @bijinregipanicker6916 says:

    Controlling an ultra-realistic world will be so cool.

  • @user-lm4nk1zk9y says:

    Meanwhile, X years later: everyone is living in their own custom world/reality.

    • @MikevomMars says:

      Honestly, this WILL be the case – no question. If VR technology shrinks to the size of contact lenses, everyone will live in a real-time augmented reality. That’s not a dream, it’s a certainty.

    • @sabbe.dhamma.anatta says:

      Likely not VR, but a BCI wired directly to the visual cortex, or even going beyond sight to touch, taste, or hearing.

    • @Juan-qv5nc says:

      we are already

  • @itzhexen0 says:

    My papers are flying all over the place.

  • @npc4416 says:

    ooooh i know where this is going
    >make a model of a game by reverse-engineering it from gameplay
    >generalize to many games
    >generalize to real life
    >the AI now has an accurate world model of the universe
    >as it gets better, it can calculate and simulate the outcomes of actions in the real world
    >it gets more accurate
    >future prediction
    >genie-like ASI that can do anything you want by simulating the possible outcomes of the different actions it can take, then choosing the ones that get it closest to its desired ideal state based on its world model
    >perform MCTS to develop reasoning and long-term planning ability
    >gamify real life and become better than all humans on earth at winning this game of real life
    >it’s just AlphaGo Zero, but the win state is the goal it is given, the game board is the world model, and the different possible moves on the board are the actions it can take

    The world becomes its chessboard: it runs MCTS over chains of actions and outcomes, the world model is how it calculates the outcome of each move by simulating it, and the win state is a goal we decide. Just like AlphaGo Zero, it won’t need human data, only self-play with RL; an accurate world model / chessboard is enough to achieve superintelligence greater than all of humanity combined.
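
For the curious, a bare-bones sketch of what "MCTS over a world model" means in practice. The world model here is a toy one-dimensional walk and the search is deliberately minimal; it illustrates the idea in the comment, not AlphaGo Zero's actual algorithm or code.

```python
# Toy illustration of "MCTS over a learned world model".
# The world model is a fake 1-D walk; a real system would use a learned
# simulator of the environment instead. Nothing here is AlphaGo Zero code.
import math
import random

ACTIONS = (-1, 0, +1)   # step left, stay, step right
GOAL = 5                # the "win state": reach position 5

def world_model(state, action):
    """Stand-in for learned dynamics: returns (next_state, reward)."""
    nxt = state + action
    return nxt, (1.0 if nxt == GOAL else 0.0)

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> child Node
        self.visits = 0
        self.value = 0.0     # sum of rollout returns seen through this node

def rollout(state, depth=10):
    """Simulate random play inside the world model and sum the rewards."""
    total = 0.0
    for _ in range(depth):
        state, reward = world_model(state, random.choice(ACTIONS))
        total += reward
    return total

def mcts(root_state, iters=300, c=1.4):
    root = Node(root_state)
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded, using UCB.
        while len(node.children) == len(ACTIONS):
            node = max(
                node.children.values(),
                key=lambda ch: ch.value / (ch.visits + 1e-9)
                + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)),
            )
            path.append(node)
        # Expansion: try one untried action in the world model.
        untried = [a for a in ACTIONS if a not in node.children]
        action = random.choice(untried)
        next_state, _ = world_model(node.state, action)
        child = Node(next_state)
        node.children[action] = child
        path.append(child)
        # Simulation + backpropagation.
        ret = rollout(child.state)
        for n in path:
            n.visits += 1
            n.value += ret
    # Act: pick the most-visited action at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("planned first action:", mcts(root_state=0))
```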

  • @gsinkus says:

    We’ll have GTA6 made by AI sooner than actual GTA6

  • @paulkocyla1343 says:

    So the computer is basically dreaming of DooM, and you can see the dream and take control?

  • @novantha1 says:

    This could actually be really crazy in a future implementation. There are so many techniques we can’t use in real-time games because you sometimes can’t access the CPU from the GPU, or vice versa, on top of effects that are just way too expensive (e.g., full path tracing, but also things like transparent fog fields), so you could imagine pre-baking all of that into an AI model and just playing off the AI model, essentially getting those impossible techniques “for free” if you’re already rendering the game with AI anyway. I do think the technique needs a bit of refinement (the first thing that comes to mind is feeding in scalar values for various bits of game state, like health, which isn’t too hard), but I honestly never thought we’d be this close to such a technique.

    Btw: the TPU v5 isn’t that much better than a typical GPU; if your backend uses tensor cores, it’s really not that much faster than a 3090 or 4090, so it’s still sort of in the realm where a (rich) gamer could hypothetically run it on local hardware, particularly if it had been trained with quantization-aware training to go down to int8, for instance.
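
A small sketch of the scalar game-state conditioning idea from this comment: values like health encoded much like diffusion timestep embeddings and handed to the denoiser as extra conditioning. This is a hypothetical illustration, with made-up names, dimensions, and ranges, not the paper's approach.

```python
# Sketch: encode scalar game state (health, ammo, armor) as sinusoidal
# features, similar to how diffusion models embed the timestep, and
# concatenate them into one conditioning vector for the denoiser.
# Function names, dimensions, and value ranges are made up for illustration.
import numpy as np

def sinusoidal_embedding(value, dim=16, max_period=200.0):
    """Map one scalar (e.g. health in [0, 200]) to a dim-sized feature vector."""
    half = dim // 2
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    angles = value * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

def game_state_conditioning(health, ammo, armor):
    """One embedding per scalar, concatenated into a single vector."""
    return np.concatenate([
        sinusoidal_embedding(health),
        sinusoidal_embedding(ammo),
        sinusoidal_embedding(armor),
    ])

cond = game_state_conditioning(health=75.0, ammo=40.0, armor=10.0)
print(cond.shape)   # (48,): this vector would be fed to the model as extra conditioning
```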

  • @Tetrodotoxins says:

    It should play Minecraft with its 3-second memory; the gameplay would be wild.

  • @danguafer says:

    I was kinda expecting to be able to replay classics reimagined with AI a few years from now. But, man, this is a huge leap. It really shows how powerful diffusion models can be. I bet it’s already possible to finetune the model to add new features to old classics.
