
This AI Learned Boxing…With Serious Knockout Power! 🥊

❤️ Check out Perceptilabs and sign up for a free demo here:

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube:

📝 The paper "Control Strategies for Physically Simulated Characters Performing Two-player Competitive Sports" is available here:

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here:

Thumbnail background design: Felícia Zsolnai-Fehér –

00:00 Intro – You shall not pass!
00:49 Does nothing – still wins!
01:30 Boxing – but not so well
02:13 Learning is happening
02:39 After 250 million training steps
03:10 Drunkards no more!
03:29 Serious knockout power!
04:00 It works for fencing too
04:20 First Law of Papers
04:43 An important lesson

Károly Zsolnai-Fehér's links:
Instagram:
Twitter:
Web:

  • @marklondon9004 says:

    Now I want to see an AI version of Robot Wars. Well described combatant rules, unlimited training. Last Bot standing wins.

  • @technorazor976 says:

    0:50
    I mean, I would also stop playing if my friend suddenly had a seizure.

  • @oldmandoinghighkicksonlyin1368 says:

    AI has learned to never interrupt when its opponent is making a mistake.

  • @astryl-01 says:

    We should try a simulation where movement costs them energy, to see whether they'd avoid all those small, fast movements.

    • @SSingh-nr8qz says:

      This is a very good comment. In the real world people don’t have unlimited energy reserves; that’s why most untrained people can barely fight for three minutes before gassing out. Since muscles require energy and different actions use different amounts of it, your idea makes a lot of sense for realism. If you had unlimited energy in the real world, you would act completely differently.

    • @tesfatesfaye6262 says:

      Very true

    • @lazarus8453 says:

      It would probably take a year of computing, though.

    • @larion2336 says:

      Agreed. In addition, the reward mechanism needs to be more complex than “I touched the enemy and didn’t get touched.” I note the bots are mostly just tapping each other, though there was that one decent knock. Things like momentum should ramp up the reward considerably, so that a proper full-contact punch is preferred over light jabs (although this is actually not the case in fencing).

    • @sumbody694 says:

      It should also have some way to alternate punches occasionally. One obvious thing is that the computer always goes for the most “advantageous” move it can find: it swung with the leading hand 100% of the time, showing no interest in or understanding of diversionary tactics and more advanced problem solving.
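
The reward-shaping ideas in this thread (an energy cost per movement, plus contact rewards scaled by impact force so full punches beat light taps) can be sketched as a per-step reward function. Everything here is illustrative: the function name, the scale constants, and the inputs are assumptions for the sketch, not anything from the actual paper.

```python
def shaped_reward(hit_force, energy_spent, force_taken,
                  hit_scale=1.0, energy_cost=0.01):
    """Illustrative per-physics-step reward for one boxer.

    hit_force    -- impact force the agent landed this step (force-scaled,
                    so a full-contact punch earns far more than a light tap)
    energy_spent -- actuation energy used this step (penalizes constant
                    small, fast movements, as suggested above)
    force_taken  -- impact force absorbed from the opponent this step
    """
    return (hit_scale * hit_force
            - energy_cost * energy_spent
            - hit_scale * force_taken)
```

With these constants, a free hit of force 10 yields reward 10, while the same hit paid for with 100 units of energy yields only 9 — so the agent is pushed toward economical, high-impact strikes.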

  • @Teth47 says:

    An important thing to remember with these learning algorithms is that they’re going from “less skilled than an infant” to “basic boxing” in *a week*. It sounds like a long time because we’re used to computers operating in milliseconds, but imagine going from not realizing you have limbs to walking around and throwing punches in 7 days. That’s a huge amount of learning, even in this simplified system.

    • @DontfallasleeZZZZ says:

      It’s a week of computing time, it’s not a week inside the simulation. The simulation goes on for a billion steps at 30 Hz, so about 30 million seconds or about a year around the clock, or 4 hours of playtime every day for 6 years, which is interesting because it’s in the same ballpark as the time humans require.

    • @gwills9337 says:

      @@DontfallasleeZZZZ great points, I was wondering about that!!

    • @SSingh-nr8qz says:

      @@DontfallasleeZZZZ Time is relative to the perspective and scale at which it’s measured. A great example is geological time vs. the span of human existence. Then you go out to space and get all kinds of weird effects like time dilation. In this case we have “computer” time.

    • @nothingTVatYT says:

      Furthermore, even a “big” neural network is nothing compared to a human brain. Also, the sensors, i.e. the input variables, are hardly comparable to what we can sense and train on, which of course goes hand in hand with the many sensory values we process.
      In the early days, someone working in AI claimed we were trying to make a creature with an insect-like knot of neurons behave intelligently. Considering that, it’s amazing what can be done.

    • @haraldtopfer5732 says:

      But why do they always start from scratch? Couldn’t they utilize pretrained networks or classic algorithms as a starting point and build on that?
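
The simulated-time estimate a few replies up is easy to verify as a back-of-the-envelope calculation (the step count and 30 Hz rate come from that reply, not from the paper itself):

```python
steps = 1_000_000_000          # ~a billion training steps
step_rate_hz = 30              # simulation stepped at 30 Hz

sim_seconds = steps / step_rate_hz             # ~33.3 million seconds
sim_years = sim_seconds / (365 * 24 * 3600)    # ~1.06 years around the clock

# Or spread over 4 hours of "playtime" per day:
days = sim_seconds / (4 * 3600)                # ~2315 days
years_at_4h_per_day = days / 365               # ~6.3 years
```

So roughly a year of nonstop simulated experience, or about six years at four hours a day — indeed in the same ballpark as a human picking up the sport.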

  • @VHenrik007 says:

    As a fencer I’m really looking forward to what this will evolve into. Just like when AI started becoming better at chess, we learnt a lot from it, and I believe the same can apply to more physical sports. What a time to be alive!

    • @unknownr3802 says:

      The thing is, you actually can learn loads from AI: you can literally make one live infinite lifetimes just to do one thing. The only problem is most of them find some glitch that is impossible for humans to reproduce, lol.

    • @yevgeniyvovk9788 says:

      @@unknownr3802 Then improve the physics of the simulation and the limitations of the bot so it approaches reality. But you do have a good point. Turn-based strategy games are somehow fundamentally different from physical sports when it comes to learning from AI.

    • @unknownr3802 says:

      @@yevgeniyvovk9788 That is true. I once saw a paper where it was sort of physics based: it had joints that locked past a certain angle, etc., but the machine found a way to glitch a joint into spasming and flinging itself in a certain direction, then learned to control where it got flung. So sometimes adding more physics and limitations actually helps your robot glitch the system, though you could definitely improve it to some degree.

    • @aminmw5258 says:

      We can learn from AI through its decision making, not its execution. AI will either be perfect in terms of execution or find glitches in the simulation.
      That is why we can learn a lot from turn-based games.

      Speaking of decision making, sports have been using data science for quite some time now.

    • @creestee2229 says:

      I’m so sorry to hear that ur a fence

  • @wongwu says:

    “I fear not the AI that has trained in 10 billion simulations once. But I fear the AI that has trained in one simulation 10 billion times.” – Bruce Lee probably

  • @qhc157 says:

    “Everyone has an algorithm ’till they get punched in the mouth.” – AI Tyson

  • @dionyzus2909 says:

    “this ai showcases agents that can learn boxing”
    red guy falls for no reason whatsoever
    “wait a minute — that’s the soccer ai, sorry”

  • @ElectricFuture says:

    Should’ve taught Tyron Woodley some of this

  • @Soulsphere001 says:

    I think the reason the blue AI kept losing when the red AI fell over is overtraining for one expected outcome. The blue AI expected an attack and only knew how to win when it was being attacked, but didn’t know how to proceed when it wasn’t. It overcompensates for the expected attack and then falls over.

  • @chrismullarkey3181 says:

    This is the second video I have seen from Two Minute Papers. Excellent cutting edge content. Well done.

  • @rian8024 says:

    The funny thing is that, because their bodies have identical measurements, they’ve learned that cross countering is the best strategy. It would be interesting to see the same experiment with differently proportioned characters.

    • @pierrelebonet6053 says:

      Yes, I’d guess even small differences in mass would create a much more diverse game.

    • @abdelhakyac7285 says:

      Where is that cross countering? I see none.

    • @jimmythe-gent says:

      Yes, exactly. Also, they’re point fighting, not trying to win by disabling the opponent. I’d like to see this again with better AI boxers, each with a health bar: max damage for certain headshots and certain body shots (liver, solar plexus maybe), and then see what they come up with. Will they take little jabs to the face to land a huge cross to the chin?
      It would be amazing to see the techniques after thousands of simulations.

    • @raylopez99 says:

      @@jimmythe-gent Rope-a-dope?! BTW, using an AI engine they came up with a chess engine (AlphaZero) that beat all the more algorithmic engines and mastered Go and poker, games previously thought to be immune. Progress for the machines, bring ’em on!

    • @jimmythe-gent says:

      @@raylopez99 Yeah, the AI machine beat that other chess AI, I think it was called “fish …xxx..something”

  • @k-fedd says:

    Once the AIs get advanced enough, you should save copies of individual behavioural patterns, name them, and start an arena. Maybe live-stream fights? Would this not be awesome?

  • @JTKatz07 says:

    This was strangely motivating. We all start off stumbling, but over time we learn and grow. I’m glad these two stickmen can now box.

  • @samc2950 says:

    I’d love to see a boxing simulation where one character has a shorter build or shorter wingspan and see how it adapts to its disadvantage

  • @joesomebody3365 says:

    Would love to see a future where AI in video games can dynamically adapt to what you’re doing, hopefully without becoming impossible to defeat.

    • @tatsuke-sama3946 says:

      I see a rise in controller buying

    • @zsomborszepessy4351 says:

      That’s already happening; it’s been happening for a while now, actually.

    • @hxhdfjifzirstc894 says:

      That would be interesting… a video game that gets harder the more you play against it. I think this would really help people learn strategy.

    • @frankjaeger1711 says:

      @@zsomborszepessy4351 I know of a few games with that adaptive difficulty, but it’s usually a mechanic, not AI actually learning to kill you. In TLOU, for example, enemies will flank you while others aim at where they think you are until you make a move. Many times I’ve panicked when that happened, thinking I could be quicker and headshot an enemy, but since they already have the advantage of aiming at me, I get hit 90 percent of the time. Enemies will also try to stealth-attack you, especially when they’re the last ones standing. Games like Stalker and the old F.E.A.R. games have some of the best AI, but I don’t know how well they adapt to what you’re doing. The AI in TLOU 2 is really tough on the hardest difficulties, but I think that’s mostly because they basically have aimbot: if I’m in their sights, they will almost always headshot me no matter how much I’m moving. You basically have to catch them by surprise, since most head-on fights will be game over.

  • @splintedvibesvibes1591 says:

    “After 130 million steps of training, it can not even hold it together”

    My life

  • @alexeibenhauss7217 says:

    The rear hand/power hand should offer an increased reward over the jab hand (just like a real cross offers increased power and damage if it lands), to stop every exchange from becoming a stiff-jab stalemate. And, as many others have said, fighters with slightly different dimensions would also be a good change.

  • @HeWhoLaugths says:

    Having taught martial arts for a few years, I found it surreal seeing how the AI moved at different stages of its learning process. It looked remarkably like someone actually learning to fight.
