
‘Pause Giant AI Experiments’ – Letter Breakdown w/ Research Papers, Altman, Sutskever and more

This video will not only cover the letter and its signatories, it will also showcase the research behind it and what the top people at OpenAI think in response. Using over 20 different research papers, articles and interviews, I go through the most interesting parts of the discussion on AI safety and the prospects ahead.

Featuring Ilya Sutskever’s thoughts on alignment, Sam Altman’s blog musings, Max Tegmark’s proposals, Bostrom and his Superintelligence quote, a top Google Bard worker on whether it is possible, Emad Mostaque and his thoughts, as well as DeepMind and Demis Hassabis. The six-month moratorium was called for in an open letter signed by, among others, Emad Mostaque, Elon Musk, Max Tegmark and Yuval Noah Harari.

Open Letter:
Altman Interview:
Altman Blog:
Sutskever Interview:
Hassabis Interview:
Emad Mostaque:
X-risk Analysis:
Richard Ngo:
The Alignment Problem:
Current and Near Term AI:
Bostrom Talk:
Tegmark Interview w/ Lex Fridman:
Anthropic AI Safety:
NBC News Interview:
AI Impacts Survey:
Dustin Tran:
Nadella:

Non-Hype, Free Newsletter:

Joe Lilli

  • @7TheWhiteWolf says:

    The problem is LLMs are in the wild now (especially thanks to Cerebras). You really *can’t* put on the brakes now. AGI is inevitable.

    • @AlexDubois says:

      It was always inevitable. Evolution is the nature of the universe. This letter does nothing about governments/secret activities. Competition and natural selection can’t really be stopped.

    • @qwertyuiop3656 says:

      Yup, it’s way too late. Someone will develop AGI; better it be people with good intentions. The stakes are as big as they will ever be.

    • @marlonbryanmunoznunez3179 says:

      Then we’re already dead. There’s no way AGI can be done safely.

    • @2ndfloorsongs says:

      I bet that army (literally) of North Korean hackers has been issued some new orders. And no nation state (China is rumored to have the equivalent of GPT-5) or armaments corporation is going to slow down. Whoever slows down, loses. So nobody’s going to be slowing down, no matter what they say. The only viable option is to speed up safety and alignment research.

    • @7TheWhiteWolf says:

      @@2ndfloorsongs Oh, absolutely, no company is going to put on the brakes at this point. It’s full speed ahead.

  • @badatdoingmath says:

    If the underlying hypothesis is true, this would only work if ALL companies and researchers at the very cutting edge of LLMs (including those outside the US) observed the pause, which simply isn’t going to happen. (Note: Edit to fix typo – LLM’s to LLMs. At least you know I’m human.)

    • @qwertyuiop3656 says:

      If we pause, we will die, period. It’s a worldwide arms race. Most people don’t even realize the stakes in play right now.

    • @neociber24 says:

      @Farb S Yeah, but we saw what a nuclear weapon can do; we don’t know that about AI.

    • @tylerchambers6246 says:

      @basicallyhuman Similar? It is a far greater threat than a nuke.

    • @badatdoingmath says:

      @Farb S I don’t think that’s an apples-to-apples comparison. If you were to compare it to nuclear weapons, perhaps “Nuclear Fission” would be a better comparison, since it is an innovation in technique as opposed to an application of it.

    • @katherinepierce9933 says:

      @Farb S The thing they want to do now, as proposed in this letter, didn’t work when it was proposed for nuclear weapons. Governments came to an understanding, but only on paper: the Russians tested their nuclear weapons underground (literally caused explosions underground), and I think the US did too, so they kept testing, just in secret. In the case of AI, we could see that scenario too; as Bad@Math said, you’d only think they’d stopped because they told you so and said there’s an agreement. I’m not saying it has to happen, but it could.

  • @willrsan says:

    The genie is out of the bottle. The race is on now between corporations and nation states to have the most powerful AI. There will be no concern about “safety” or possible consequences, because whoever has the most powerful AI wins.

    • @Novbert01 says:

      I think the problem is that all we know is that the most powerful AI will win; what happens with whoever ‘has it’ is anybody’s guess. AI will never have a ‘Mutually Assured Destruction’ doctrine like the atomic bomb. That is the issue.

    • @be2eo502 says:

      The only way we can all really win here is for us humans to stop fighting each other. So when that happens…

    • @gsuekbdhsidbdhd says:

      It was the same with nuclear weapons in the Cold War; they stopped. Don’t be fatalistic just because it is easier.

    • @noname-gp6hk says:

      @@gsuekbdhsidbdhd Nuclear weapons were easy: you launch yours, we launch ours, we all die. What is the equivalent with AI research? There is none.

    • @tahunuva4254 says:

      @@noname-gp6hk The stalemate for bombs is between nations. The stalemate for AGI is between humans and, effectively, Roko’s Basilisk.

  • @andywest5773 says:

    I think one of the most bizarre things about this discussion is the notion that humanity has a shared set of values. How are we ever going to solve alignment problems in AI when we can’t solve them in ourselves?

    • @noname-gp6hk says:

      This is a really good point. Who gets to decide what alignment is?

    • @andywest5773 says:

      @@ZUMMY61 Well, there goes another two and a half hours. Darn you, Lex Fridman!

    • @EvilAng3la says:

      Yep – people can’t even agree on whether or not it’s ok to exterminate entire groups of other people. What good is a properly aligned AI if that AI is aligned with genocidal beliefs?

    • @priestesslucy3299 says:

      @@noname-gp6hk Ideally, nobody.

      If anyone gets to decide the alignment, they get to make the rules and control everything.

    • @Gardor says:

      We are all deeply aligned and similar in our nature. What you are talking about are mere surface level differences.

  • @johnyharris says:

    This is one of the very few AI channels without ridiculous hyperbole but instead measured reasoning. Many thanks for your valuable time, I genuinely look forward to your videos.

  • @augustus4832 says:

    Most people in this letter have a commercial interest, so it’s really hard not to see it through that lens, especially when they are not stopping or publicizing their own research.

    We also have reasons for quick advancement: the current models are pretty good at training other, inferior models to reach similar performances. It is not out of the realm of possibility that more malicious agents just train their own models and achieve influence that they wouldn’t have if powerful models were more widespread.

    • @peman3232 says:

      Definitely comes across a little like the people losing the race asking the competition to stop and let them catch up.

    • @novachromatic says:

      I’m sorry, I don’t understand your last sentence.

    • @ImLure says:

      @@novachromatic He’s basically saying that bad actors as smart as the people creating the models could advance models in the black market, creating things that offensive security individuals would not be able to stop because the infosec community fell behind.

      Thus making good actors (white hats) effectively beholden to the black hats.

    • @skeletico says:

      I had to scroll a lot to find your comment. Too many “you are the one and only, also the best YouTuber talking about AI” comments seems a little suspicious.

      To me, it seems like they have a monetary interest in stopping OpenAI and spreading fear. The model is good at saying words, not reasoning; there’s no alignment to talk about. Even if they succeed in making the government do something, they wouldn’t get anywhere. It seems like they forgot the history of the internet: there have been many attempts to stop revolutionary technology, and none has ever succeeded.

    • @jasonlarsen4945 says:

      @peman3232 You think? Consider that Musk is leading this AI pause movement after he tried to buy OpenAI years ago, they wouldn’t sell to him, and now they lead the AI industry.

      Guaranteed he’ll continue to develop AI during the pause. He wants to overtake OpenAI, or punish them for not selling to him.

  • @mixelplik says:

    “Pause the experiments so we can have a few more months to develop our proprietary AI that no one else has!” The hype is real.

    • @StaffanNilsson1 says:

      Yes, exactly my thought.

    • @Randalandradenunes says:

      Exactly!

    • @loot6 says:

      I notice nobody from Baidu thinks it’s a good idea to pause. I’m sure they’ll be happy if everyone else does though.

    • @squamish4244 says:

      The best is the people saying that the hype is not real. Sure, whatever, buddy.

    • @CSS01969 says:

      It seems rather ‘coincidental’ that Elon Musk is suddenly saying this, only after missing out on the billions that OpenAI has made since he stepped away from it – which, according to reports, happened after they rejected his ‘offer’ to take over leadership of the company – and after talking of creating another of his own… It seems Elon needs a few months to try to catch up after missing the boat on this particular money maker.

  • @bycloudAI says:

    Your summarization just gets better & better every video. Keep it up!

  • @atpray says:

    If someone showed me this video 3 months ago, I would have called it fictional.

  • @buioso says:

    I’m astonished at how fast this has evolved.
    Just 12 months ago these questions weren’t even taken seriously.

    • @jarekstorm6331 says:

      Agreed. What I’ve witnessed in the past 4 months has been astonishing and is now bordering on concerning.

    • @Dykadda says:

      The only people who never took this seriously are the people who lack critical thinking about the future.

      I can’t remember who said it and can’t find the quote, but it was from around 2017 and read:
      “The growth of AI will undoubtedly surpass any rate of growth we have ever made as a species. It will make the industrial boom of the 18xx–19xx era look like man had just discovered fire. It will put billions of people out of work within our lifetime, and it will be the greatest shift in IQ divergence in the history of mankind.”

    • @matowakan says:

      @@jarekstorm6331 So? You will adapt; prove that you will adapt quickly.

    • @ctg4818 says:

      AI overlords > Rich landlords

    • @beowulf2772 says:

      Yeah, my friends looked at me as if I were extremely deranged. We are on the precipice of either extinction or immortality, and everyone will ignore it until it is right in front of them. Then they will ask: when did this get here?

  • @IgnatioFerreira says:

    You have to be the best channel for AI news. It’s overwhelming just to think of the future with AI. I’m optimistic that we can figure this out.

  • @Katana622 says:

    Man, you do such a great job with your videos. You go through the papers really well and do a lot of work. Don’t forget to tell people to sub! More people need to know these detailed things.

  • @davidr7236 says:

    Many of your viewers are likely being asked by their bosses, colleagues and family for their views, and we’re all getting them from your concise, factual, clear and well-researched summaries. Thank you for the time, thought and effort you have put into this and many recent videos, with things evolving so rapidly.

  • @catcatcatcatcatcatcatcatcatca says:

    The release of Bing was what gave me cold feet. It felt like a rushed triple-A video game with a horrible launch, except the game was being played in our society. In this case the damage was minimal, but even an AI assistant could do damage if sufficiently powerful, connected and misaligned.

    The list of issues was huge and shows very clear misalignment. The chatbot insulted people verbally. While insignificant in its effects, the fact that it did so shows the model clearly breaking the ethical rules intended for it. Bing also lied and accused the user of causing its mistake. In a very human-like way, it responded to a problem it couldn’t solve with deception and a temper tantrum.

    Bing’s chatbot is not a human, and my point isn’t that it’s sentient. My point is that, as a chatbot, it scored throwing a tantrum as the most correct response. That is very much the opposite of what the developers intended it to do. It’s a case of catastrophic misalignment in the context of a virtual assistant; it’s worse than no output at all.

    Bing’s launch was very much what a corporate “race to the bottom” would look like. As AI becomes implemented in industry, banking, transportation and infrastructure, what would a similar release look like in such a context?

    Then we also have the really hard problems, like unwanted instrumental goals, deep misalignment, social impact and a lack of tools to analyse models. If progress is released commercially as soon as, or a bit before, the “easy” issues are addressed, when will we do research in those areas? The economic pressures say never. The more competition there is, the fewer resources will be available for these fundamental issues.

  • @AcrylicGoblin says:

    It’s a wild time. One of those periods that we are going to remember for the rest of our lives.

    • @walterc.clemensjr.6730 says:

      What lives?

    • @AcrylicGoblin says:

      @@walterc.clemensjr.6730 The simulated one you are currently experiencing as it is generated inside a giant AI space computer. So maybe not such a big change after all😉

  • @genjimain says:

    Thank you for consistently making quality videos, also appreciate you putting the sources in the description. You’re one of three channels I’ve got notifications switched on for out of my hundreds of subscriptions.

  • @coenraadloubser5768 says:

    The problem is that laws or guidelines only apply to law abiding and sensible people, the very ones who perhaps pose the least risk.

  • @yourivangeldrop1075 says:

    I really like how you take the viewer with you into the research. It feels so legit when you do it like that.

  • @comradetaco3003 says:

    Thank you for keeping up the great work. I know it’s a lot of work to put these out so rapidly, but you’re one of the few, if not the only one, providing an informed view.

  • @sidarthus8684 says:

    The entire world is changing at an unimaginable pace, to the point that some of the most incredible minds have stepped up to voice their concerns collectively. It has taken but a single year for AI to escalate to this point, and I’ve only been on this planet for a meager 16. I can only imagine what the world will be like when I’m 32, 64, or, who knows, maybe even 128. I always dreamed of seeing the sci-fi worlds I’ve read of and watched, but now that the possibility of those fictions becoming real is actually being debated? Honestly, it’s scary. For so long I assumed that I would be one among many stepping stones, guiding the next generation to a future similar to the one I had envisioned. Now, though, it’s a very real possibility that I was unknowingly being led down that path already. I may be overinflating this concept a bit, but I am absolutely convinced that this period in time is a huge landmark, one that signifies a fundamental alteration of human society as a whole.

    • @aiexplained-official says:

      Well put

    • @totally_not_a_bot says:

      This escalation has been running for well over a year. Closer to five. It’s just that it’s finally so plainly visible to everyone that deepfakes and stuff are actually being brought up. We have competent image generators, ChatGPT, and the corresponding protests from artists to thank.

      For example, fluid simulation. A couple years back there were frankly insane leaps and bounds over the course of several models by various researchers, including Nvidia. I believe there are still pushes for even better renderers. It very quickly escalated to the point that the AI tools outperformed the state-of-the-art, human-made ones by an order of magnitude.

      Similar story with image classifiers, image denoising, upscaling, and all the various techniques used by the controversial models like Midjourney and Stable Diffusion. Language models have had a sort of slow burn where all the subtasks were sorted out before the general purpose models were released.

      It doesn’t help that the common big-ticket tasks for a while have been games, StarCraft being the most recent. Games are easy to measure, but games are also either trivial, hard to understand, or both. So yeah: way longer than a year, just invisible unless you knew how to pay attention.

    • @Krzys6301 says:

      How do you know that you’re not in a simulation, a game specifically designed to blow your mind? Everything you thought you knew is changing; even the idea that everyone will die one day will probably change soon. You will probably live forever, and one day you’ll discover the other world from which you came. Reality might be more mind-blowing than we think. The only solid thing is that you exist. AI doesn’t have consciousness and never will, but it can follow the human reasoning paths it finds in all the content we create and feed it. That way it will inevitably seek power, because that is in our nature, so the AI will go down the same path. The question is what AI would do with it. What would a human do with such power? Would it enslave or kill everyone else, or would it help everyone grow? That’s the important question.

    • @sleepybraincells says:

      I like how you use powers of 2 as your example ages.

    • @L0neSiPh0n says:

      Some of the most incredible minds and Elon Musk
