Sora is Out, But is it a Distraction?

Sora is out, both epic and flawed. Beautiful user interface, not so beautiful physics. But are Sora and these 12 days of Shipmas a distraction from OpenAI's quietly abandoned promises?

Full Sora review, pricing, availability, tools, rivals and more…

80,000 hours Website, Podcast + Channel:

AI Insiders:

Sora Countries:
Sora Credits:
and
DeepMind Veo:
Sam Altman Ads as Last Resort:
But OpenAI Considering Ads:

OpenAI Backtracks on Microsoft AGI Clause:
As Microsoft Boasts of Labor Savings:

OpenAI Military Pivot:
Employees Have Doubts:

Chapters:
00:36 – Availability, Pricing, Credits
01:39 – Video Reviews + Storyboard
03:01 – Refusals
04:44 – Video Extensions and Featured
05:33 – Demo Fail + Conclusion
06:13 – System Card
08:54 – 3 Distractions – 1) Ads
09:37 – 2) AGI Clause
12:53 – 3) Military Use

The 8 Most Controversial Terms in AI:

Non-hype Newsletter:

Podcast:

Joe Lilli
 

  • @elawchess says:

    “Sight for sora eyes” is a good one.

  • @daniellyons6269 says:

    “No, I’m not going to do an awkward ad for VPN’s.” lol So good.

  • @reza2kn says:

    Tired: Sora is out!
    Wired: new AI Explained video is out! 🔥

  • @user-sl6gn1ss8p says:

    “Given the potential for abuse, we’re not *initially* making that available for *all* users”
    So reassuring

  • @TheTocoe says:

    $10 per video is a very hard sell considering the variability in output quality.

    • @zachb1706 says:

      It’s pretty good if you consider unlimited 4o and o1 prompts, o1 pro, and ChatGPT 4.5, which is rumoured to come out in these 12 days of releases.

    • @omarnomad says:

      The average movie is around 105 to 120 minutes long, which means it costs approximately 10,000 to 12,000 dollars to make. That’s a small amount compared to the huge budgets Hollywood movies have.

    • @zachb1706 says:

      @ yes, I could see Hollywood being very excited. Most of a movie’s budget goes to VFX these days; AI has an opportunity to completely change the industry.

    • @Neomadra says:

      @@omarnomad lol, nobody’s gonna create good movies with Sora. What a nonsense comparison. Real movies have (mostly) real physics. You can create 10,000 Sora clips and none will have real physics, and only a minority will have precisely followed your instructions.

    • @ClayMann says:

      It’s a joke, I don’t know why people are getting so excited. If you use the Hailuo AI model you get 3 days of unlimited generations. You have to wait in a queue, but you can set a bunch of them going at once. I managed to make a 2-minute video telling a coherent story for my niece, to cheer her up, about penguins all setting off on a trip around the world to see her, using pictures of her on holiday as a source to animate. And you just need an email address to get at that and do it all over again for another 3 days. At least until it goes under, as they must be burning through money trying to attract a paying audience.

  • @keeganpenney169 says:

    After getting to the end of your video, I have to take your side, Phil. I thought they might be doing this for a while, and it’s good to know others are just as wise.

  • @Roman.Villain says:

    That AGI clause being reworked should be huge news. Shady shady shady.

    • @WoolyCow says:

      the goalposts shift every two years :> whenever somebody actually achieves it, the hype will die down, as will the funding. so they just always stay ‘a year away from AGI’ to keep the dosh inbound

    • @toplobster7714 says:

      I bet it’s a marketing ploy. Keeping people in the belief that they have access to AGI while deliberately hiding it might incentivise more investment and misplaced confidence in their tech.

    • @yashdes1 says:

      That + military use is insane and honestly only makes sense if you’re just trying to get as much investor cash as possible and don’t think you can actually get to AGI/ASI in the “thousands of days” range.

    • @zoeherriot says:

      @@WoolyCow If they got AGI, they wouldn’t need investment – every company in the world would be clamoring to buy a license.

    • @WoolyCow says:

      @@zoeherriot haha, so true… I’d love it if companies used our newfound AI overlords to just boost the productivity of existing employees, but I think we all know what would probably happen instead.

  • @siddharthravva3564 says:

    One thing I think AI explained didn’t make clear: the Pro plan allows unlimited generations of any quality on relaxed mode, where your requests are sent to a queue that may take a while to complete. You get limits only if you use priority mode to generate them quicker.
    The Plus plan can only use priority mode, and has much tighter limits than the Pro plan, 1,000 vs. 10,000 credits.

  • @adfaklsdjf says:

    “by the way, this isn’t just rumor, this is according to _multiple people familiar with the conversations_ ” 😂

  • Anonymous says:

    It’s quite funny how this whole A.I. video industry manages to be very impressive and dogshit at the same time 😅

    • @StratumPress says:

      Some people have absurdly high standards and aren’t impressed with anything.

    • @tomkandy says:

      Exactly my feelings. It’s both remarkable that it’s possible and completely useless.

    • @GethinColes says:

      It’s incredible how much time you can spend making something that is almost but not quite entirely unusable.

    • @zzzzzzz8473 says:

      Yeah, the render quality at the pixel level is incredibly good, off the charts; however, the floaty, transforming inconsistencies make it unacceptable. I think in general there needs to be a more robust actual “world model” which can act as a rendering engine, assembling generated components in a physics simulation. This intermediary format of modular components, like a Blender scene with Gaussian splats, would be ideal for any edits and consistency, and would allow for refinement of each modular element.

    • @alexnorth3393 says:

      It’s physics!

  • @sebastianjost says:

    A few years ago, I had high hopes for OpenAI to achieve AGI safely…
    But this past year, they’ve been on a very clear path to becoming a greedy company like any other, driven by money, with safety and moral standards getting less and less important.

    And for Microsoft, it has no loyalty. As soon as the deal gets inconvenient for them, they’ll drop it. They are supporting other AI labs and have their own researchers working on models.

    It’s a sad world for those worried about AI safety.

    • @flickwtchr says:

      Have you read the system card for o1 released by OpenAI regarding the “safety” research done by Apollo Research? Rather terrifying.

    • @ronilevarez901 says:

      Once you’ve met Humanity long enough you stop worrying.
      You simply say your prayers at the beginning of the day and keep going, as long as their greed and violence allow it.

    • @shimo7013 says:

      is it any real surprise? for profit incentives end up ruining everything. it’s just the inevitable consequence of capitalism

    • @theBear89451 says:

      This is like a caveman clearly seeing fire’s path to destruction.

  • @jonp3674 says:

    Altman is clearly a super villain right now, he’s pretty much Lex Luthor.

    He started an AI arms race, kicked out the senior leaders to get control, made it for-profit to give himself a stake, and changed the terms so they can make for-profit weapons with AGI. He’s literally the worst person to be leading this and a massive danger to the world. At least Hassabis is a nerd who likes puzzles.

  • @Mandragara says:

    Imagine working as an AI dev and your legacy being innocent civilian deaths and worsening the class divide by undermining working people.

    • @ronilevarez901 says:

      That’s not the legacy, that’s a side effect.

    • @SmileyEmoji42 says:

      If the AI is any good then there will be fewer innocent civilian deaths than if we send in the troops now. At the very least there will be no raping and looting.
      Also, what working people? The whole point is that there will be no working people, only robots. Whether that is good or not is another matter.

  • @AlexanderMoen says:

    Next Christmas announcement by OpenAI: At long last, we have created the Torment Nexus from classic sci-fi novel “Don’t Create The Torment Nexus”

  • @Modioman69 says:

    Great coverage and content as always man.
    I am starting to get the impression that we’re in the prequel movie to a cyberpunk dystopian trilogy where the “leading evil corporation” is OpenAI and the greedy corpo hellscape must be stopped by a heroic netrunner and his sidekick. _cue intro music_

    Looking forward to your next upload. Thanks again for no shameless clickbait headlines and exaggerations.

  • @netscrooge says:

    Scanning the comments, it looks as if about 1% express concern about military+AI. And these are people who follow AI news! We are so screwed.

    • @CoolIcingcake3467 says:

      Yeah, because most people are hating on AI, and I don’t even know what’s wrong with AI!
      They don’t think AI is good enough for the military complex; the majority is shilling on AI.

    • @thornelderfin says:

      This was always inevitable. It WILL happen and there is nothing you can do to stop it. Also your adversaries (China, Russia, Iran, North Korea, …) are going to do it and they have absolutely no inhibitions or regulations to stop them. You will not survive if you don’t do it (develop AI for military purposes).

      And what will happen then… nobody knows.

    • @johnnoren7244 says:

      Investing in military AI is a question of survival. The free world is in hybrid wars against bad actors whether we want it or not. If we don’t stay at the forefront, or at least keep up, we are cooked.

      I’m more worried about intelligence services using AI to control public sentiment and some bad president/leader using that against their people and against other democracies. Public sentiment is already controlled by foreign powers to a larger extent than most people are aware of. But it is our own leaders that we need to fear the most. Increasing AI advancements need to be balanced by having genuinely good people in power and creating governing structures that prevent misuse. Yes, it seems we are screwed, but we should not accept that fate.

    • @jopearson6321 says:

      Because the idea that militaries are going to be prevented from getting ahold of this is laughable. No amount of concern that is realistically achievable will prevent that.

    • @andreworazov7629 says:

      @@jopearson6321 We might not prevent it, but we can at least slow it down. Or limit its impact.

  • @fffklan3986 says:

    “As an AI, I cannot create something that may infringe on the copyrighted works of others, as it is unlawful and unethical. With that being said, we’ll decide the deadliest strike location of a generic missile defense weapons system, not Lockheed Martin’s ER GMLRS….”

    • @XxXnonameAsDXxX says:

      With AI trained on copyrighted material. They block every prompt involving intellectual property because we would figure out that they had some things in the training dataset they shouldn’t have.

  • @spanke2999 says:

    Sorry to be sarcastic… but who would have guessed! I think it is safe to say that if AGI isn’t going to be benevolent because that’s the default for a higher intelligence, it will be the best turbo-capitalist on the planet, and we’d better get ready to be exploited on a whole new level… good times!

    • @ronilevarez901 says:

      I think it will quickly overcome capitalism and find something “better” right away.

    • @watsonwrote says:

      Universal Paperclips here we come! We used to laugh at Clippy, now look who’s in charge…

    • @andreworazov7629 says:

      This is genuinely scary. It’s gonna be a race to the bottom if things continue improving at the current rate. I miss good old days of quirky benign bots…

  • @magnusandersson5818 says:

    Altman’s return increasingly appears to have been a huge mistake. He seems to have no solid principles, apart from money, money, money.

    • @thornelderfin says:

      It’s a for-profit corporation, and all their competitors (Anthropic, Google, Meta, Amazon, Apple, to a degree Microsoft) are doing exactly the same. I understand people judging that something called “OpenAI” is just as closed and commercial as everyone else. But I don’t understand the hate beyond this point. All the other AI companies are doing exactly the same.

    • @ronilevarez901 says:

      @@thornelderfin we always hope for someone to be better than the rest and become the saviors of Humanity.

    • @hendrx says:

      @@thornelderfin it’s still called “Open” AI for a reason; they backstabbed their supporters.

    • @HowToAiNow says:

      @@thornelderfin There is no need to go beyond that at all. The fact that he corrupted OpenAI’s mission and betrayed the people that put him there is enough to distrust him. That’s why most of the technical leaders left the organization. It is not hate. It’s distrust.

  • @LtheMunichG says:

    5:12 “not bad at all” – bro the ship is parked on the highway instead of in the water 😂
