
Manus AI – The Calm Before the Hypestorm … (vs Deep Research + Grok 3)

Is Manus AI the memecoin of the AI world, or legit? I’ll compare it to OpenAI’s Deep Research, Operator, Grok 3 DeepSearch and more to find out. I’ll also let you in on some of the secrets of what makes a good hype campaign, the estimated costs of Manus AI, and where it is strong. Other news (yes, Gemini image editing and research hacking, I mean you) will have to wait for a few more hours, as millions enquire about Manus AI.

AI Insiders ($9!):
Patreon Vid:

Chapters:
00:00 – Introduction
00:46 – Hype Campaign
02:40 – Single, Public Benchmark
03:12 – What is Manus AI?
04:22 – Test 1
05:12 – Cost and Rate Limits
06:15 – Test 2 vs Deep Research + Grok 3 DeepSearch
08:24 – Test 3 (not AGI)
11:10 – 4 Trends in AI in 2025
11:37 – Hype Works

Manus AI:

Xiao Hong Interview:

Gaia Benchmark:
MIT Report:

Information Report:

Hype Examples:

Mistakes:

Tools and Code:

Non-hype Newsletter:

Podcast:

Joe Lilli
 

  • @crowogenesis says:

    Calling out AI hype is very welcome. I can’t stand how OpenAI says, for literally every one of their new models, how the testers are feeling the AGI

  • @AllisterVinris says:

    I was starting to think it had been a while since we had news in AI. Thanks for the update as always!

  • @carlt.8266 says:

    Checking all day to see if you would release a new video on everything happening. It is the only channel I am really craving, haha

  • @jackcjones7196 says:

    Phil, please upload more videos. This is the only channel that I visit immediately when I get on the internet

  • @jt2325 says:

    I’m ready for the signal over all the noise

  • @virgiliovargas3052 says:

    This is just another Devin moment

  • @peckish6527 says:

    Thanks for being a real one and explaining these developments with a realistic take on their capabilities/use cases

  • @blindcosine6346 says:

    Last time I was this early, humans were still wondering if AGI was possible.

  • @SirQuantization says:

    See, this is why whenever some new ‘hype’ AI news comes out I always resist getting excited until I’ve seen the latest AI Explained video.
    Thanks for your dedication and hard work as always, Phil. I really can’t describe to you how much I appreciate it. You’re genuinely doing a public service by sharing this information and also your informed opinions.

    So on that note, I have a question. Do YOU think AI is going to be completely replacing 95%+ of programmers in the next 2 years? If not, do you think it’ll be much longer after that? 3 years, 4 years? 10 years? I’m genuinely curious. I see a lot of debates about it online. Programmers seem to be under the impression that their jobs are completely safe for the next 10-20+ years but people who are deep in the AI sphere seem to think it’s coming in the next 6-12 months. What are your thoughts?

    Then I’d also ask more broadly, do you think we’re likely to see an AI in the near term that’s going to be able to do ALL computer related work? I have quite a few friends who do accounting and other types of spreadsheet work. Are their jobs in danger in the near term? If not, how long do you think they’ve got and should I be warning them about this? I don’t want to come off as an alarmist but I feel like people need to be preparing themselves for it if it IS coming in the near term.

    • @aiexplained-official says:

      I think it will replace around 90% of the code in around 18 months, but then the remaining 10% takes a further 5+ years, leading to immense productivity gains, with humans still overseeing critical code in 15+ years.

    • @aiexplained-official says:

      The real question, to your second point, is whether fuzzy general reasoning, or common sense, emerges from RL on verifiable domains. I think it partially does, but with big problems (like reward hacking, aka overoptimization). If it wholly is, all white collar jobs will have problems in 5 years. If it partially is, the more experienced colleagues will be safe for a decade+.

    • @otheraccount312 says:

      @@aiexplained-official As a dev with 8 years’ experience, I just cannot see how humans would be trusted to oversee critical code in 15+ years.

      Reward hacking only applies to tasks which aren’t 100% verifiable. While it’s definitely a lot of work to create fully verifiable *practical* examples for doing reinforcement learning on code, it’s not impossible. (not just code but everything from start to finish of product building)
      On things that can be 100% verifiable we have a path for these systems to become superhuman. Whether that takes 1 year or 7 years I’m not sure, but 15 years is just too much time to not have built up enough problems to get to superhuman capability, along with memory/context window advancements and such.

      I guess the answer to the question “how does it take 15+ years to completely replace humans?” is that memory gains like Titans turn out to be COMPLETE duds and no one can figure it out for years and years. But even with that, it seems like reasoning models have longer and longer effective context windows, so maybe “context windows are all you need”.

      The (very valid) argument right now is that codebases are so large and complex AI cannot possibly understand it all.
      I suspect that same argument will be flipped on its head before long. No one engineer understands all parts of any large codebase. With memory advancements, it’s possible that AI can have a better grasp of the entire codebase in the near to medium term, making it irresponsible for humans to be in the loop at all.

    • @LucaCrisciOfficial says:

      At the pace of AI advancement, potentially most white-collar jobs could be automated very soon, in part already now. That is bringing a profound change, so we don’t know how jobs will change or exactly what will happen if technology development keeps up this pace

  • @jonathanlivingston7358 says:

    The best feature of your channel is your honesty. You feel completely unbiased and unbought by anyone. Amazing!

  • @CarletonTorpin says:

    3:10 – ‘Few of you who haven’t heard of Manus AI’. I’m one of those ‘few’, although I’m a subscriber and Patreon supporter of AI Explained. Frankly, since I avoid Twitter entirely, AI Explained is basically the only trusted source I use for knowledge on the AI realm.

    • @user-cb5jv7ow6u says:

      Being reliant on a single source can be a bit dangerous but I agree in spirit, AI Explained is my favorite AI news channel.

    • @CarletonTorpin says:

      @@user-cb5jv7ow6u I agree, and I think I’ve curated as good of a trust-list as possible for now, relative to YouTube. Any others you can suggest?

    • @maciejbala477 says:

      @@user-cb5jv7ow6u it is rather hard to find non-hype merchants for sources, though. AI Explained is the only one I follow, otherwise I just ask people who use said tools and models, and use them myself. I think that’s enough

  • @RevealAI-101 says:

    Love your no BS work Phillip. Bravo 🎉

  • @toddwmac says:

    Still my goto AI channel, and why I always wait to get Phil’s perspective on “World Changing, Planet Aligning and Reality Altering” news. Thanks, mate!

  • @eTas84 says:

    It’s a very interesting product! But, for me, it also highlighted the issue with this degree of agentic autonomy — which is arguably where the magic comes from. I gave it a data analysis task and it autonomously made the choice (hidden in one of the scripts) to make up plausible results, which it referred to as ‘enhanced results’. This was an ‘else’ as a backup, if the task was perceived computationally intense, or if for any reason one of the analysis steps fails. The final output looked magical but it was also made up — something it didn’t mention in its output reports. This was fairly easy to spot when checking through its analysis scripts but I can imagine that if used by someone who is not a subject expert, this degree of autonomy would be highly dangerous in generating seemingly credible, completely fake outcomes, as the agent decides to save on compute.

  • @RevolutionNotEvolution says:

    Thanks for providing a refreshingly hype-free review without resorting to the opposite extreme. And hey, you even managed to bake in some humour 🤣

  • @sumitbindra says:

    “far easier to hyperventilate about all the cool things it can do if you don’t check their accuracy” is a top, top class observation

  • @OverLordGoldDragon says:

    For reformatting, prompting separately works well for me:
    “Summarize these results in a visually appealing manner; include a table, and organize information into separate grouped sections”.
    (It’s a normal ChatGPT session after Deep Research.) I favor GPT-4.5 for reliability (summarizing not only accurately but intelligently).

  • @crafty1098 says:

    So kind of a hype story rather than a major advance, but not Hands of Fate, either. If I were them, I’d like some reviews like this one out there for when the hype dies down. Often, behind the hype there isn’t anything, or not that much. Having something good albeit not earth-shattering means it is actually something investors should pay attention to.
    BTW thanks for only posting news when newsworthy events happen. I know the algorithm and even some subscribers want a regular and heavy flow of videos… but I know when I see you post something that it’s worth posting about, not just “hey, everybody, welcome to Thursday.” You’re the only channel of any kind that I have set to give notifications.

  • @suhhhy9701 says:

    You deserve the credit for the best AI podcaster / youtuber. Thanks for the great work, and waiting for more content 🙂

  • @danisaksson3214 says:

    My man. If it weren’t for this channel I would have real trouble following the real developments of artificial intelligence. Your adherence to your principles is why this is pretty much the only dedicated AI channel that I keep coming back to. I’ve seen others that I’ve largely blocked because they’re either all about hype and nothing about keeping it real, or they’re just generalizing use cases way too much.

    Even if a company is kind to you, it’s definitely incredibly important to keep your focus on the readers and consumers, so you’re doing the right thing. I don’t know if they’ll get their feelings hurt; perhaps your way of doing things limits the access you have to models etc., but it also makes your stuff way more valuable to those that come here and do listen. We get a reasonably nuanced perspective that helps us when we talk with other people in our circles. Your way of doing things helps people a lot, I want that to be very clear. Thank you for your work.
