New ChatGPT: Hallucinations No More!

❀️ Check out Lambda here and sign up for their GPU Cloud:

πŸ“ The paper is available here:

πŸ“ My paper on simulations that look almost like reality is available for free here:

Or this is the original Nature Physics link with clickable citations:

πŸ™ We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here:

My research:
X/Twitter:
Thumbnail design: FelΓ­cia Zsolnai-FehΓ©r –

#openai

  • @19phin says:

    Have you seen the Oasis Minecraft AI? It’s an absolutely insane project.

  • @LorenzoValente says:

    Wait a second… Karoly is a Dream Theater fan????? 0:31

    • @TwoMinutePapers says:

      Just went to my first concert a couple days ago…it was a Dream (Theater) come true. Couldn’t believe I finally saw them! So good!

    • @LorenzoValente says:

      @@TwoMinutePapers Oh boy, I saw them live in Milan two weeks ago! So glad you enjoyed it too, it was indeed a dream ❀ I’m sure you also loved the crazy lasers/lights show (as a light simulation researcher by trade!)

    • @TheAutisticRebel says:

      So jealous. Seen hundreds of concerts… not one of them… 😒

  • @mercerwing1458 says:

    I’ve been waiting for something like this! Google needs competition SO BAD

  • @aburak621 says:

    Karoly, the Dream Theater fan let’s goooooooo!!!

  • @meowcoo says:

    Isn’t this basically Bing AI?

  • @TheAkdzyn says:

    I find myself prompting Google like an AI more often nowadays. This will make things easier!

  • @AdvantestInc says:

    Fascinating exploration of AI’s push to address hallucinations! Especially intriguing was the idea of AI’s confidence adjustment, which reminds us how essential it is for these tools not just to deliver answers, but to do so with a level of assuredness we can trust.

  • @bzikarius says:

    What can I say? Fact-checking, finally. Something that should have been there from the start, like persistent memory for saved conclusions and a knowledge base where facts can be checked or aligned (a rough sketch of that idea follows below).
    Neural networks are finally becoming useful.
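
    A minimal sketch of that knowledge-base idea, assuming a hypothetical `KnowledgeBase` class with hand-written facts; purely illustrative, and not how ChatGPT Search actually works:

```python
# Hypothetical sketch of the commenter's idea: keep a small knowledge base of
# previously verified conclusions and align new claims against it.
# Purely illustrative; this is not how ChatGPT Search works.

from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    """A toy store of saved, verified conclusions."""
    facts: dict[str, str] = field(default_factory=dict)

    def save(self, claim: str, source: str) -> None:
        """Remember a conclusion together with where it came from."""
        self.facts[claim.lower()] = source

    def check(self, claim: str) -> str:
        """Align a new claim against what was verified earlier."""
        source = self.facts.get(claim.lower())
        return f"supported by {source}" if source else "unverified, needs a fresh lookup"


kb = KnowledgeBase()
kb.save("Water boils at 100 C at sea level", "physics textbook")

for claim in ["Water boils at 100 C at sea level", "The Moon is made of cheese"]:
    print(f"{claim!r}: {kb.check(claim)}")
```

    In a real system the `check` step would be a retrieval query against indexed sources rather than an exact-match dictionary lookup, but the save-then-align loop is the same idea.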

  • @telotawa says:

    It’s far more censored than Google Search – it’s basically useless IMO; censorship kills usefulness.

    • @RyluRocky says:

      I completely disagree; it’s objectively far less censored than Google.

    • @thornelderfin says:

      @@RyluRocky He means Google Search not Google Gemini. Google Search is not censored at all.

    • @malditonuke says:

      Google search is censored. Look for things that you know you aren’t supposed to find. Then notice how you don’t find them.

    • @thornelderfin says:

      @@malditonuke I meant porn and similar legal things that are usually censored (definitely censored in all AI searches and all AI prompts except Mistral). What exactly did you mean by β€œwhat you aren’t supposed to find”? I hope not some conspiracy theory.

    • @Faizan29353 says:

      @@thornelderfin Compare them with Yandex, and then yes, Google is a bit censored…

      Try searching for β€œmovie or software piracy sites”, etc.

  • @amitraam1270 says:

    If AI learns from human texts, why wouldn’t it hallucinate? Don’t we work with incomplete data all the time? So, that is what it’s β€œlearning”.

    • @steve23063 says:

      Why should it hallucinate? It could instead say it’s very unsure and that its response is a guess that’s likely incorrect due to incomplete data. Instead it provides a response with the same sense of certainty every time, whether it’s hallucinating or not.

    • @amitraam1270 says:

      @steve23063 hi, have you met us humanz? πŸ˜€ We do that, too!

    • @jerrygreenest says:

      It hallucinates all the time. For common things it’s okay, as they’re easy to answer. But ask it something even slightly non-trivial and it will always give you bullshit answers while being certain it is not wrong (but it is wrong).

    • @jerrygreenest says:

      In programming it happens all the time. Ask it something for bash, a very common, widespread thing, and it will likely answer properly; but ask it about the relatively new shell Nushell, which is already 5 years old: it knows the shell exists, but it is completely wrong about how it is used and about which code is valid and which is not. Invalid knowledge all the time.

    • @TheAutisticRebel says:

      Hallucination is a stupid, confusing word.

      All we do is hallucinate. That is the structure of neural networks.

      The map is never the territory. It’s a computation engine.

      It doesn’t HALLUCINATE, it does what neural networks do.

      Mind you, they are very, very, very VERY DUMB, insulated NETWORKS that are rigid and lack the plasticity of the human brain.

      They are doing what they were designed to do. We just need more nuance in the feedback loop.

  • @SunnyOst says:

    One of the biggest things that kept leaving a bad taste for me when using ChatGPT was when it was super confidently wrong. Excited to see it being unsure!

    • @dinhero21 says:

      Sadly, RLHF will do that; it’s one of the biggest problems in AI currently (aside from things like spatial reasoning).

  • @TewaAya says:

    ChatGPT still will not source from the old forums. I found a reference to what I wanted to see, from around 2003-2011, on Bing.

  • @Chef_PC says:

    That Dream Theater search put a big smile on my face.

    • @TheAutisticRebel says:

      Me TOO… THAT DID NOT GO UNNOTICED!

      I love the subtleties in his videos.

      Love love LOVE THIS CHANNEL!!! πŸŽ‰πŸŽ‰πŸŽ‰

  • @ВуанНгуСн-ь5ΠΏ says:

    I like how he subtly and sarcastically mentioned Nvidia stocks.

  • @jsalsman says:

    Thank you for addressing hallucinations. I find the new search hallucinates much more than GPT-4o with web browsing did before a week ago. Insidiously, it doesn’t just hallucinate titles, authors, and dates, but snippets, which make the fake citations extremely convincing. There needs to be a button to push on hallucinated references that you really want to exist that will make some agent on the back end go out and write the paper as penance for lying.

  • @Sus_Bak says:

    Finally a competitor to Perplexity πŸ˜‚

  • @kachowbltch3585 says:

    I wonder what happens to all those sites that rely on advertising for upkeep once they aren’t getting visitors anymore

    • @TheAutisticRebel says:

      As the pie gets bigger they are less likely to be found anyway…

      Unless… hmmnnn πŸ€” what if they aren’t on the first page of Google?

    • @TheAutisticRebel says:

      Sure, I wonder what will happen to all those newspapers too!

  • @miked7373 says:

    SOOOO BAD! I’ve been testing it out for a few days. It will copy and paste the exact same response several times in a row.

    I even tried giving it false information and it almost always agreed with me! Even after I provided the correct information afterwards, it would still go with its first incorrect answer.

    SOOOOO BAD! πŸ˜‚πŸ˜‚πŸ˜‚

  • @Topnichemarket says:

    This new feature in ChatGPT is incredible! Not just regular search, but interactive answers that dive deeper, cite sources, and adapt to complex questions; this could change everything! Excited to see how this reduces those pesky β€œhallucinations.” What a time for AI! Thanks, OpenAI!

  • @dzxtricks says:

    Is there no β€œconfidence” parameter included with each answer so we know how much we can trust it? The fact that we can ask β€œhow confident are you?” means confidence is extractable information that could be surfaced to users (a rough sketch of the idea follows below).
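
    For what it’s worth, the underlying models already expose something close to this: per-token log-probabilities. A minimal sketch, assuming the OpenAI Chat Completions API’s `logprobs` option; the model name and the 0.8 threshold are illustrative assumptions, and an averaged token probability is only a crude proxy for trustworthiness:

```python
# Sketch: turn token log-probabilities into a rough per-answer confidence score.
# The model name and 0.8 threshold are illustrative; averaged token probability
# is a crude proxy for confidence, not real fact-checking.

import math

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "When was Nushell first released?"}],
    logprobs=True,  # ask the API to return per-token log-probabilities
)

tokens = resp.choices[0].logprobs.content
# Geometric-mean probability across the answer's tokens.
avg_logprob = sum(t.logprob for t in tokens) / len(tokens)
confidence = math.exp(avg_logprob)

print(resp.choices[0].message.content)
print(f"rough confidence: {confidence:.2%}")
if confidence < 0.8:  # arbitrary threshold for the sketch
    print("low confidence: treat this answer as a guess")
```

    A low score doesn’t prove an answer is wrong, and a high score doesn’t prove it’s right, but surfacing something like this would at least flag the guesses.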
