New ChatGPT: Hallucinations No More!
❤️ Check out Lambda here and sign up for their GPU Cloud:
The paper is available here:
My paper on simulations that look almost like reality is available for free here:
Or this is the original Nature Physics link with clickable citations:
We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here:
My research:
X/Twitter:
Thumbnail design: Felícia Zsolnai-Fehér –
#openai
Have you seen the Oasis Minecraft AI? It’s an absolutely insane project.
Thanks for the recommendation, I haven’t seen it yet. Super cool – can’t wait to have a closer look!
@@TwoMinutePapers new video? I hope so!
@skyless_moon I do partially fear the might of the scholars killing the relatively short wait times currently enjoyed.
pretty insane, it’s basically a public version of GameNGen
@@TwoMinutePapers It’s INSANE!
Wait a second… Karoly is a Dream Theater fan????? 0:31
Just went to my first concert a couple days ago…it was a Dream (Theater) come true. Couldn’t believe I finally saw them! So good!
@@TwoMinutePapers Oh boy, I saw them live in Milan two weeks ago! So glad you enjoyed it too, it was indeed a dream ❤ I’m sure you also loved the crazy lasers/lights show (as a light simulation researcher by trade!)
So jealous. Seen hundreds of concerts… not one of them…
I’ve been waiting for something like this! Google needs competition SO BAD
You might want to give Perplexity a try. In my subjective experience, it’s been better across the board.
I mean, Microsoft’s Bing AI/Copilot has been using GPT-4 with web search for at least a year now, and it cited sources too.
I hope OpenAI makes a search site. I would use it from day 1
@@Boothiepro It still always felt like a downgrade compared to ChatGPT.
Like Google “Learn About”?
Karoly, the Dream Theater fan let’s goooooooo!!!
Isn’t this basically bing AI?
Similar but not identical
Yes
Bing AI is worse.
Perplexity AI seems better
@@bapo224 I agree. I have the Pro subscription, got a year for free, and it’s really nice.
I find myself prompting Google like an AI more often nowadays. This will make things easier!
Fascinating exploration of AI’s push to address hallucinations! Especially intriguing was the idea of AI’s confidence adjustment; it reminds us how essential it is for these tools to not just deliver answers, but to do so with a level of assuredness we can trust.
A level of assuredness Google* can trust. These models can be biased; don’t trust them…
What can I say? Fact-checking, finally. Something that should have been there from the start, like persistent memory for saved conclusions, or a knowledge base where facts can be checked or aligned.
Neural networks are finally becoming useful.
it’s far more censored than google search – it’s basically useless imo, censorship kills usefulness
I completely disagree, it’s objectively far less censored than Google.
@@RyluRocky He means Google Search not Google Gemini. Google Search is not censored at all.
Google search is censored. Look for things that you know you aren’t supposed to find. Then notice how you don’t find them.
@@malditonuke I meant porn and similar legal things that are usually censored (definitely censored in all AI searches and all AI prompts except Mistral). What exactly did you mean by “what you aren’t supposed to find”? I hope not some conspiracy theory.
@@thornelderfin Compare them with Yandex.
Then yes, Google is a bit censored…
Try searching for “movie piracy sites”, “software piracy sites”, etc.
If AI learns from human texts, why wouldn’t it hallucinate? Don’t we work with incomplete data all the time? So that is what it’s “learning”.
Why should it hallucinate? It could instead say it’s very unsure and that its response is a guess that’s likely incorrect due to incomplete data. Instead it provides a response with the same sense of certainty every time, whether it’s hallucinating or not.
@steve23063 Hi, have you met us humanz? We do that, too!
It hallucinates all the time. For common things it’s okay, as they’re easy to answer. But ask it something even slightly non-trivial and it will give you bullshit answers, while being certain it is not wrong (but it is wrong).
In programming it happens all the time. Ask it something about bash, a very common, widespread tool, and it will likely answer properly. But ask it about the relatively new shell Nushell, which is already 5 years old: it knows the shell exists in general, but it is completely wrong about how it’s used and what code is valid. Invalid knowledge all the time.
Hallucination is a stupid, confusing word.
All we do is hallucinate; that is the structure of neural networks.
The map is never the territory. It’s a computation engine.
It doesn’t HALLUCINATE, it does what neural networks do.
Mind you, they are very, very, very DUMB, insulated networks that are rigid, without the plasticity of the human brain.
They are doing what they were designed to do. We just need more nuance in the feedback loop.
One of the biggest things that kept leaving a poor taste for me when using chatgpt – when it was super confidently wrong. Excited to see it being unsure!
sadly, RLHF will do that, it’s one of the biggest problems in AI currently (aside from stuff like, spatial reasoning)
ChatGPT still will not source from the old forums. I found a reference to what I wanted to see, from around 2003-2011, on Bing.
That Dream Theater search put a big smile on my face.
Me TOO… THAT DID NOT GO UNNOTICED!
I love the subtleties in his videos.
Love love LOVE THIS CHANNEL!!!
I like how he subtly and sarcastically mentioned Nvidia stocks.
Thank you for addressing hallucinations. I find the new search hallucinates much more than GPT-4o with web browsing from before a week ago. Insidiously, it doesn’t just hallucinate titles, authors, and dates, but snippets, which make the fake citations extremely enticing. There needs to be a button to push on hallucinated references that you really want to exist that will make some agent on the back end go out and write the paper as penance for lying.
Finally a competitor to Perplexity
a bit late to the party :p
I wonder what happens to all those sites that rely on advertising for upkeep once they aren’t getting visitors anymore.
As the pie gets bigger they are less likely to be found anyway…
Unless… hmmnnn, what if they aren’t on the first page of Google?
Sure, I wonder what will happen to all those newspapers too!
SOOOO BAD! I’ve been testing it out for a few days. It will copy and paste the exact same response several times in a row.
I even tried giving it false information and it almost always agreed with me! Even after I provided the correct information afterwards, it would still go with its first incorrect answer.
SOOOOO BAD!
Same experience. But so don’t use it for that then. It’s a powerful tool.
This new feature in ChatGPT is incredible! Not just regular search, but interactive answers that dive deeper, cite sources, and adapt to complex questions; this could change everything! Excited to see how this reduces those pesky “hallucinations.” What a time for AI! Thanks, OpenAI!
Is there no “confidence” parameter included with each answer so we know how much we can trust it? Considering we can ask “how confident are you?”, that information must be extractable for users to see.