ChatGPT with Search, Altman AMA

The Google destroyer, the Perplexity crusher? Or just hype? ChatGPT with Search is here, and simultaneously Altman and co did an AMA on Reddit, covering GPT-5, Sora, SearchGPT and a lot more. Plus, the biggest news of them all: Simple Bench is out.

AI Insiders:

I am now available as a podcast!

Chapters:
00:00 – Introduction
00:36 – ChatGPT with Search
05:45 – Reddit Altman AMA
09:34 – Simple Bench Out

Altman AMA:

ChatGPT with Search:

Perplexity Ads:

Perplexity:

The 8 Most Controversial Terms in AI:

Non-hype Newsletter:

I use Descript to edit my videos:

Many people expense AI Insiders for work. Feel free to use the Template in the 'About Section' of my Patreon.

Joe Lilli
 

  • @Juddersbaby1 says:

    Hi algorithm, I am engaging with this video

  • @pauljones9150 says:

    2:12 “they said to expect it shortly, so expect it in 2030” 😂😂🤣

  • @calrt says:

    2:18 “over the coming months, so expect it around 2030” 😂

  • @ryzikx says:

    I've been less into AI news lately but I will never skip an AI Explained video 👍

  • @DemetriusTrumpClips says:

    The real question is, will we get AGI before GTA6 comes out!?

  • @Neomadra says:

    I feel like the answer they gave on hallucination is quite misleading. It's rarely because human-produced source data has flaws; it's mainly because the model captures language, not the real world. So even with perfect, flawless data, a language model will hallucinate, because modeling language is not the same as modeling the real world. And that happens most often with previously unseen requests and questions.

    • @antonystringfellow5152 says:

      They don’t seem to have any level of understanding. They generate content using statistically probable associations. This can make them appear intelligent because they contain so much data… more than any human brain. But then they’ll produce something really stupid or even contradict themselves. This is because they don’t understand anything in the first place, they only appear to.

    • @iau says:

      Yeah, it was a cop-out answer.

      I believe that until the fundamental architecture of these models is improved to treat facts as something inherently different from the likely next word, it will keep _adjusting_ facts to maximize output likelihood.

    • @adfaklsdjf says:

      They have no internal representation of truth... it's all just word association. Until there's some explicit internal representation of truth, the model doesn't "know" what it does and doesn't "know", so it's bound to hallucinate sometimes.

    • @Blanksmithy123 says:

      Exactly, it’s a weasel answer…

      Even if you curated a model to train only on accurate information, it would still hallucinate.

    • @haileycollet4147 says:

      This would be rather hard to test, but I’d be very curious to see how much hallucination does drop when trained on “perfect” text, all else equal (model arch & size, # training tokens, hyperparams, etc.)

  • @jonp3674 says:

    Search is useless without reliability, so it's really surprising they've gone for it.

    Imo the thing I use AI most for is foreign language practice, and they could nail that: it doesn't really matter if the model hallucinates while you're having a conversation with a tutor, and the model's ability to explain vocab and grammar, create exercises, etc. is excellent, because language is its core jam.

    They could make a really amazing video avatar product that takes the whole market without much difficulty, I think.

  • @JL2579 says:

    Without having tried the new search yet, my previous experience with AI search has been really underwhelming. For simple stuff, googling is often faster, but on more advanced stuff it usually hallucinates or mixes up information horribly. And that's the only case where it would actually be useful: when it can quickly sift through lots of online information, filter out irrelevant stuff, wrong info, and noise, and summarize the results. It's great when it already knows the facts and doesn't need to Google.

    One example was when I wanted to know what percentage of the population in Switzerland owns guns. It repeatedly, even after I made it aware of the problem, gave me an estimate based on dividing the number of registered weapons by the population. That yields far too high a number (for the US, for example, it would yield over 100 percent, because there are more weapons than people): gun owners can and often do own multiple guns, so a much smaller percentage of the population owns at least one gun, but many of those own several. To this day I still don't have an answer 😅

    • @thefinn0tube_ says:

      I just gave SearchGPT a go with your Switzerland gun-ownership search and it gave a pretty adequate answer. Obviously I have no idea of the accuracy of its information, but it seems believable:
      “In Switzerland, it’s estimated that about 28.6% of households have firearms, with 10.3% specifically owning handguns.
      WIKIPEDIA
      This means roughly one in four Swiss households is armed. However, since individuals can own multiple firearms, the actual percentage of the population owning guns might be lower.

      The country has a high rate of gun ownership, with estimates ranging from 27.6 to 54.5 firearms per 100 residents… etc”

    • @Picteon says:

      My experience with search is that it completely forgets what my question was and just repeats the results back to me, even if they're completely unrelated, with zero critical thinking

    • @ZeerakImran says:

      @Picteon I used to get that too, but I no longer have those issues after getting my custom instructions right. It also no longer just gives me mainstream info from obviously biased sources like the BBC or any mainstream company with an interest in the topic. I've noticed it being really good now: it used to be useless, but now it's straight up quite honest with me and admits when something sucks. It's no longer polite, and it's quite critical, which is great.

      It's not providing generic maybe-this-maybe-that nonsense. I specifically instructed it against "maybe" and "could": I may be able to fly, I could be a Dalek... I don't want these words, they mean nothing. So I instructed it to work around them and to be more concise and careful, with integrity and honesty as core values. It now tells me "it is possible (likely) that...", "it is unlikely (possible)...", or "it is possible (unlikely)...". That's how I instructed it to phrase things instead. I also have it not give me generic information that doesn't apply: if I asked for this, I meant this; if you're not sure, ask me for more information, or check online; don't provide information that isn't relevant unless you deem it valuable.

      The downside of my instructions is that they work really well for ME! They will make it worse for a lot of people, since when I'm using it I'm very careful with my words, and I've also instructed it to take what I say to mean exactly what I said instead of "correcting" it by assuming. If I've said something that doesn't make 100% sense, bring it up; don't assume or correct anything, because I can't be wrong. This won't be helpful for people who don't write or speak in a very careful way, but if you have experience with programming, or you're a very technical person or an engineer, it will work really well for you. If I tell it something wrong, I want a wrong answer that is right for what I said, rather than it assuming what I meant and changing it, just as your programming code won't change itself to correct an error.

    • @jan.tichavsky says:

      @ZeerakImran Since when is the BBC a biased source? If anything, alternative media come with a larger bias, because they're trying to push a specific agenda, often with one or a couple of large funding sources (undisclosed, of course).
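The arithmetic trap discussed in this thread (guns per capita mistaken for an ownership rate) can be sketched with a few lines of Python. The numbers below are rough, purely illustrative assumptions for a US-sized country, not official statistics:

```python
# Illustrative only: all three figures are assumed round numbers, not real data.
population = 330_000_000   # assumed population
total_guns = 400_000_000   # assumed total firearms in circulation
owners = 80_000_000        # assumed number of people owning at least one gun

naive_rate = total_guns / population   # guns per capita, NOT an ownership rate
true_rate = owners / population        # actual share of people owning a gun
guns_per_owner = total_guns / owners   # why the naive estimate overshoots

print(f"naive 'ownership' estimate: {naive_rate:.0%}")   # 121% (over 100%!)
print(f"actual ownership rate:      {true_rate:.0%}")    # 24%
print(f"average guns per owner:     {guns_per_owner:.1f}")  # 5.0
```

Dividing total weapons by population only equals the ownership rate when every owner has exactly one gun; with multiple guns per owner, the quotient can exceed 100% while the true rate stays far below it.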

  • @kingdavid9422 says:

    Thank you, AI Explained on the update of SearchGPT.
    Something to note here: many people in the comments are commending SearchGPT (or is it ChatGPT with Search, as you emphasised in your video), especially its user interface. However, we should all remember that OpenAI is desperately in need of funds for the further advancement of its products. So don't be surprised when the interface of ChatGPT with Search gradually starts to change and becomes cluttered with sponsored ads. I hope that will not be the case. Time will tell.

  • @runvnc208 says:

    I don't think anyone is going to read this, but it seems obvious to me that a large model with very good video understanding, especially one trained on something like random physics scenarios and questions about them, with good grounding of language in video representations, _will_ be able to crush Simple Bench. It just needs a spatial-temporal Q&A dataset, maybe with everything in the same latent space, or at least really effectively linked spaces, aimed at these types of questions. Maybe a general-purpose multimodal model that just has a lot of physics experiments, juggling, toddlers playing with blocks, etc. Maybe something like Sora but with a Q&A capability somehow.

    I think this is going to be a lot easier when we get another 10- to 50-fold increase in efficiency or scale, to more easily handle models that train and run inference on video, video transcripts, etc., and can use significantly more RAM. Regardless, I think we could easily see an 82% AI score on Simple Bench within the next year, two years at most.

    • @chrisanderson7820 says:

      When people talk about us running out of training data I think they forget the sheer volume of 3D spatial information yet to be put into these models. The amount that neural nets will learn from embodiment and being “out n about” via video feeds is immense. Stuff like Nvidia labs etc is just the start.

  • @qtbmrs says:

    Our team spent months building a RAG system that uses web-scraped content as its knowledge base. I just tried ChatGPT search; it's as good as what we built 😢

  • @schmutz06 says:

    This is impressively hot out of the oven! I followed it all and wanted to see a breakdown of the AMA, and who better than everyone's top AI explainer. Perfect stuff.

  • @Шестик-ю1ц says:

    A friend recommended this channel to me almost constantly for about half a year, and I was reluctant about it. But it took just one video… and here I am, waiting impatiently for each new one

  • @dcgamer1027 says:

    I just think it's so cool that you saw a problem with the testing data set and, rather than just complain about it, you made your own, and one that seems genuinely useful too.
    Idk I just love that, I love seeing people try to fix problems they see rather than just let them sit there.
    Nice

  • @sagetmaster4 says:

    I think just having additional administrative layers on top will solve the hallucination problem. Big companies just haven’t been working on these architectures because the model itself has been improving so fast it hasn’t been necessary

  • @jeremycronic says:

    Dude, that juggler ladder question is crazy. I tried a few Simple Bench questions before you covered it in this video. I was SO confused by that question. I was like 'the ball is on the ground', but that's not an answer. I got it right after re-reading all the answers a few times, but damn, that would be a wild kind of question for an LLM to figure out. Well done with those questions.

  • @claudioagmfilho says:

    🇧🇷🇧🇷🇧🇷🇧🇷🇧🇷👏🏻 Your videos are absolutely essential for the AI community! I seriously can't go a day without watching them as they're released. They're always concise and straight to the point, with something new every time that no one else is thinking of. The way you break things down is incredible; it feels like watching a movie, way beyond what anyone else is doing in their videos on these models. Thanks for sharing all this with us and making complex AI topics so engaging…

  • @mrmooshon5858 says:

    I've just realized I rarely take the time to thank you for your work. You are, 100%, the best AI-related channel on YouTube, and every time I see you've posted I go "babe wake up, a new AI Explained video just dropped"

  • @anonymes2884 says:

    Simple Bench is a really interesting approach, I like how easy it is to stay ahead of the training set (we as humans can come up with novel “puzzles” along those lines basically ad infinitum). Not sure it’s measuring _usefulness_ (in the commercial sense) as well as some benchmarks but it seems much closer to measuring real-world _reasoning_ than most.

    Also appreciate that the paper’s 8 pages BTW – a lot of AI papers seem to be in the 60+ pages range and I don’t have time to do more than skim them (and watch your summaries of course :).

  • @AIForHumansShow says:

    of course you have your own benchmark now — love this and pls just keep making these
