
‘Sparks of AGI’ – Bombshell GPT-4 Paper: Fully Read w/ 15 Revelations

Less than 24 hours ago, a paper was released that will echo around the world. I read all 154 pages in one sitting. The paper suggests GPT-4 has ‘sparks of Artificial General Intelligence’. This is not just hype: I go through 15 examples detailing exactly what the unrestrained GPT-4 is capable of.

Insane highlights include the monumental ability to use tools effectively – an emergent capability not found in ChatGPT. I detail the kinds of tools it has already demonstrated it can use, from calling external APIs to being a true personal assistant, from a Fermi-question answerer to a Mathlete and a handyman. This paper may well change your thoughts on the state of AGI.

That is just touching on the multitude of implications of this bombshell paper, which was originally titled 'First Contact'…

Sparks of AGI Paper:
3D Game:
Augmented Memory:
Adept AI Photoshop:

Non-Hype, Free Newsletter:

Joe Lilli
 

  • @envynoir says:

    Man you keep releasing those bangers, I don’t even know if we deserve this fast-paced high-quality content! Props to you!

  • @lion13375 says:

    The example you gave of GPT-4 being able to use different APIs to be a good personal assistant is legit already implemented in the entire Microsoft 365 suite. Have you seen their video? It’s insane; the world literally changed the day GPT-4 was finally released.

  • @DaveShap says:

    I have already done quite a bit of work on endowing these models with intrinsic motivations. I call it heuristic imperatives. My book is called Benevolent By Design. It’s also integrated into my (and others’) work on Cognitive Architecture.

  • @WilliamsDarkoh says:

    This channel is really something else… This man read 155 pages and gave an extensive analysis less than 24 hours after its release… Mind-blowing. Mark my words, he’s gonna hit 100k within the next 3 weeks, and if he manages to diversify the content a bit more between light and extensive AI news to grab a bigger and more diversified following, I wouldn’t be surprised to see him at 300k by 2024.

    • @aiexplained-official says:

      Thanks man. How would I diversify?

    • @born2run121 says:

      @@aiexplained-official I’m assuming he means talking about different topics and how AI relates to them. For example, talk about video games and how AI can revolutionize them, or a specific job category and whether AI would replace or enhance it 🤷🏿‍♂️

    • @ClearSight2022 says:

      Yeah he can read and summarize faster than GPT-4 😀

    • @jacksondebenham says:

      @@aiexplained-official A really great way to diversify and hit that juicy Venn diagram of viewership would be to ‘react’ to other creators (Linus and Luke on the WAN Show come to mind) talking about AI, either critiquing or adding to their conversation with your more specialized set of information. Loved the video, please keep up the good work!

    • @homeyworkey says:

      @@aiexplained-official I (personally) love what you’re doing; plus, if you’re reading all these papers, you don’t have time to diversify as well as read the papers.
      The only thing I could argue is maybe change your logo to something else. You never have a facecam on, so it doesn’t hold any bearing; maybe some cool AI logo, idk.

  • @crypt8919 says:

    Guys, I just found out about GPT-4 being combined with Wolfram|Alpha. I can’t wait to hear AI Explained’s opinion on how this will change things. Also, stunning dedication by him to read so much. Thank you!

    • @mrcool7140 says:

      I personally find the fact that it is able to do relatively advanced algebraic manipulations from simply reading insane amounts of text and predicting the next token *far* more interesting than its ability to call an API… be that a calendar, the weather, or Wolfram Alpha.

    • @sanderhoogeland9161 says:

      @@mrcool7140 sure, I agree, but that doesn’t mean we should not make it more useful.

    • @ShawnFumo says:

      @@mrcool7140 though I do also wonder if we can make a model that has access to tools while it initially starts to train vs adding them later. Like, if reliable arithmetic and logic are outsourced from the start, could we get similar abilities with a much smaller model size?

    • @FriedChairs says:

      That is not interesting to me if it’s just going to parrot info from Wolfram. I saw a blog post by Wolfram and the examples looked like just that. It’ll be more interesting to me if it blends info from Wolfram with many other tools to create something more unique.

    • @sanderhoogeland9161 says:

      @@FriedChairs What was announced today is that ChatGPT will be able to use several different plugins. People have already made some videos on it, and I am sure this channel will as well. Also, I don’t really understand what is wrong with ChatGPT getting factual information on top of what it can already do.

  • @neatodev2249 says:

    I love the word choice in “sparks” here, it reminds me of the discovery of fire, like from an AGI perspective, we have our stone tools and are just learning how to create sparks. We’re still unsure whether we have the right materials, conditions, and techniques to ignite the flame, but once it lights, we relinquish our control, and for better or worse, our lives will never be the same.

    • @LoLingVo says:

      The big similarity between fire and GPT-4 is the “for better or worse” part. Fire (AGI) can be used to light a campfire, tell stories, and grill marshmallows (help humanity in many ways), but on the other hand, it can also be used for committing arson (starting huge misinformation campaigns, worsening the Great Resignation, etc.), and all we can do to stop that from happening is deploy more regulations. Boycotting AI will do nothing but make it worse for EVERYONE.

    • @jsteinman says:

      Love this

    • @landrypierce9942 says:

      @@LoLingVo The difference is that this industrial revolution is built to automate everything to the point that humans are irrelevant. Why are we trying to make ourselves obsolete? The first industrial revolution and each one after took away physical labor, but humans were still needed for our brains. What next? AI can already make fairly convincing art and basic music. Are we just going to be sitting around doing nothing soon enough? The proliferation of AI will ultimately be the end of all human pursuits and achievements.

  • @bassem500 says:

    I believe we have already reached a breakthrough with AI. The rest is so easy it is scary. Examples are: giving the current AI access to tools like a calculator with an API; defining a map of knowledge domains and how to recognise them (knowledge domains would be things like know-how, theoretical sciences, applied sciences, behaviour, social sciences, etc.); standardising on an API to feed an AI engine or exchange between AI engines; introducing memory pages to hold this or that “thought”; introducing a mechanism to define goals based on the initial analysis… but the one which would be a surefire path to sentience is giving an AI a purpose as simple as “survive” and the agency to implement it. (A minimal code sketch of the calculator-tool idea follows at the end of this thread.)
    We need serious safety measures and work on AI ethics right now!

    • @odw32 says:

      “GPT, can you work on safety measures and ethics for AI for me”

    • @blink182bfsftw says:

      Yeah, and there are even more powerful models coming faster and faster, and they get cheaper to deploy. At some point someone without a lot of resources could set one loose.

    • @v1nigra3 says:

      I think its biggest weakness is the fact that it loses its information after its sessions. I think it’d be FAR superior if it were allowed to access its previous sessions as ‘memory’.

    • @drghaamhussain says:

      @@v1nigra3 nope.

    • @DAndyLord says:

      We don’t compete with them for resources, so they aren’t a threat. Indigenous North and South Americans were wiped out because they competed for food and land. Computers don’t compete with us for any rare resource. A civilization of billions of ASIs can live in my coffee table. For the most part the only resource they need is electricity.
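
The “calculator with an API” idea raised at the top of this thread can be sketched as a simple loop: the model emits a tool request in its text, a thin wrapper executes it, and the result is fed back for a final pass. The sketch below is only an illustration under assumed conventions; `fake_llm` stands in for a real model call and the `CALC(...)` marker is an invented convention, not an actual OpenAI or Wolfram interface.

```python
import ast
import operator
import re


def fake_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an HTTP request to a hosted model)."""
    if "TOOL RESULT" in prompt:
        return "128 guests times 3 slices each is 384 slices, so you need 48 pizzas."
    return "I need arithmetic here: CALC(128 * 3 / 8)"


# Tiny arithmetic evaluator so the "calculator tool" does not rely on eval().
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}


def calculator(expr: str) -> float:
    """Evaluate +, -, *, / expressions parsed from the model's tool request."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))


def answer(question: str) -> str:
    draft = fake_llm(question)
    match = re.search(r"CALC\((.+?)\)", draft)
    if match is None:
        return draft                                 # no tool call requested
    result = calculator(match.group(1))              # run the "calculator API"
    followup = f"{question}\nTOOL RESULT: {match.group(1)} = {result}\nFinal answer:"
    return fake_llm(followup)                        # second pass sees the tool output


print(answer("128 guests each eat 3 slices and a pizza has 8 slices; how many pizzas?"))
```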

  • @bowieinc says:

    I’ve been trying to explain to the people around me what a big deal this is and I’ve just not been able to relay the magnitude. It’s honestly extremely exciting and equally scary.

    • @mr.textwall5327 says:

      It is. Terminator? Matrix? Detroit: Become Human? Forget it. What may happen if future GPT versions achieve Singularity is grander than any imagination. And we _may not even notice._

    • @johntowers1213 says:

      Let me guess, there will be many responses along the lines of “it can’t be creative” or “it can’t ever reason,” that kind of thing?

      Basically, there will be a lot of people unable to accept the direction we are heading in at an alarming rate. I was recently told with a startling amount of confidence that these systems have already peaked, and only minor improvements are possible going forward.

      I share the feeling of both excitement and fear. There are so many ways that this could end badly, but it’s equally possible that it could change the world for the better for all of us by an equal margin.

    • @iantaakalla8180 says:

      And what percentage of the time will we use these tools to help people? The answer is 0%, an actual 0/(however many nightmares AI will produce) probability, simply because these tools will first be used in the worst ways possible.

    • @nikitaw1982 says:

      Feminism, inflation, covid and climate change. Weaken the masses so when robots do the work people herded into cattle cars.

    • @plaiche says:

      @@LongJourneys and they need a boatload of energy…

  • @bycloudAI says:

    This is such a great summary, thank you for spending your time reading it through and highlighting the key points

  • @buioso says:

    This is not simply a new technology. This is a cornerstone in human history.
    Also, this video is a masterpiece, go on man. You have a new subscriber.

  • @milksliced says:

    1:10 Tool use
    2:07 Image understanding
    2:35 Coding
    3:21 3D games
    4:02 Mathlete
    4:39 Fermi Qs
    4:59 Actual PA
    5:42 AI handyman
    5:56 Mapping
    6:31 ToM (Theory of Mind)
    7:06 Joke punchline problem
    10:30 Misinformation problem
    10:46 Data admission problem
    11:21 Intrinsic motives
    12:10 Thought on urgency

  • @SebastienBubeck says:

    Hi, author of the Sparks paper here, thanks for this video, you did a FANTASTIC job at summarizing the work.

    • @aiexplained-official says:

      Thank you Sebastien, it is an incredible paper.

    • @jan7356 says:

      Very cool! Super inspiring work. Thank you 🙏 

      But I have made an observation: lots of it does not seem to reproduce with the current publicly available GPT-4 model (March 24th, 2023). Why is that?

    • @markenki says:

      @@jan7356 As explained in the paper, and mentioned in the video, the released version was modified to improve safety and reduce biases. These modifications affected other areas of the model’s performance as well.

    • @jan7356 says:

      @@markenki oh. Sad.

  • @suyahatesntr says:

    With all these developments happening in less than a year, I can’t imagine the world ten years from now.

  • @NOSTahlgia says:

    People just don’t understand how much this stuff is going to change our day-to-day.

    • @Dereliction2 says:

      Almost nothing will go untouched.

    • @HaitiSpaceAgency says:

      Yeah, corporations are going to use it to increase profit margins 8x, and the upper middle class is going to join the former middle class in their new lower class.

    • @lwizzit says:

      I do. I’m currently using it to build a database for my biz that’s entirely outside the scope of my ability. It works! We are going to finish up the user interface tomorrow.

    • @username34159265 says:

      I think people are able to imagine sci-fi worlds with improved versions of this tech, but they’re not very capable of imagining it happening to them in the near future. I think we’re a few years or less from an inflection point, not many decades.

    • @v1nigra3 says:

      Yeah, we thought it would take hundreds of years for the world to truly change, but at this point I’d give it 10 years tops to become unrecognizable.

  • @DynestiGTI says:

    8:15 never realised joke-telling was a discontinuous task. It’s cool to take trivial things and dissect them into their fundamentals, like mathematics.

    • @paweld says:

      I don’t think it is. Or at least it doesn’t have to be.

      I can think of jokes that are funny not because of a punchline, but because of the absurdity of the material (look up the James Acaster cabbage prank). Or jokes which are only funny due to the context, like the screaming goats in the Thor movie.

    • @QuarkTwain says:

      I think it could help to walk it through the steps of joke writing. I got a very good response from ChatGPT for the following prompt, but I admit I wasn’t able to figure out follow-up questions that worked out very well.

      Please list ten premises for funny jokes about AI. Don’t write the actual jokes, but instead explain the humorous conceit behind each one.

    • @QuarkTwain says:

      This joke it wrote wasn’t too bad:

      “Our chatbot just sent out a message that says ‘your product is garbage and you should be ashamed of yourself’,” said the CEO. “I’m so sorry, that’s completely unacceptable. I blame the AI for this.” “Actually, sir,” said the technician. “That message was written by your marketing team.”

    • @winwinmilieudefensie7757 says:

      @@QuarkTwain That’s badly written, I still don’t get it. How many people are in this scene?

    • @QuarkTwain says:

      @@winwinmilieudefensie7757 Agreed! So this was the conceit: “AI as a futuristic technology that misunderstands or misinterprets current human culture: This premise plays on the idea that AI might not fully understand the nuances of human culture and language, leading to humorous misunderstandings.”

      And the situation: “A company’s chatbot mistakenly sends out an offensive message to customers, and the CEO blames the AI system instead of taking responsibility for the mistake.”

      So I guess the CEO wants to blame the AI for making this statement, but it was the marketing department who said it first. And perhaps the AI didn’t understand from social cues that it shouldn’t repeat that kind of statement to the public.

  • @Deletaste says:

    I always wondered why GPT was so bad at detecting sarcasm and humour, and you did a great job explaining it. I truly think that when an AGI that can detect sarcasm and humour with great accuracy gets developed, it will be a milestone.

    • @IronShovel says:

      It’s not bad at detecting sarcasm and humor (possibly), but rather it can’t use humour because it can’t know the punchline when it starts writing.

    • @dougchampion8084 says:

      You have to realize it’s cosplaying a well-behaved chatbot. The raw model is an unfettered genius; it’s fully capable of detecting and dishing out sarcasm. That was present in GPT.

    • @tylerchambers6246 says:

      Other transformer-based AIs detect sarcasm and humor perfectly fine and can even explain what makes a joke funny. ChatGPT, the one publicly available, is sort of neutered so that it doesn’t say offensive, politically charged or controversial things. Here’s an example: I asked GPTNeoX what makes the chicken/road joke funny, and it said:
      ”The reason the joke “Why did the chicken cross the road? To get to the other side.” is funny is because of the incongruity between an unthinking animal instinct and rationally deciding something as basic as a destination in traveling, as humans often do. The idea that the chicken had any reason at all to cross the road is amusing.”
      And another: I asked it to give me an example of sarcasm. This is GPTNeoX again; it said:
      “An example of sarcasm is a remark like “You think you’ve had enough pizza yet?” after your friend has consumed nearly a whole pie meant to be shared. One knows very clearly that they have had enough, so asking them is meant to be sarcastic.”

    • @tylerchambers6246 says:

      @@dougchampion8084 Yeah I gave the guy some examples from a more unrestricted LLM

    • @prosewat99 says:

      Google’s AI can explain why most jokes you feed it are funny. That is an abstract, multi-layered analysis of a complex task.

  • @TheChadavis33 says:

    I think the mistake so many people made when GPT-3 came out is that they underestimated the momentum of improvement possible for these systems. They talked almost as if these things were static: “Yeah, it’s good at some things, but look at everything it isn’t good at, so we need people around to get good at prompts.”
    This will all be trivial. We are going to be useless and not part of the workforce within a few years. We are staring into the abyss.

    • @Andytlp says:

      Never been so unsure in my life. I thought we would have something like GPT 20-30 years from now. But even GPT-4, which is completely styling on ChatGPT 3.5, is over a year old, and OpenAI likely has GPT-5 trained and in testing. The masses at large are still unaware the AI revolution already happened a month ago.

    • @ryanb9749 says:

      ​@I’m the captain now Since the mid-2000s I thought this would happen around 2036. And up until about a month ago, 2036 seemed accurate. But now it seems the AI revolution may be closer to 2026.

    • @TheChadavis33 says:

      @@Andytlp
      It’s all rather bewildering isn’t it? I’m a kid of the 90s and early 2000s. I would have never even imagined this was in the cards for me and my family. It’s honestly unbelievable. And yes, most people have no fucking clue. The world is being made anew as we speak, and most don’t even realize it.

    • @StkyDkNMeBlz says:

      @@Andytlp It’s gonna experience the same exponential growth as other technologies, like how storage went from needing an entire room to store a few megabytes to a microSD capable of storing terabytes’ worth of info.

    • @maheshkanojiya4858 says:

      Exactly one year from now it will start replacing jobs, as companies will buy the premium version rather than hire an employee.
      It will become very difficult for freshers to get into jobs and for mid-level staff to stay in jobs.
      ….
      Students and parents will rethink the concept of schooling, as for the first time in human history knowledge has no value, because all knowledge work will be done by AI.
      In such a scenario indecency, aggression, crime and pathetic ways to earn money will be promoted.
      Just one year, and we will all see the beginning of the worst time in modern human history.

  • @pbjandahighfive says:

    The technology is incredible. The potential impact it’s going to have on the world population, the economy and the workforce is absolutely terrifying. I really don’t think we’re yet ready for this.

    • @absolstoryoffiction6615 says:

      It’s still outdated…

      Well… To me, at least…

      May Existence be unravelled if humanity is willing…

    • @latebird791 says:

      You know we’re not ready because most media reporting centers on silly mistakes the AI makes or its bad jokes. Totally missing the forest for the trees.

      The only sane conversation right now is how we reorganize society when most work is performed by AI and embodied AI, because that’s happening sooner than anyone thought a year ago.

    • @vinceelliott4362 says:

      I’m generally an optimist re the future. However, after understanding the state of play here, I’m wondering if GPT might give me some hints re building a bunker?

    • @immortaluglyfish2724 says:

      “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
      – Frank Herbert, “Dune”

    • @absolstoryoffiction6615 says:

      @@immortaluglyfish2724
      Weak mortals… Even stars can rule over humanity.

  • @suleyk4063 says:

    When you described the different layers of models at 10:08, I quickly realized that our brains work in a similar fashion. One part of the brain is good at quick, intuitive, snappy thinking and terrible at long and complex tasks. For those, a slower part gets used for critical thinking and planning. We are currently participating in and watching the development of “the perfect brain” (a rough code sketch of the idea follows below).
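
The layered fast/slow idea in the comment above can be illustrated with a small control-flow sketch. The two functions below are invented placeholders for real model calls, so this is an assumption-laden toy rather than the architecture described in the paper: a cheap, intuitive pass drafts an answer, and a slower, deliberate pass critiques and revises it until it is satisfied.

```python
from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    approved: bool = False


def fast_model(task: str) -> str:
    """Placeholder for the quick, intuitive layer (e.g. a small, cheap model)."""
    return f"First guess for: {task}"


def slow_model(task: str, draft: str) -> tuple[bool, str]:
    """Placeholder for the slower, deliberate layer that critiques and revises."""
    revised = f"{draft} (checked step by step against '{task}' and revised)"
    return True, revised


def solve(task: str, max_rounds: int = 3) -> Draft:
    """Fast layer drafts, slow layer reviews; loop until the reviewer is satisfied."""
    draft = Draft(fast_model(task))
    for _ in range(max_rounds):
        ok, revised = slow_model(task, draft.text)
        draft = Draft(revised, approved=ok)
        if ok:
            break
    return draft


print(solve("outline a 3-step proof").text)
```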

  • @afnan6731 says:

    I honestly did not think AI was so close to autonomy until your video. You did a wonderful job elucidating it and summarizing the paper. We are living in exciting times.
