OpenAI’s New ChatGPT: 7 Incredible Capabilities!
❤️ Check out Lambda here and sign up for their GPU Cloud:
Play the Tron game:
Sources:
📝 My paper on simulations that look almost like reality is available for free here:
Or here is the original Nature Physics link with clickable citations:
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here:
My research:
X/Twitter:
Thumbnail design: Felícia Zsolnai-Fehér –
An AI writing another AI? 🤯
Two more papers down the line, AI papers will write new AI papers 🤣
We already are AIs; this is just reinventing the wheel, because reality is not as complex as people think.
check out sakana ai scientist
🙏🏻
What a time to be holding on to our papers!
But that's not all! Just imagine where we will be TWO MORE PAPERS DOWN THE LINE!
Looks like the future of Minecraft civilization simulations
at last
“People won’t care much about AI until it affects their lives”
100% you are right ✅
Today's times are only for developers and businessmen.
The simple ones will learn by 2027
As a dev or system admin it's pretty good right now. o1's lack of internet access and of the latest library versions is a shame, but it's still incredibly reliable; there's very little you need to fix most of the time. Even if you know what you're doing, it's at least good for boilerplate and test generation.
I like that almost half of the video is funny strawberry images…
this is INSANE!!!!!! i love your content
I skipped a heartbeat thinking ChatGPT 7 was already out
same but I skipped 2 heartbeats
@@anubhavmondal840 i skipped 3 heartbeats
How do you know the snake game it made up wasn’t already in the training set one way or another?
He doesn’t. He’s just a grifter.
Hahaha, clearly you don't understand how LLMs train. The model doesn't "remember" specific code snippets or games from its training, as it doesn't have direct access to the data it was trained on after training is complete. Instead, it generates new outputs based on patterns it learned during training. You can just ask it to change something in the snake game, and if it succeeds then it's not just a copy and paste. PS: all humans stand on the shoulders of giants; we all copy others' work to some degree.
@@Noname-km3zx It has access to the internet and can find, copy and adapt existing code already published.
@@Noname-km3zx While this is a misconception a lot of people have, the comment you're responding to does not strongly suggest it's the case here.
While LLMs do not have any access to their training data from which to pull information/code snippets/etc, they are still much better (typically) at tasks within their training set than they are at tasks not within it. This can cause other problems, such as overfitting, where an AI can be really good at the training tasks, but generalise poorly to anything else.
So, wondering if something is in the training set to judge an LLM’s abilities by is entirely legitimate.
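The overfitting point above can be made concrete with a toy model (not an LLM, just an illustration): a high-degree polynomial fit to a handful of noisy samples will nail the training points while doing much worse on unseen ones.

```python
# Illustrative sketch of overfitting: fit a degree-7 polynomial to 8 noisy
# training samples of sin(2*pi*x), then compare training error against error
# on a held-out grid of unseen points.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)

# With 8 points and degree 7, the polynomial can interpolate the training
# data almost exactly...
coeffs = np.polyfit(x_train, y_train, deg=7)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# ...so training error is tiny while held-out error is much larger.
print(f"train MSE: {train_err:.2e}, test MSE: {test_err:.2e}")
```

The same asymmetry is why "was this task in the training set?" is a fair question to ask before crediting a model with general reasoning.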
The physicist is Kyle Kabasares. He has many videos of o1-preview solving PhD-level physics problems. It even corrects him on questions he got wrong and gives him the correct answer.
What a time to be alive indeed.
I just brushed up my resume in a few copy and paste minutes with ChatGPT, just like that it’s done and already submitted to three different companies. 🙌 It’s a great time to be alive! God bless to all who are reading this! 🙏
MattVidPro cameo-ing in a Two Minute Papers video feels funny and surprising.
I took a Biology course where ChatGPT was provided as part of a required textbook. It had substantial difficulty answering rudimentary questions where I asked about additional context. Additionally, it became mostly incoherent after I pressed it a bit. To be fair, I was looking for responses that didn’t repeat the textbook nearly verbatim (I did, in fact, read the textbook), and I think that line of questioning opposed the instructions given to it, which might indicate an error by the textbook publisher.
I’m optimistic that this technology will improve, but I’d be interested in metrics to test how well it can teach information, and how internally consistent it can remain.
It being a really good search function is already quite nice
And it seems like researchers are still not done with adding more layers and steps onto the techniques.
I love how everyone is calling it ChatGPT or GPT o1 even though it's actually a new line of models separate from GPT, and this first model is called OpenAI o1.
I hate it, I clicked on this video thinking there was a ChatGPT update but it’s just an o1 showcase.
It’s the same GPT model fine-tuned on tons of synthetic data
Here is my question. If a Tron game were not anywhere in the training data (though I assume it is a popular programming exercise, so many versions of the game exist on the internet), would the AI be able to construct it from a prompt? Personally, I want to separate the hype driven by quality training data from the reality of what an AI agent can produce through pure reasoning.
Try it! In my experience, yes.
I am really happy to see that the simulated human bodies at the end by NVIDIA had toes! I'm looking forward to the video about it.
You are just showing problems that are in the training data. It’s parroting answers and isn’t intelligent at all. Predicting the answers to an IQ test, after being trained on those answers, is not impressive.
This is astonishing; it created a perfect recreation of my computer vision NBA referee model, and it recreated my OpenCV gym workout app better than I had done it, adding its own UI enhancements and improvements.
What a time to be alive!!!
I'm thinking that conflating an AI's ability to reach and reproduce the right training data with its ability to reason and produce new data will leave millions of people very disappointed in the near future, when the AI is asked to reason and won't be able to live up to the hype.
Reproducing a methodology in a paper is not PhD level.
The whole point of a methodology is to be a recipe for reproducing what you did.