Wow, World-Class AI For Free, For Everyone!
❤️ Check out Lambda here and sign up for their GPU Cloud:
📝 The paper "The Llama 3 Herd of Models" is available here:
Try it out:
1. (I think US only)
2. (This should work everywhere, make sure to choose the appropriate 405b model with the gear icon)
3. I think you can try it here too:
If you find other places where it can be run for free, please let me know in the comments below, I'll try to update the list here with it!
📝 My paper on simulations that look almost like reality is available for free here:
Or this is the orig. Nature Physics link with clickable citations:
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Loyal Alchemist, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here:
My research:
X/Twitter:
Thumbnail design: Felícia Zsolnai-Fehér –
What a time to be alive!🎉
Now I just need to figure out how to get my hands on 800GB of VRAM, then it'll be free! Oh, and a solar farm.
Solar farm ain’t enough (too slow), need nuclear fusion
@@Nulley0 That's not true, if you have enough panels you could do it
@@nicholasbutler2365 then you’d block out the sun for the entire planet
@@theshapeshifter0330 The power of the sun, in the palm of my hand!
Excellent and brilliant ✨️
I'm running the 8b parameter model locally and it is extremely impressive how good Llama 3.1 is. It's definitely my new main LLM for the time being
What are your computer specs?
What does that mean, running it locally?
@@neighbor9672 You can run these open source models locally on your computer using Ollama. But only the smaller models, depending on the specs of your PC.
@@neighbor9672 The 8b model is small enough that you can load the model and use it to generate text on most consumer hardware, only needing around 8GB of memory between VRAM and RAM (with a low context size like 2k – even the 8b needs like 90GB of memory at the max 128k(!) context). If the model is quantized, you may be able to squeeze the model into 6GB or even 4GB of memory, although quality would definitely suffer. It means you own the entire process – you can get an open-source UI to interact with the model like any other chatbot, and tweak basically every generation setting to your liking.
The larger Llama models are simply massive in comparison. Most people can only run them by renting time on a workstation GPU or GPU cluster that lives in a server farm somewhere. In that case you have to send your data out to somebody else that owns the hardware.
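The memory figures in the comment above can be sanity-checked with a back-of-the-envelope estimate. This is a rough sketch only: it counts model weights alone, ignoring the KV cache (which grows with context length) and runtime overhead, so real usage is higher.

```python
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough memory estimate for a model's weights alone.

    Ignores KV cache and runtime overhead, so real usage is higher.
    Returns decimal gigabytes.
    """
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# 8B model at common precisions:
print(model_memory_gb(8, 16))   # fp16  -> 16.0 GB
print(model_memory_gb(8, 8))    # 8-bit -> 8.0 GB
print(model_memory_gb(8, 4))    # 4-bit -> 4.0 GB

# 405B model at fp16 -- weights alone are ~810 GB,
# which is why it needs a GPU cluster rather than a desktop:
print(model_memory_gb(405, 16))  # -> 810.0 GB
```

The 4-bit figure lines up with the quantized sizes mentioned above, and the 405B number explains the "800GB of VRAM" joke elsewhere in this thread.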
@@neighbor9672 Running it on your own computer without internet.
It’s very interesting that we’re at the point where it’s just below top-expert level in fields like math and biology
Top-level experts are the top 1%, which means it’s better than 99% of everyone else.
I wonder where we will be in 5 years.
Are we at that point? At least for my beginner university math courses, the free ChatGPT makes a lot of mistakes in math.
Not available in Slovakia.
Seems like an open AI.
I see what you did there 😏
And also dangerous. These cutting edge AI models probably should not be open, as they can be easily used for bad things.
@@ondrazposukie 🙄 Oh no, think of the children!
@@ondrazposukie They are already being used for bad things; more public usage and understanding combats that
@@ondrazposukie Ah yes because the billion dollar companies can do no harm.
Thank you for reading the 92 pages, Dr. 🙏
I think Roblox's assistant AI uses Meta's Llama AI
Only issue I have is that meta ai keeps saying it’s not supported in my country. First time I’ve had this issue since I live in Puerto Rico (US)!
That’s crazy. Firstly, I don’t support regional gatekeeping anywhere regarding things online, but even in that unfortunate case, Puerto Rico should have access to everything that the 50 states and DC have access to… That’s not fair… Hopefully you can get access to it soon!
Still prefer Mistral Nemo 12B over this. Same 128K context and uncensored.
What UI are you running it on?
Links are in the description
2:33 Look at the leftmost point on the graph. The AI is so intelligent it can understand what -1 sugar content means. XD
>405B
Oh boy I can’t wait to run this on my own with my 16 RTX 4090s
Thank you for this wonderful breakdown and resource. Time to go try it! 😎
I want to use this (or any reasonably well-functioning LLM) to finish my project of backpropable transfer curves (amongst my many mad science projects). Hoping it will help make GAI even easier and improve the quality of current modeling techniques.
🤫👍
I want to run a GPT offline; I have an RTX 4090 and 128GB of RAM. What model would perform the best?
Llama 3.1 70b
OpenZuckerberg vs ClosedAI
It's a clown world, but let them surprise us.
I LOVE YOUU❤❤, great video!
insane that my pc can teach me things now offline