Advanced
Dan Romero@dwr
7/18/2023

For AI folks, how big of a deal is this? Scale of 1-10? cc @pushix @theodormarcu @scharf https://twitter.com/ylecun/status/1681336284453781505

In reply to @dwr
7/18/2023

It’s yuge! We were literally talking about a better AI at our all-hands today!!

In reply to @dwr
Theodor Marcu@theodormarcu
7/18/2023

Insanely big deal. Costs are coming down very fast. That being said, it's not as big if you know that many companies were already using Llama 1 despite it being "non-commercial" 🤫

In reply to @dwr
Max Miner@mxmnr
7/18/2023

Seems pretty huge with the combined 'commercial use' designation and models at 7B, 13B, and 70B parameters, all available via Hugging Face etc. They're providing a meaningful alternative to OpenAI (Microsoft-backed) and Anthropic (Google-backed).
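
For context, a minimal sketch of what that availability looks like in practice: loading one of the released checkpoints through the Hugging Face transformers library. The model ID and generation settings here are illustrative, and the repos are gated behind Meta's license acceptance.

```python
# Minimal sketch: load a Llama 2 checkpoint from the Hugging Face Hub.
# Assumes `transformers` (plus `accelerate` for device_map) is installed
# and the license was accepted on the gated model page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # 13b/70b variants follow the same pattern

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "How big a deal is an openly licensed commercial LLM?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```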

In reply to @dwr
m_j_r@m-j-r
7/18/2023

🌶️ Hot take: GPT-4 is allegedly a mixture-of-experts (MoE) model that can't be run openly on consumer devices; maybe it's possible on an architecture like Petals. Point being, pretty much all embodied-agent research depends on the emergent reasoning of that architecture versus LLMs like Llama/Orca/etc. Chat apps will be more competitive, though.
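
For anyone curious about the Petals route mentioned above, a hedged sketch using the petals client library, which shards a model's transformer blocks across a volunteer swarm. The model ID and API shape are as of mid-2023; treat the details as assumptions.

```python
# Sketch: distributed inference over a public Petals swarm.
# Assumes the `petals` and `transformers` packages are installed.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "meta-llama/Llama-2-70b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Each transformer block is served by a different peer in the swarm;
# only the embeddings and output head run locally.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Can a 70B model run without a datacenter?", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```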

In reply to @dwr
gm8xx8@gm8xx8
7/18/2023

Scale of models, performance, cost, & open-source licensing… yes, this is a big deal.

In reply to @dwr
James Young@jamesyoung
7/18/2023

It's more about MS posturing: OpenAI, Meta, Nvidia, GitHub. Why Azure? (The roots go back to Satya.) https://twitter.com/alex_valaitis/status/1681348531834044426?s=46

In reply to @dwr
Nicholas Charriere@pushix
7/18/2023

Very big deal. My personal bet is that 2 years from now most people are running fine-tuned Llamas and OpenAI's market share takes a big hit.

In reply to @dwr
7/18/2023

We can build a decentralised LLM network with this, right? 😳

In reply to @dwr
MxVoid@mxvoid
7/18/2023

It’s a BFD, about a 10. Open sourcing these tools (with commercial use!) lets people tinker without worrying about APIs, tokens, getting their access cut off due to unexpected downtime, etc. Allows fine-tuning for specific use cases, e.g., training it on your own codebase for a better, customized AI assistant.
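
As a rough sketch of the fine-tuning workflow being described, here is what a parameter-efficient LoRA setup might look like via the peft library. The hyperparameters, target modules, and dataset step are illustrative assumptions, not a tested recipe.

```python
# Sketch: LoRA fine-tuning of Llama 2 on your own data (e.g., a codebase).
# Assumes `transformers` and `peft`; all values are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Train only small low-rank adapter matrices instead of all 7B weights.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights

# From here: tokenize your own files into a dataset and train with
# transformers.Trainer (or any training loop), then save just the adapter:
# model.save_pretrained("my-codebase-adapter")
```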

In reply to @dwr
aerique@aerique
7/18/2023

Time to replace Llama v1 with v2 on my phone (if possible 😅). https://mirror.xyz/xanny.eth/TBgwcBOoP9LZC6Mf570fG8VvZWhEn_uWZPHy3axIpsI
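
On-device usually means a quantized build through llama.cpp; a hedged sketch via the llama-cpp-python bindings follows. The model path and quantization level are assumptions, and an actual phone deployment would typically use llama.cpp directly.

```python
# Sketch: run a 4-bit quantized Llama 2 locally with llama-cpp-python.
# Assumes a GGML-converted, q4_0-quantized model file already exists.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-7b-chat.q4_0.bin", n_ctx=2048)
out = llm("Q: Can Llama 2 run on a phone? A:", max_tokens=48)
print(out["choices"][0]["text"])
```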

In reply to @dwr
PhiMarHal@phimarhal
7/18/2023

Solid 9. It's not GPT-3.5 tier yet, let alone 4. But it's a solid step up from previous open-source models. The potential here lies in open-source fine-tuning.

In reply to @dwr
Giuliano Giacaglia@giu
7/18/2023

This is pretty big news, given that OpenAI's edge has now been reduced by a fair amount.

In reply to @dwr
Ben Scharfstein@scharf
7/19/2023

I think it means GPT-3.5 won't get used as much; GPT-4 can still do things that Llama 2 can't. It's not *that* big a deal, though, because I think everyone expected this to happen soon.

In reply to @dwr
Venkatesh Rao ☀️@vgr
7/19/2023

Big deal for commercialization. Lots of teams were previously using Llama for research but not product, and switching to other weaker weight sets with clean rights. This should unleash a bunch of products from limbo.

In reply to @dwr
Eric Platon@ic
7/19/2023

Stepping back from the high mark: big potential, but the rumored secret architecture changes that led to GPT-4 may well make all this deprecated early (not obsolete, but…). Meaning that to reach the more appealing GPT-4 "level", the v2 lineage may need an overhaul, and may not run easily on "reasonable" hardware.

In reply to @dwr
BrightFutureGuy@bfg
7/19/2023

Unfortunately it's as big as it gets, so 12 🫨 cos Microsoft & Zuck just made themselves more bulletproof ☹️

In reply to @dwr
Daniel Lombraña@teleyinex
7/19/2023

The catch is the hardware you might need to run this fast and properly. While open source is the way to go, the catch has always been exactly that. Google has been doing it for years with their TensorFlow solution.

In reply to @dwr
j4ck • icebreaker@j4ck
7/19/2023

@web3pm

In reply to @dwr
Jack Storment@jstorment
7/19/2023

Solid 8