Advanced
Greg@greg
11/7/2023

I haven't been following the AI space very closely, especially outside of OpenAI stuff. How far behind are open source alternatives? I've heard about stuff like llama2 supposedly being decent for chat but I'm assuming the agents/function calling and whatnot is a different story?

AI
In reply to @greg
11/7/2023

none of it is really production-level. then again, GPT-4-Turbo's 128k context is no surprise, given Yarn-Mistral already has tenuous parity. really a question of capital cost, since there's a wide variety of performance boosts that can be folded into one model within a stack like Autogen + MemGPT + Voyager.

In reply to @greg
Neokry@neokry
11/7/2023

agents and function calling are external to the LLM, I think. you can have a library like langchain do things like this for you and pick which model to use as the “backend”
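(to illustrate the point: "function calling" is mostly an orchestration loop that lives outside the model. below is a minimal, hypothetical sketch in plain Python — the `fake_llm` stub stands in for whatever backend you pick, and the names are made up, not any real library's API)

```python
import json

# Hypothetical sketch: the "agent" layer asks the model which tool to call,
# then parses the reply and dispatches to a local function. The model itself
# only emits text; everything else is ordinary glue code.

TOOLS = {
    "get_weather": lambda city: f"72F and sunny in {city}",
}

def fake_llm(prompt: str) -> str:
    # Stand-in for any backend (OpenAI, llama2, ...). A real model would
    # decide the tool call; here it is hardcoded for the demo.
    return json.dumps({"tool": "get_weather", "args": {"city": "Tokyo"}})

def run_agent_step(prompt: str) -> str:
    reply = json.loads(fake_llm(prompt))   # parse the model's structured reply
    tool = TOOLS[reply["tool"]]            # look up the requested tool
    return tool(**reply["args"])           # execute it locally

print(run_agent_step("What's the weather in Tokyo?"))
# -> 72F and sunny in Tokyo
```

swap `fake_llm` for a real backend and the rest of the loop doesn't change, which is basically what langchain does for you.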

In reply to @greg
Warpmaster General@my
11/7/2023

It's largely a question of UX more than capability. ChatGPT (+ API) is turnkey, cheap, and convenient. If you want to roll your own via LLaMA, Langchain, et al., you're looking down the barrel of a week or two just to get it running, let alone fine-tuning it to a level where its output is actually usable (not even useful).

In reply to @greg
Katsuya@kn
11/7/2023

It depends on how you measure it and on the use case; e.g. some OS models claim they're better than GPT-4 on some benchmarks. But in general, my intuition is that OpenAI is ~1 year ahead of OS models. My approach to building: just go with OpenAI if possible, then optimize with other options only if you absolutely need to.
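(one cheap way to follow that approach: hide the provider behind a single function signature so swapping OpenAI out for an OS model later is a one-line change. a minimal sketch — the backend functions are stubs and every name here is illustrative, not a real API)

```python
from typing import Callable

# Any completion backend is just "prompt in, text out".
Completer = Callable[[str], str]

def openai_backend(prompt: str) -> str:
    # Stub; a real version would call the OpenAI API here.
    return f"[openai] {prompt}"

def local_llama_backend(prompt: str) -> str:
    # Stub; a real version would call a locally hosted llama2.
    return f"[llama2] {prompt}"

def answer(prompt: str, complete: Completer = openai_backend) -> str:
    # App code depends only on the Completer interface, not the provider.
    return complete(prompt)

print(answer("hello"))                       # -> [openai] hello
print(answer("hello", local_llama_backend))  # -> [llama2] hello
```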

In reply to @greg
Gabriel Ayuso — web/acc@gabrielayuso.eth
11/7/2023

API providers (should) spend considerable resources on model quality and tuning, so when you use such APIs you get the benefit of all of that. If you just use an open-source model, you'll need to do more work on your own to get it to do what you want, and do a lot of output sanitization yourself.
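(a small example of the kind of output sanitization meant here: open models often wrap the JSON you asked for in prose or code fences, so you end up digging the structured part out yourself. a hypothetical sketch using only the stdlib)

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Pull the first {...} object out of a noisy model reply."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

# Typical noisy reply from an untuned open model:
noisy = 'Sure! Here is the result:\n```json\n{"sentiment": "positive"}\n```'
print(extract_json(noisy))
# -> {'sentiment': 'positive'}
```

a hosted API with structured-output support does this (and stricter validation) for you.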

In reply to @greg
Minh Do@minh
11/7/2023

This is where benchmarks like huggingface’s leaderboard come in handy: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

In reply to @greg
Proton@
11/7/2023

Just try!

In reply to @greg
nat.eth@nat
11/7/2023

OpenAI functions and native multimodal support are going to be big drivers of lock-in for many orgs.

In reply to @greg
Jason Goldberg @betashop.eth
11/7/2023

From the Airstack perspective: we've been working on a PoC with llama2 fine-tuned to our use case, to see if it can outperform gpt4. Will let you know by end of this week!
