this would be the number one @farcaster client today — everyone should take notes lol
congratulations to last week’s winners! 🥇 @serendipity 🥈 @ketan 🥉 @campster 🎊🎉🥳
or maybe we can stick the analyzer in https://operator.io’s langchain flow!
… but we'll probably need a django server to be able to use this in production 😕 @limone.eth let me know if you want to collab
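whether it ends up in a langchain flow or behind its own endpoint, the serving core is the same tuned pipeline. a rough single-file django sketch of what i mean (the ./farglot-model path is a placeholder, not a real checkpoint):

```python
# sentiment_server.py: a minimal single-file django app (sketch only).
# assumes `pip install django transformers torch` and a tuned model
# saved at ./farglot-model, which is a placeholder path.
from django.conf import settings
from django.http import JsonResponse
from django.urls import path
from transformers import pipeline

settings.configure(DEBUG=True, SECRET_KEY="dev-only", ROOT_URLCONF=__name__)

analyzer = pipeline("text-classification", model="./farglot-model")  # loaded once at startup

def analyze(request):
    # GET /analyze/?text=gm%20farcaster -> {"label": "...", "score": ...}
    return JsonResponse(analyzer(request.GET.get("text", ""))[0])

urlpatterns = [path("analyze/", analyze)]

if __name__ == "__main__":
    from django.core.management import execute_from_command_line
    execute_from_command_line(["sentiment_server.py", "runserver"])
```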
👋 i'm working on this with https://github.com/michaelhly/FarGlot. we'll tune a pretrained BERT model on the farcaster corpus and expose a sentiment analyzer based on the tuned model. should have something out next week!
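the tuning step will probably look something like this (a hugging face transformers sketch; the csv file, columns, and label count are stand-ins, not the actual FarGlot setup):

```python
# fine-tune a pretrained BERT for sentiment on farcaster casts (sketch).
# assumes a csv with `text` and `label` columns; 3 labels = neg/neutral/pos.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

ds = load_dataset("csv", data_files="farcaster_casts.csv")  # placeholder file
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="farglot-model", num_train_epochs=3),
    train_dataset=ds["train"],
    tokenizer=tokenizer,  # lets the Trainer pad batches dynamically
)
trainer.train()
trainer.save_model("farglot-model")
```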
how do you know if your prompting ability improved? Is it actually like https://warpcast.com/pixel/0xcdf80f
part 2 of the series: backpropagation, hyperparameters, and evaluation metrics. writing this gave me a basic understanding of how an LLM works, instead of treating a language model as a complete black box. let me know if it helps you too! https://michaelhly.com/posts/tune-llm-two
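if you want a feel for the backprop part before reading, here's a toy one-weight example in pytorch (my own illustration, not code from the post):

```python
import torch

# one "parameter", one training example: learn w so that y = w * x
w = torch.tensor(2.0, requires_grad=True)
x, y_true = torch.tensor(3.0), torch.tensor(12.0)

y_pred = w * x                  # forward pass
loss = (y_pred - y_true) ** 2   # evaluation metric: squared error
loss.backward()                 # backprop: dloss/dw = 2 * (w*x - y_true) * x = -36

lr = 0.01                       # hyperparameter: learning rate
with torch.no_grad():
    w -= lr * w.grad            # one gradient-descent step
print(w.item())                 # 2.36, moving toward the ideal w = 4.0
```

an LLM is the same loop with billions of weights and a cross-entropy loss instead of squared error.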
for example: the pretrained model was trained on corpus X, but my dataset has some variance from it, and i want to tune the model to adjust for that variance
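a cheap sanity check before tuning: score the off-the-shelf model on a small labeled sample of your own data, and only tune if the numbers are bad. sketch (the sample pairs here are made up):

```python
from transformers import pipeline

clf = pipeline("sentiment-analysis")  # default pretrained checkpoint

# a handful of hand-labeled examples from *your* dataset (made up here)
sample = [("gm! loving this new client", "POSITIVE"),
          ("this cast is terrible", "NEGATIVE")]

hits = sum(clf(text)[0]["label"] == label for text, label in sample)
print(f"off-the-shelf accuracy: {hits / len(sample):.0%}")  # low => worth tuning
```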
yes. so this is the hardest part: defining the outputs/results you want the model to tune towards.
yes, for generative use cases it's hard to beat openai. i believe people usually tune for niche/task-specific use cases.
a) just exploration b) i'm not sure what you mean by limit, but hugging face has pretrained llamas that you can grab off the shelf: https://huggingface.co/meta-llama. my belief is that you should tune a model to find a fit for your dataset ... otherwise you probably don't need to ...
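grabbing one is a couple of lines (assuming you've accepted meta's license for the checkpoint and run `huggingface-cli login`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# pull a pretrained llama straight off the shelf
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tok("gm farcaster,", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```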
oops. if anyone else hits the 404, here's the fixed link: https://michaelhly.com/posts/tune-llm-one
for the prompt engineering crowd, i wonder how they're measuring the quality of their prompts ...
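one way i could imagine doing it: a tiny eval harness that scores each prompt variant against a labeled set. sketch below, where `complete` is a stand-in for whatever model/API call you'd actually use:

```python
from typing import Callable

def eval_prompt(template: str, cases: list[tuple[str, str]],
                complete: Callable[[str], str]) -> float:
    """Exact-match accuracy of a prompt template over (input, answer) pairs."""
    hits = sum(complete(template.format(input=inp)).strip() == ans
               for inp, ans in cases)
    return hits / len(cases)

# toy labeled set and a fake model so the sketch runs standalone
cases = [("2+2", "4"), ("3*3", "9")]
fake_complete = lambda prompt: "4" if "2+2" in prompt else "9"

print(eval_prompt("answer with just the number: {input}", cases, fake_complete))  # 1.0
```

comparing that number across prompt variants beats eyeballing outputs.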
i'm figuring out how to train an LLM and documenting each step in this blog series. for anyone who's also curious, here's part 1: https://michaelhly.com/posts/train-llm-one
i'm learning too! writing a blog series documenting everything along the way: https://michaelhly.com/posts/train-llm-one
i described the issue i ran into here: https://github.com/farcasterxyz/hub-monorepo/issues/1250