In reply to @kazi
blobs@blobs
8/29/2023

daddy, buy mee sum GPUs plz 🥺

In reply to @ace
blobs@blobs
8/27/2023

this would be the number one alternative @farcaster client today

In reply to @ace
blobs@blobs
8/27/2023

this would be the number one @farcaster client today — everyone should take notes lol

In reply to @fafa2w2
blobs@blobs
8/27/2023

someone didn’t know how to code android 😕

In reply to @0703
blobs@blobs
8/27/2023

😬, not yet

blobs@blobs
8/26/2023

congratulations to last week’s winners! 🥇 @serendipity 🥈 @ketan 🥉 @campster 🎊🎉🥳

In reply to @blobs
blobs@blobs
8/25/2023

or maybe we can stick the analyzer in https://operator.io’s langchain flow!
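for flavor, wrapping the analyzer as a langchain tool could look something like this; the tool name and model checkpoint below are made up for illustration, and operator.io's actual flow may differ:

```python
# minimal sketch of exposing the sentiment analyzer as a langchain tool.
# the checkpoint name is hypothetical, not a published FarGlot model.
from langchain.tools import Tool
from transformers import pipeline

analyzer = pipeline("sentiment-analysis", model="michaelhly/farglot-bert")  # placeholder checkpoint

def analyze_cast(text: str) -> str:
    """Return the predicted sentiment label and score for a cast."""
    result = analyzer(text)[0]
    return f"{result['label']} ({result['score']:.2f})"

cast_sentiment = Tool(
    name="cast_sentiment",
    func=analyze_cast,
    description="Analyzes the sentiment of a farcaster cast.",
)
# the tool could then be handed to whatever agent/chain the flow already uses
```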

In reply to @blobs
blobs@blobs
8/25/2023

… but we'll probably need a django server to be able to use this in product 😕 @limone.eth let me know if you want to collab
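the server piece could be as small as a single django view around the analyzer; the checkpoint and url wiring below are placeholders, not something we've actually built:

```python
# sketch of a minimal django endpoint wrapping the analyzer
from django.http import JsonResponse
from transformers import pipeline

analyzer = pipeline("sentiment-analysis")  # swap in the tuned farcaster model

def sentiment(request):
    """Return the sentiment of the ?text= query parameter as JSON."""
    text = request.GET.get("text", "")
    result = analyzer(text)[0] if text else {}
    return JsonResponse(result)

# urls.py would map something like path("sentiment/", sentiment)
```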

In reply to @gabrielayuso.eth
blobs@blobs
8/25/2023

👋 i'm working on this with https://github.com/michaelhly/FarGlot. we'll tune a pretrained BERT model on the farcaster corpus and expose a sentiment analyzer based on the tuned model. should have something out next week!
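a rough sketch of what that tuning step could look like with hugging face transformers; the dataset file, label set, and hyperparameters below are placeholders, not FarGlot's actual setup:

```python
# minimal sketch of fine-tuning a pretrained BERT for sentiment on a cast corpus
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3  # e.g. negative / neutral / positive
)

# assumes a CSV of casts with "text" and integer "label" columns (hypothetical file)
dataset = load_dataset("csv", data_files="casts_labeled.csv")["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="farcaster-sentiment", num_train_epochs=3),
    train_dataset=dataset,
)
trainer.train()
```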

In reply to @entropybender
blobs@blobs
8/25/2023

ha. do you validate the hashes? what’s the hallucination rate?

In reply to @jachian
blobs@blobs
8/25/2023

how do you know if your prompting ability improved? Is it actually like https://warpcast.com/pixel/0xcdf80f

blobs@blobs
8/25/2023

part 2 of the series on backpropagation, hyperparameters and evaluation metrics. writing this gave me some basic understanding of how an LLM works, instead of thinking of a language model as a complete black box. let me know if it helps you too! https://michaelhly.com/posts/tune-llm-two
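for a taste of the backprop step the post covers, here's a toy pytorch loop; the model, data, and hyperparameter values are illustrative only, not the blog's actual setup:

```python
# tiny illustration of one backpropagation step
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                     # stand-in for a real language model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)   # learning rate is a hyperparameter
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 10)                  # fake batch of 8 examples
y = torch.randint(0, 2, (8,))           # fake labels

logits = model(x)                       # forward pass
loss = loss_fn(logits, y)               # the metric we're minimizing
loss.backward()                         # backpropagation: compute gradients
optimizer.step()                        # update weights from the gradients
optimizer.zero_grad()                   # reset gradients for the next batch
```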

In reply to @blobs
blobs@blobs
8/25/2023

for example: the pretrained model was trained on X corpus, but my data set has some variance, and i want to tune the model to adjust for the variance

In reply to @pixel
blobs@blobs
8/25/2023

yes. so this is the hardest part: defining the output/results you want the model to tune towards. yes, for generative use cases it's hard to beat openai. i believe people usually tune for niche/task-specific use cases.

In reply to @pixel
blobs@blobs
8/25/2023

a) just exploration. b) i'm not sure what you mean by limit, but hugging face has pretrained llamas that you can grab off the shelf: https://huggingface.co/meta-llama. my belief is that you should tune a model to find a fit for your dataset ... otherwise you probably don't need to ...
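grabbing one off the shelf looks roughly like this; the checkpoint below is one of the gated meta-llama repos, so it assumes you've been approved for access on the hub:

```python
# minimal sketch of loading a pretrained llama from hugging face and sampling from it
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "meta-llama/Llama-2-7b-hf"  # gated repo; requires approved access
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "gm farcaster,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```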

In reply to @gabrielayuso.eth
blobs@blobs
8/25/2023

for the prompt engineering crowd, i wonder how they're measuring the quality of their prompts ...
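one simple way to put a number on it: score each prompt variant against a small labeled set. everything below (prompts, examples, the model call) is made up for illustration:

```python
# score prompt variants by accuracy on a tiny labeled set
def accuracy_of_prompt(prompt_template, examples, ask_model):
    """Fraction of examples where the model's answer matches the expected one."""
    correct = 0
    for question, expected in examples:
        answer = ask_model(prompt_template.format(question=question))
        correct += int(expected.lower() in answer.lower())
    return correct / len(examples)

examples = [("is the sky blue?", "yes"), ("is 2 + 2 = 5?", "no")]
prompts = [
    "Answer yes or no: {question}",
    "You are a careful assistant. Answer with yes or no only.\n{question}",
]
# ask_model would wrap whatever LLM you're prompting (openai, llama, etc.)
# for prompt in prompts:
#     print(prompt, accuracy_of_prompt(prompt, examples, ask_model))
```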

blobs@blobs
8/24/2023

i'm figuring out how to train an LLM and documenting each step in this blog series. for anyone who is also curious, here is part 1: https://michaelhly.com/posts/train-llm-one

blobs@blobs
8/24/2023

666 is some token identifier from some language model
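for example, you can look up what an id maps to in a given tokenizer's vocabulary; which string 666 decodes to depends entirely on the model:

```python
# quick look at token ids in a tokenizer's vocabulary (mapping is model-specific)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
print(tokenizer.decode([666]))          # the string token id 666 decodes to in GPT-2
print(tokenizer.encode("hello world"))  # and the ids a phrase encodes to
```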

In reply to @ishika
blobs@blobs
8/24/2023

i'm learning too! writing a blog series documenting everything along the way: https://michaelhly.com/posts/train-llm-one