Advanced
In reply to @samantha
2/1/2024

My q to the philosophy channel is: how do you build an uncensored social platform at scale when there is harm such as CP, sextortion, scams, bots, etc.? In this context, harm is subjective. I don’t mean harm = illegal. CP is illegal; sending 1000 $DEGEN and getting 10k back is a scam but not illegal. They are both bad.

Philosophy
In reply to @samantha
2/1/2024

You've tipped Degen! Your daily $DEGEN tip allowance is now 10110. Degen isn't just a token – it's the heart of a vibrant community of crypto enthusiasts, developers, and artists. Thanks for playing a crucial role in Airdrop 2!

Philosophy
In reply to @samantha
gökhan 🧬💾🚀@gokhan
2/1/2024

currently my (or, in the future, our) only solution for spam posts is a simple guideline to cut off the noise. i myself keep getting bullyish DMs from people i kindly try to show how to make the best of the channel without patronizing them. reading this in the channel specifics but i get the general gist, & been thinking

Philosophy
In reply to @samantha
netop://ウエハ@netopwibby.eth
2/1/2024

I don’t think this is possible. Even if you empower users with the most comprehensive anti-harassment tools, there’s still the matter of keeping them on rails. That is, you need strong defaults while also periodically reminding them to check their settings. Normies aren’t delving into settings though…we are.

Philosophy
In reply to @samantha
Brad Barrish@bradbarrish
2/1/2024

Like so many things, I think this comes down to incentives, first and foremost. When your main business goal is to drive engagement vs. say, caring about your community, that’s a problem and it will remain a problem.

Philosophy
In reply to @samantha
2/1/2024

@christin @tldr would love to get your thoughts on this if you are open to it. I know it’s a heavy topic but I am so curious, if you have the capacity.

Philosophy
In reply to @samantha
2/1/2024

censor at the client all you want. just let the protocol be permissionless, both in writing and reading (many clients). e.g. email: google can kick me off gmail but no one can kick me off SMTP (this is actually a flawed example, because the spam-prevention wars centralized email too)

Philosophy
In reply to @samantha
Ben 🎩@benersing
2/1/2024

Optionality. Put another way: freedom to “vote” with one’s feet and move on to another node without sacrificing your data / social ties. Permissionlessness at the protocol level is the solution.

Philosophy
In reply to @samantha
grin@grin
2/1/2024

spent a bunch of time thinking about this at lbry. our best answer is that moderation should happen in layers and be opt-in. at the protocol layer, it's least moderated: that's either no moderation at all, or maybe rare and low-resolution decisions (e.g. slashing validators, kicking out nodes, etc.)
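The layered, opt-in scheme described above can be sketched roughly as follows. This is a hedged illustration, not LBRY's actual design: `Post`, `protocol_layer`, and the filter names are all hypothetical.

```python
# Sketch of layered, opt-in moderation: a minimal protocol layer,
# with extra filters stacked on top only if the user opts in.
from dataclasses import dataclass, field
from typing import Callable

@dataclass(frozen=True)
class Post:
    author: str
    text: str

# A moderation layer is a predicate: True means the post passes.
Layer = Callable[[Post], bool]

def protocol_layer(post: Post) -> bool:
    # Least moderated: only rare, low-resolution decisions,
    # e.g. dropping posts from slashed or kicked-out nodes.
    banned_nodes = {"slashed-node"}
    return post.author not in banned_nodes

def spam_layer(post: Post) -> bool:
    # An opt-in, client-side filter (hypothetical rule).
    return "free $degen" not in post.text.lower()

@dataclass
class Client:
    # Users opt in to extra layers stacked on the protocol layer.
    opt_in: list[Layer] = field(default_factory=list)

    def feed(self, posts: list[Post]) -> list[Post]:
        layers = [protocol_layer, *self.opt_in]
        return [p for p in posts if all(layer(p) for layer in layers)]

posts = [
    Post("alice", "gm"),
    Post("bob", "FREE $DEGEN, click here"),
    Post("slashed-node", "hello"),
]
lenient = Client()                      # protocol moderation only
strict = Client(opt_in=[spam_layer])    # user opted into spam filtering
print([p.author for p in lenient.feed(posts)])  # ['alice', 'bob']
print([p.author for p in strict.feed(posts)])   # ['alice']
```

The point of the design: the protocol's decisions are unavoidable but rare, while every stricter rule is a choice the reader makes for their own feed.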

Philosophy
In reply to @samantha
2/1/2024

i think x is a good example of how it's not possible. elon shouts about free speech on x every day but ppl get suspended and banned constantly. the ppl who run platforms are always going to have to moderate/censor because they have lines they can't cross – either personal principles or bottom lines for the business

Philosophy
In reply to @samantha
Ivy 🌿 q/dau@ivy
2/1/2024

@cassie might have some thoughts

Philosophy
In reply to @samantha
Varun Kumar@vkcs
2/1/2024

I think the answer lies in studying these platforms as complex adaptive systems with emergent phenomena. Maybe we should put each element of these platforms through a social analysis, like how a civil engineer puts steel rods through a stress analysis. PS - https://twitter.com/csvarun26396/status/1752953074128490635

Philosophy
In reply to @samantha
Elad@el4d
2/1/2024

Sadly they will forever fail as long as they stay walled gardens. If they want users to have a better experience they have to open up to different clients, which they probably won't do because it might destroy their rev streams. We are stuck in this Moloch dynamic.

Philosophy
In reply to @samantha
links@links
2/1/2024

Personally I think that police are responsible for enforcing laws, not technology companies. If an uncensored social platform creates gaps for illegal behaviour, police can do police work to enforce the law. Forcing new tech to do the enforcing stifles its potential. If police have issues, they should elevate their capabilities.

Philosophy
In reply to @samantha
RoboCopsGoneMad@robocopsgonemad
2/1/2024

It is an inevitable tension. I don't have a good answer, but I have a good example: MetaFilter. I've been a member of that community for 25+ years and it remains the highest signal-to-noise ratio for its volume. Their secret? Paid human moderators and a $5 membership fee.

Philosophy
In reply to @samantha
Max Miner@mxmnr
2/1/2024

a possible approach is to build a system that gives each individual a personalized layer of censorship (e.g. a guardian AI). You’d also need to give communities (e.g. channels) a method to set their own behavior rules. You aren’t restricting the platform; you’re empowering individuals to dictate what they want protection from.
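One way to picture this: filtering is the union of a channel's community rules and an individual's own "guardian" preferences, with the platform itself hiding nothing. A minimal sketch, where `tags`, `banned_tags`, and `blocked_tags` are all assumed names and the tag labels are hypothetical classifier output:

```python
# Per-user guardian preferences + per-channel community rules;
# no platform-wide restriction at all.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Post:
    text: str
    tags: frozenset[str]  # assume an upstream classifier tags posts, e.g. {"scam"}

@dataclass
class Channel:
    name: str
    banned_tags: set[str] = field(default_factory=set)  # rules set by the community

@dataclass
class User:
    blocked_tags: set[str] = field(default_factory=set)  # personal guardian settings

def feed(user: User, channel: Channel, posts: list[Post]) -> list[Post]:
    # Hide a post only if it matches the channel's rules OR the user's
    # own preferences; the platform itself takes no position.
    hidden = channel.banned_tags | user.blocked_tags
    return [p for p in posts if not (p.tags & hidden)]

posts = [
    Post("send 1000 $DEGEN, get 10k back", frozenset({"scam"})),
    Post("new governance proposal", frozenset()),
    Post("angry rant", frozenset({"harassment"})),
]
channel = Channel("philosophy", banned_tags={"scam"})
user = User(blocked_tags={"harassment"})
print([p.text for p in feed(user, channel, posts)])  # ['new governance proposal']
```

A user in a stricter channel sees less; the same user in a lax channel sees everything their own guardian allows.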
