Incidentally, manual moderation is much easier to do on a federated network where each individual instance doesn’t grow huge. Some people complain that Lemmy isn’t growing to the size of Reddit, but I see that as a feature myself. Smaller communities tend to be far more interesting and are much easier to moderate than giant sites.
Apart from not being that interesting for now, the first line of defence for most is manually-approved sign ups, as far as I can tell.
When the Fediverse grows, I think that weeding out accounts that post slop will be the “easy” part; the hardest part will be to identify the silent bot accounts that do nothing but upvote.
Seems believable. I’m curious, how do Lemmy instances protect themselves from AI slop and bots?
Short answer is active moderation and Anubis.
Manual labor, the Communist Party of China pays us to keep Lemmy free of bots and revisionists.
Alt text: You guys are getting paid?
Manual moderation, and there are some moderation bots that can detect spam.
I vaguely remember kbin allowing you to see who upvoted a particular post, so it might not be too difficult.
Tough to differentiate bots that only vote from human lurkers who only vote.
Yeah, you’d need some graph analysis. Bots will all simultaneously upvote certain things, and over time a pattern should emerge.
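To make the idea concrete, here is a minimal sketch of that kind of co-voting analysis. Everything here is hypothetical: the vote log format, the `suspicious_pairs` helper, and the thresholds are illustrative assumptions, not anything Lemmy or kbin actually implements. The intuition is simply that bot accounts in a ring will repeatedly upvote the same posts within seconds of each other, while human lurkers won’t show that tight correlation.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical vote log: (account, post_id, timestamp_in_seconds).
# In reality this would come from an instance's vote records.
votes = [
    ("bot1", "p1", 100), ("bot2", "p1", 101), ("bot3", "p1", 102),
    ("bot1", "p2", 500), ("bot2", "p2", 500), ("bot3", "p2", 503),
    ("bot1", "p3", 900), ("bot2", "p3", 901), ("bot3", "p3", 899),
    ("alice", "p1", 250), ("alice", "p3", 1400),
    ("bob", "p2", 800),
]

def suspicious_pairs(votes, window=5, min_coincidences=3):
    """Return account pairs that upvoted the same post within
    `window` seconds of each other at least `min_coincidences` times."""
    by_post = defaultdict(list)
    for account, post, ts in votes:
        by_post[post].append((account, ts))

    coincidences = defaultdict(int)
    for entries in by_post.values():
        # Count every pair of accounts that voted close together in time.
        for (a1, t1), (a2, t2) in combinations(sorted(entries), 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                coincidences[tuple(sorted((a1, a2)))] += 1

    return {pair for pair, n in coincidences.items() if n >= min_coincidences}

print(suspicious_pairs(votes))
# Flags the three bot pairs; alice and bob never vote in lockstep with anyone.
```

A real deployment would be fuzzier than this (bots can jitter their timing, so you’d look at correlation over weeks rather than a fixed window), but the pattern-over-time idea is the same.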
They don’t, but they are uninteresting for now.