Anonymity and privacy seem to be at odds with a social platform’s ability to moderate content and control spam.
If users have sufficient privacy and anonymity, then they can simply come back under another identity, or use multiple identities at once.
Are there ways around this? It seems that any method of ensuring a banned user stays off the platform would necessitate the platform knowing information about the user and their identity.
Great points; thanks! On that, there are a couple of ways I could see content moderation involving more personal freedom and choice:
give users more general content-filtering controls. For example, “block all content containing ____ racial slur”. These could be made more sophisticated as well, especially given how quickly open-source language models are improving
give users the ability to follow another user’s content self-moderation choices. A group of users could then share moderation: if one user flags a piece of content or a type of content, the flag applies to everyone following them. The nice part is that this would be extremely fluid, and you could opt out with a single button.
In my opinion, this could lead to better moderation and less of a disconnect between moderators and users.
It does not solve the anonymity issue, but that’s for another comment.
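The “follow another user’s moderation choices” idea above could be sketched roughly like this. Everything here is hypothetical (the class and method names are made up for illustration); it just shows the data flow: per-user flag sets, a follow graph, and a one-button opt-out.

```python
# Hypothetical sketch of shared self-moderation: users flag content,
# and anyone following them inherits those flags. Not a real API.

class ModerationGraph:
    def __init__(self):
        self.flags = {}    # user -> set of flagged content ids
        self.follows = {}  # user -> set of users whose flags they adopt

    def flag(self, user, content_id):
        self.flags.setdefault(user, set()).add(content_id)

    def follow(self, user, other):
        self.follows.setdefault(user, set()).add(other)

    def unfollow(self, user, other):
        # The one-button opt-out: stop inheriting someone's flags.
        self.follows.get(user, set()).discard(other)

    def is_hidden(self, user, content_id):
        # Hidden if the user flagged it, or anyone they follow did.
        if content_id in self.flags.get(user, set()):
            return True
        return any(content_id in self.flags.get(o, set())
                   for o in self.follows.get(user, set()))
```

For example, if alice flags a post and bob follows alice, `is_hidden("bob", post)` becomes true; after `unfollow`, bob sees the post again. The fluidity comes from the fact that flags are never copied, only looked up through the follow graph.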
Those are reasonable options - though I’m pessimistic enough to believe that trolls will eventually outsmart every automated system, so we’d probably want some manual options as well. I wouldn’t say it’s impossible - just that it would require quite a bit of work, and would likely be an ongoing battle to improve your auto-moderator.
It feels like I’m moving the goalposts, so apologies, but your response got me thinking further. The other big advantage I can think of for centralized censorship is that it can actually prevent the hosting of content - which has two benefits: