on fandom and content policing
While we’re all having a good laugh and/or panic at tumblr’s incompetent censorship implosion, I just want to take this opportunity to draw a parallel to a lot of the recent fandom wank about what content should or shouldn’t be allowed on AO3. Specifically: there are a lot of people who want the Archive to ban particular types of fic, but who have no real understanding of how you would actually implement that in practice.
While there are legitimate arguments to be made about the unwisdom of tumblr’s soon-to-be-forbidden content choices - the whole “female-presenting nipples” thing and the apparent decision to prioritise banning tits over banning Nazis, for instance - the functional problem isn’t that they’ve decided to monitor specific types of content, but that they’ve got no sensible way of enacting their own policies. Quite clearly, you can’t entrust the process to bots: just today, I’ve seen flagged content that runs the gamut from Star Trek: TOS screenshots to paleo fish art to quilts to the entire chronic pain tag to a text post about a gay family member with AIDS - and at the same time, I’ve still been seeing porn gifs on my dash.
It’s absolute chaos, which is what happens when you try to outsource to programs the type of work that can only reliably be done by people - and even then, there’s still going to be bad or dubious or unpopular decisions made, because invariably, some things will need to be judged on a case by case basis, and people don’t always agree on where the needle should fall.
Now: consider that this is happening because tumblr is banning particular types of images. Images, at least, you can kiiiiinda moderate by bots, provided you’re using the bot-process as a filter to cut down on the amount of work done by actual humans, and also provided you’re willing to take a huge credibility hit given the poor initial accuracy of said bots, but: images. Bots can be sorta trained to recognise and sort those, right?
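To make the “bots as a first-pass filter” idea concrete, here’s a minimal sketch of what that kind of triage pipeline generally looks like - every name, score and threshold below is invented for illustration, not pulled from tumblr’s actual system:

```python
# Hypothetical two-stage moderation pipeline: a classifier score decides
# whether a post is auto-flagged, queued for a human, or left alone.
# All names and thresholds here are illustrative, not tumblr's real values.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    nsfw_score: float  # 0.0-1.0, from some image classifier upstream

AUTO_FLAG = 0.95      # above this, the bot acts alone - cheap, but error-prone
NEEDS_REVIEW = 0.60   # between the thresholds, a human gets the final say

def triage(post: Post) -> str:
    """Route a post based on how confident the classifier is."""
    if post.nsfw_score >= AUTO_FLAG:
        return "flagged"
    if post.nsfw_score >= NEEDS_REVIEW:
        return "human_review"  # the bot only narrows the haystack
    return "published"

queue = [Post(1, 0.99), Post(2, 0.72), Post(3, 0.05)]
print([(p.post_id, triage(p)) for p in queue])
# [(1, 'flagged'), (2, 'human_review'), (3, 'published')]
```

The whole model only works if that middle bucket actually lands in front of humans; skip that step, or set the auto-flag bar too low, and you get paleo fish art in the banned pile.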
But the kind of AI sophistication you’d need to moderate all the content on a text-based site like AO3? That… yeah. That literally doesn’t exist, and going by tags and keywords wouldn’t help you either, because tags and keywords alone give you no handy way to tell whether a given work is depicting something, warning for it, or merely discussing it. Posts about content generated by neural nets are hilarious precisely because our AI isn’t there yet, and based on what we’ve seen so far, we won’t be there for a good long while.
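For a sense of why tag- and keyword-matching falls over, here’s a deliberately naive sketch - the banned list and the example tags are all made up - showing that a filter like this fires identically whether a work depicts the thing, warns for it, or is a meta essay about it:

```python
# Naive keyword/tag filter - everything here is invented for illustration.
BANNED_KEYWORDS = {"abuse", "torture"}

def flagged(tags: list[str]) -> bool:
    """Flag a work if any banned keyword appears anywhere in its tags."""
    return any(word in tag.lower() for tag in tags for word in BANNED_KEYWORDS)

depicts   = ["Graphic Depictions Of Violence", "Torture"]
warns_for = ["Past Abuse", "Recovery", "Hurt/Comfort"]
discusses = ["Meta", "Essay About Abuse In Fandom"]

for tags in (depicts, warns_for, discusses):
    print(tags, "->", flagged(tags))
# All three print True: keyword presence tells you nothing about usage.
```

Keyword presence is trivially easy to check and almost useless as a judgment; the judgment is the part that needs a human.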
It’s a point I’ve made again and again, but I’m going to reiterate it here: it’s always easy to conjure up the most obvious, extreme and clear-cut examples of undesirable content when you’re discussing bans in theory, but in practice, you need to have a feasible means of enacting those rules with some degree of accuracy, speed and accountability that’s attainable within both budget and context, or else the whole thing becomes pointless.
On massive sites like AO3 and tumblr, the considerable expense of monitoring so much user-generated content with paid employees is, to a degree, obviated by the concept of tagging and blocking, the idea being that users can curate and control their own experience to avoid unpleasant material. There still needs to be oversight, of course - at absolute minimum, a code of conduct and a means of reporting those who violate it to a human authority in a position to enforce said code - but the thing is, given how much raw content accrues on social media and at what speed, you really need these policies to be in place, and actively enforced, from the get-go: otherwise, when you finally do start trying to moderate, you’ll have to wade through the entire site’s backlog while also trying to keep abreast of new content.
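Mechanically, the reader-curation half of that bargain is not much more than a mute list applied before anything renders - a rough sketch, with every tag and title below hypothetical rather than drawn from any real site:

```python
# Hypothetical reader-side tag blocking: works whose tags intersect the
# user's blocklist never reach the page. Names are illustrative only.
def visible(works: list[dict], blocked_tags: set[str]) -> list[dict]:
    """Keep only the works that share no tags with the user's blocklist."""
    return [w for w in works
            if not blocked_tags & {t.lower() for t in w["tags"]}]

works = [
    {"title": "Fluffy Coffee Shop AU", "tags": ["Fluff", "Alternate Universe"]},
    {"title": "Grimdark Epic", "tags": ["Major Character Death", "Angst"]},
]
print([w["title"] for w in visible(works, {"major character death"})])
# ['Fluffy Coffee Shop AU']
```

Cheap to run, and it scales with the reader rather than with a moderation team - but it only works if creators tag honestly, which is exactly where the code of conduct and the reporting process come in.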
Facebook, which is a multi-billion dollar corporation, can afford to have paid human moderators in place for assessing content violations instead of relying on bots; however, it is also notoriously terrible at both following its own standards and setting them in the first place. To take an example salient to the tumblr mess, Facebook has an ongoing problem with how it handles breastfeeding posts, while its community standards regarding what counts as hate speech are, uhhh… Not Great. Twitter has similarly struggled with bot accounts proliferating during multiple recent elections and with the seemingly simple task of deplatforming Nazis - not because they can’t, but because they don’t want to take a quote-unquote political stance, even for the sake of cleaning house.
It’s also because, quite frankly, neither Facebook nor Twitter was originally conceived of as the kind of entity that would one day be ubiquitous and powerful enough to be used to sway elections; and when that capability was first realised by those with enough money and power to take advantage of it, there were no internal safeguards to stop it happening, and not nearly enough external comprehension of or appreciation for the risks among those in positions of authority to impose some in time to make a difference. Because even though time spent scrolling through social media passes like reverse dog years - which is to say, two hours can frequently feel like ten minutes - its impact is such that we fall into the trap of thinking that it’s been around forever, instead of being a really recent phenomenon. Facebook launched in 2004, YouTube in 2005, Twitter in 2006, tumblr in 2007, AO3 in 2009, Instagram in 2010, Snapchat in 2011, Tinder in 2012, Discord in 2015. Even LiveJournal, that precursor blog-and-fandom space, only began in 1999, with the Strikethrough purge happening in 2007. Long-term, we’re still running a global beta on How To Do Social Media Without Fucking Up, because this whole internet thing is still producing new iterations of old problems that we’ve never had to deal with in this medium before - or if so, then not on this scale, within whatever specific parameters apply to each site, in conjunction with whatever else is happening that’s relevant, and with whatever tools or budget we have to hand. It is messy, and I really don’t see that changing anytime soon.
All of which is a way of saying that, while it’s far from impossible to moderate content on social media, you need to have actual humans doing it, a clear reporting process set up, a coherent set of rules, a willingness to enforce those rules consistently - or at least to explain the logic behind any changes or exceptions and then stand by them, too - and the humility to admit that, whatever you planned for your site to be at the outset, success will mean that it invariably grows beyond that mandate in potentially strange and unpredictable ways, which will in turn require active thought and anticipation on your part to successfully deal with.
Which is why, compared to what’s happening on other sites, the objections being raised about AO3 are so goddamn frustrating - because, right from the outset, it has had a clear set of rules: it’s just not one that various naysayers like. Content-wise, the whole idea of the tagging system, as stated in the user agreement, is that you enter at your own risk: you are meant to navigate your own experience using the tools the site has provided - tools it has constantly worked to upgrade as the site traffic has boomed exponentially - and there’s a reporting process in place for people who transgress in other ways. AO3 isn’t perfect - of course it isn’t - but it is coherent, which is exactly what tumblr, in enacting this weird nipple-purge, has failed to be.
Plus and also: the content on AO3 is fictional. As passionate as I am about the impact of stories on reality and vice versa, this is nonetheless a salient distinction to point out when discussing how to manage AO3 versus something like Twitter or tumblr. Different types of content require different types of moderation: the more variety in media formats and subject matter and the higher the level of complex, real-time, user-to-user interaction, the harder it is to manage - and, quite arguably, the more managing it requires in the first place. Whereas tumblr has reblogs, open inboxes and instant messaging, interactions on AO3 are limited to comments and that’s it: users can lock, moderate or throw their own comment threads open as they choose, and that, in turn, cuts down on how much active moderation is necessary.
tl;dr: moderating social media sites is actually a lot harder and more complicated than most people realise, and those lobbying for tighter content control in places like AO3 should look at how broad generalisations about what constitutes a Bad Post are backfiring now before claiming the whole thing is an easy fix.