Image: A painted sign attached to a tree reads “Danger - Beware of Bull”

By Andrew Losowsky, Head of Coral

Since we started our work in 2015, we’ve been studying the history and lessons of online communities from the past 30 years.


We’ve interviewed hundreds of moderators, community members, and people harassed out of online community spaces. We’ve participated in academic studies, conferences, and conversations, and been cited in a draft code of practice from the EU.


As a result of this work, we believe that most community platforms, including Facebook and Twitter, are built on a series of myths about online communities – myths that conveniently justify maximizing growth at all costs, primarily to benefit advertisers and investors.


At Coral, we believe that this needs to change. An online community should first and foremost serve the members of that community. To do this effectively, we must be prepared to sacrifice growth for quality of engagement.


We have chosen to reject the prevailing myths about how to build community software. At the time of writing, more than 60 newsrooms in 13 countries, including the Wall Street Journal and the Washington Post, have chosen to join us on this journey. You can, too.


We’re now ready to share some of what we’ve learned so far, and how we’ve applied it.

Each day this week, I’ll discuss a widely-held myth about online communities on this page, and show how we’ve taken this knowledge to design a different kind of community experience.


I’d love to hear what you think about our approach – you can leave me a note in the comments, and try our platform for yourself.


Quick links:
Myth 1: Everyone should be able to be reached by everyone.
Myth 2: You can trust the crowd.
Myth 3: Communities should be addictive.
Myth 4: Moderation can be performed by algorithms.
Myth 5: Bigger is better.

Myth 1: Everyone should be able to be reached by everyone.

Online communities grow by making new connections – that’s why the first action in most apps is to follow other people.


But it’s also possible to be too accessible. If anyone can contact you, appear in your mentions, or follow everything you say, it becomes easy for a bad actor to abuse you, harass you, stalk you, and generally make your community experience incredibly unpleasant.


In our study of gender nonbinary people of color, women of color, and online commentary, participants talked about constantly running a cost-benefit analysis before joining any online conversation, based on the likelihood that they would be attacked merely for taking part.


If the platform’s design pushes freeform interactions between loosely connected people, its members are also likely to be vulnerable to attack – safety and privacy sacrificed for an ever-increasing network.


 


What makes Coral different:

  1. We don’t allow commenters to see each other’s comment histories
     We’ve seen bad actors stalk people across a community via a ‘comment history’ feature, replying with abuse to every comment someone has written on the site as a way to make them feel afraid and unwelcome. We want to stop that from happening. We show you when someone joined, and we will be adding opt-in profiles. But we don’t want to enable the worst actors on our platform in order to grow engagement.

  2. We don’t have @ mentions
     Other people should not have the right to demand your attention without your permission. We might one day introduce some form of tagging, but only as an opt-in, and not as a core feature.

  3. No Follows
     Platforms that make Following a core function also enable easy stalking and mob harassment by design – while punishing those who choose to opt out, by diminishing their community experience. We’re researching ethical, opt-in ways to be notified of contributions by your favorite commenters without turning those notifications into a tool for stalking and abuse. Until then, we refuse to release this feature in a half-baked way for the sake of growth.

  4. No private messaging
     Our platform is designed for public conversations and interaction. Sites can limit access to subscribers, or to other kinds of users, but we don’t allow one-on-one conversations, which removes a channel for hidden abuse on our platform.

Myth 2: You can trust the crowd.

Most online platforms rely on upvotes or ‘likes’ to automatically surface the best content. The comments that receive the highest number of upvotes are then displayed more prominently, and sometimes pinned to the top of the discussion as the ‘Best comments’.


But while the crowd can be smarter than any individual, online votes are not a trustworthy metric. On some platforms, they can easily be generated by bots. Elsewhere, people are recruited to upvote certain comments as part of deliberate political astroturfing, precisely because of the prominence that most systems give to the ‘best’ comments.


Some platforms also rely on a calculation of upvotes minus downvotes to choose the top comments to highlight. The problem is that downvotes can encourage mobbing and more negative behavior (though a recent study on Reddit found a much weaker correlation; more research is needed here, though one signal worth paying attention to is that Facebook backtracked from its own flirtation with downvotes).
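
To make the point about gameability concrete, here is a minimal sketch – our own illustration, not any platform’s actual code, with hypothetical names and numbers – of a naive ‘upvotes minus downvotes’ ranking, and of how a small bot farm can push a chosen comment to the top.

```typescript
// Minimal illustration (not any platform's actual code) of why ranking
// comments by a raw "upvotes minus downvotes" score is easy to game.
// All names and numbers here are hypothetical.

interface ThreadComment {
  id: string;
  text: string;
  upvotes: number;
  downvotes: number;
}

// Naive ranking: sort by net score, highest first.
function rankByVotes(comments: ThreadComment[]): ThreadComment[] {
  return [...comments].sort(
    (a, b) => (b.upvotes - b.downvotes) - (a.upvotes - a.downvotes)
  );
}

const thread: ThreadComment[] = [
  { id: "a", text: "Thoughtful, well-argued reply", upvotes: 14, downvotes: 1 },
  { id: "b", text: "Astroturfed talking point", upvotes: 3, downvotes: 2 },
];

// A small bot farm upvotes the astroturfed comment...
thread[1].upvotes += 200;

// ...and the naive ranking now surfaces it as the "best" comment.
console.log(rankByVotes(thread)[0].id); // prints "b"
```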


The ‘Like’ button itself has also been called into question. A small but fascinating study from the Center for Media Engagement found that participants mostly clicked ‘Like’ on opinions with which they already agreed. If they disagreed with a comment, no matter how well argued or reasonable it appeared, they were much less likely to ‘Like’ it. ‘Liking’ seems connected to our core sense of who we are and what we believe.


Raw counts, and even algorithmic weightings of votes, can be gamed in attempts to manipulate public consciousness and move the Overton Window. Voting is at the core of a functioning democracy, but on the internet, nobody knows why you clicked that reaction – and nobody knows if you’re 100,000 bots.


 

What makes Coral different:

  1. Top comments are chosen by journalists, not votes
     The first comments you’ll see in a Coral community are those selected to be Featured by journalists or community moderators. After all, a key function of journalism is curating information that is worth your time, so why should the comments be any different? Users can still sort comments by the number of reactions they receive, but this is a much less prominent and less frequent activity. We are currently working on a variety of ways to help surface promising comments for moderators to select.

  2. We use ‘Respect’, not ‘Like’
     Based on the results of the study mentioned above, we’ve made our default reaction a ‘Respect’ button. This not only encourages people to respond positively across ideological boundaries, but also suggests that the goal in a Coral community is to be ‘the most respected’, not ‘the most liked’, person in the room – a subtle but important difference.


Myth 3: Communities should be addictive.

If a community platform’s revenue comes from marketing data and advertising, its design requires a dangerous combination of surveillance and addiction.


This can have serious consequences for the wellbeing of your community.

Sean Parker, Facebook’s founding president, told Axios about the thought process behind the social network:


“The thought process … was all about: ‘How do we consume as much of your time and conscious attention as possible?’… And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever. And that’s going to get you to contribute more content, and that’s going to get you … more likes and comments. It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”


This is known as the Hook Model: manipulating people into compulsive behavior that keeps them on a website or in an app. It is achieved by designing for constant, unpredictable stimulation – red-dot notifications, subtle animations, “Someone is typing” messages.


The uncertainty they provide, combined with the pseudo-urgency of their language, constantly triggers our fight-or-flight instincts and provides an addictive hit of dopamine each time. This is what keeps people coming back, even if they don’t understand why. That’s why Instagram only lets you pause notifications for short periods of time.


Dopamine rushes feel good in the moment, but are also thought to have various behavioral effects. High amounts of dopamine can make people more aggressive and reduce quality of sleep, while studies have recorded positive effects when people’s dopamine levels go down, including an increased likelihood of giving to charity, cooperating with others, and showing empathy. (See the citations in this academic paper for more information.)


So, by using psychological hacks to keep people returning and show them more ads, these addictive platforms can make the tone of your community more aggressive and less empathetic, and can damage the emotional state of its participants. Even Facebook is rethinking the impact of its strategy of permanent, no-opt-out red dots.


As Chamath Palihapitiya, Facebook’s former Vice President for Growth, now says, “The short-term, dopamine-driven feedback loops we’ve created are destroying how society works.”


The external cost of addictive design is a society filled with addicts. We all deserve better.


 

What makes Coral different:

  1. Our notifications are opt-in, and arrive via email
     We don’t push urgent red dots onto commenters anywhere in our interface. If you want to be informed when someone replies to you, or when a journalist replies to your comment, you choose to be notified via a platform (email) that you already manage as part of your routine, rather than adding yet another space to anxiously keep track of. We also offer hourly or daily digests instead of instantaneous notifications, letting you dictate the pace of your alerts.

  2. We don’t create a feed of one-click, low-cost reactions for you to constantly refresh
     Someone tapping “Respect” on one of your comments is nice, but it isn’t worth interrupting your day to shout that it’s happened. You can find the number of Respects and Replies on your comments inside your profile, but we’re not going to yell at you or make a sound each time it happens. This is a conversation, not a video game.

  3. We don’t tell you when someone is typing
     When a new comment has been submitted, you can read it. That’s it. No fake build-up of tension. The same goes for unnecessary animations, blinking, or changes as you read. New comments only appear when you choose to load them.

  4. No gamification for its own sake
     We don’t display ranked lists of “people who’ve written the most comments” or offer trophies or badges for quantity of contributions. We want people to submit thoughtful comments because they want to engage in the discussion, not because they’re chasing usage numbers for their own sake, a tiny dopamine spike, and a fake reward.

  5. No infinite scroll on launch
     If you want to view more of the conversation, you can choose to do so. You’re in control. We don’t want to keep you reading longer than you want to be.

  6. We strictly limit our use of the color red
     In most cultures, red means warning, alert, caution. We use it very sparingly in our designs, and never throw it in simply because something new but unimportant has occurred. We value you and your attention far more highly than that.


Myth 4: Moderation can be performed by algorithms.

We’ve heard some wild promises in sales pitches about algorithms. We’ve never heard any academic experts in the field express similar levels of confidence in how good algorithms are at moderating conversation.


The robots definitely have value, but artificial intelligence has a long way to go before it can be trusted to take over comment moderation – in fact, some researchers believe we’ll never reach that point. It turns out that humans are endlessly creative at finding ways to insult and harass each other that algorithms can’t easily predict or detect. It also turns out that artificial intelligence (AI) isn’t very good at detecting sarcasm. We were shocked to learn this.


The biggest issue with any AI is the data it’s trained on. Bias is inherent in all data collection, and it’s not easy to fix – for example, if African American Vernacular English is missing from your training data set, someone using it innocuously in your community might be tagged by a bot as using toxic language, and moderated accordingly. Safiya Umoja Noble’s book Algorithms of Oppression is a terrific read about some of the consequences of these omissions for algorithmic decision making.


AI also learns only from historical attacks, so by nature it will always be one step behind – not to mention that, when abusers know an AI is learning from their behavior, they start modifying their attacks to use innocuous phrases, either to avoid AI detection or in the hope that the AI will learn those phrases and later label them as abuse.


We believe that AI-powered algorithms definitely have a place in making the work of human moderators faster and easier, and in holding back comments for further review, but they can’t be relied upon to do the work for us, as witnessed by the tens of thousands of content moderators still employed by Facebook, Google, et al. 


As BuzzFeed recently put it, the role of the comment moderator is the most important job in the world right now. We believe every community should celebrate that fact by employing skilled human moderators, and by giving them the technology, training, and compensation they need to support their vital, often traumatic work.




What makes Coral different:

  1. We invest heavily in creating world-class moderation tools
     Moderation tools should not be an afterthought. We put as much care into designing the moderation interface as we do into the commenter experience. We conduct ongoing user research and surveys to ensure that moderators on our platform can efficiently identify and deal with the worst members of a community, so that they can also spend time encouraging and thanking the best.

  2. We tell community members if the algorithm thinks they are breaking the rules
     We want to encourage people to be their best selves – after all, given the right circumstances, anyone can act like a troll. If someone writes a comment that our system thinks might be abusive, we first report that back to the commenter and give them a single opportunity to improve their behavior. One newsroom using Coral reported a 40% drop in its moderation queue after activating this feature. If the commenter submits something that is still potentially abusive, we hold it back from publication.

  3. Humans make the final call
     We believe in AI-powered human moderation. If the final submitted comment is flagged by the AI, we give it to a human moderator to make the final call. The algorithm is not empowered to make any decisions for itself, except occasionally in the most clear-cut cases.

  4. Moderators can see and adjust the AI’s sensitivity
     For every comment, we display to moderators the likelihood that the AI would flag it. This makes it easy for each community to decide whether the flagging threshold is set correctly for their needs (see the sketch after this list). You can, of course, opt out of using the AI altogether.

  5. You can connect other AI systems to Coral
     Our default configuration uses the Perspective API, developed by Google Jigsaw with our input, but we also allow communities to connect other third-party AI systems. There are a lot of smart people working on these issues, and we want you to be able to use whichever system best suits your needs (and your language).

  6. We help companies support their moderators
     We provide guides and training on how to support people doing the most important job in the world (copyright BuzzFeed). We also follow industry best practices and participate in conversations about how to better manage and support those who do this work.

  7. We don’t allow image uploads
     Traumatic images and video have a huge impact on the health of the people moderating your community. AI is still very bad at identifying abusive images beyond simple nudity detection, so allowing image uploads adds significant danger to community and moderator health (not to mention huge amounts of potential copyright violation). We believe it’s not worth the risk: until there is a safe, reasonable way to include image uploads, we don’t believe it’s ethical to offer them on our platform.

  8. We are exploring AI for more than detecting abuse
     We have a number of experiments pending around using AI to detect good as well as bad comments for moderators to take note of, helping make their work easier and more pleasant. If you work in this space and are interested in working with us on that, get in touch!
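
As promised above, here is a minimal sketch of how an AI toxicity score and a community-tuned flagging threshold can work together. It is an illustration only, not Coral’s actual implementation: it scores a comment using the Perspective API’s TOXICITY attribute (our default integration) and routes anything at or above a threshold to a human moderation queue. FLAG_THRESHOLD, holdForModeration, and publish are hypothetical names for this example.

```typescript
// Minimal sketch (not Coral's actual code): score a comment with the
// Perspective API and route likely-abusive comments to human moderators.
// FLAG_THRESHOLD, holdForModeration, and publish are hypothetical names.

const PERSPECTIVE_URL =
  "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze";
const FLAG_THRESHOLD = 0.8; // each community tunes this for its own needs

// Ask Perspective for a TOXICITY score between 0 and 1.
async function scoreToxicity(text: string, apiKey: string): Promise<number> {
  const res = await fetch(`${PERSPECTIVE_URL}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      comment: { text },
      languages: ["en"],
      requestedAttributes: { TOXICITY: {} },
    }),
  });
  const data = await res.json();
  return data.attributeScores.TOXICITY.summaryScore.value;
}

// Hypothetical downstream actions: the algorithm never rejects a comment
// on its own; it only queues it, with the score visible to moderators.
async function holdForModeration(text: string, score: number): Promise<void> {
  console.log(`Held for human review (score ${score.toFixed(2)}): ${text}`);
}

async function publish(text: string): Promise<void> {
  console.log(`Published: ${text}`);
}

async function reviewComment(text: string, apiKey: string): Promise<void> {
  const score = await scoreToxicity(text, apiKey);
  if (score >= FLAG_THRESHOLD) {
    await holdForModeration(text, score);
  } else {
    await publish(text);
  }
}
```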

Myth 5: Bigger is better.

This is a lie that only serves companies whose business model is low-quality advertising.

The goal of these companies, and there are many of them, is to grow communities as fast as possible, to keep people coming back by any means necessary (see Myth 3: Communities should be addictive), and then to track everyone across the web, selling more ads and granting access to programmatic marketers. It’s always about numbers, never about quality.


This is pure surveillance technology wearing a comments box mask. That’s why these comments platforms can offer their services for free – because their true customers are not the publishers using their tools.


It doesn’t matter to these companies that the thousands of new people they bring into a space each day might hate everything that community stands for. In fact, that might even be better – after all, when it comes to measuring pure engagement, it turns out that fear and outrage are the most lucrative emotions.


Needless to say, we don’t think this leads to healthy online dialog.


Communities benefit from being specific rather than general: focused on a particular location, topic, belief, experience, hobby, or intention, with clear, transparently enforced rules about appropriate behavior.


If your main goal is scale at all costs, then you have to discard all of that. This quickly leads to some very bad choices, and to a space that is more likely to create harmful discussions. That’s why we don’t believe that free is a price worth paying.




What makes Coral different:

  1. Our communities are decentralized
     Each site’s Coral community exists in an entirely separate database, isolated from the others. When a commenter creates a Coral account on a publisher’s site, they are registered only with that publisher. To join another Coral community, they have to create a whole new account – and this is a feature, not a bug. It reduces potential security risks, prevents undesirable cross-pollination between very different spaces, and allows community members to maintain individual settings and identities for each space they join (see the sketch after this list). Privacy for the win.

  2. We don’t have ads, and we don’t hide ad tech in our code
     Our business model is transparent and straightforward: companies pay us to run and support the software that powers their communities. That’s it. This means that we work for the community owners, and nobody else. It also means we won’t be placing any potentially malicious or resource-heavy scripts onto your website without your permission. Want to see for yourself? Unlike the others, you can just look at our code.

  3. We don’t pull strangers into your comments
     We won’t grab people from one website’s community and push them onto another. Far too often, that leads to the wrong kind of engagement, and can prevent productive conversation. With Coral, it’s not about the numbers, it’s about the quality.

  4. Sometimes, we encourage sites not to have comments
     Not caring about scale for its own sake means we can focus on what’s important. As part of our strategic consulting, we encourage publishers to keep comments closed on any articles that are likely to lead to vicious or offensive interactions, and to offer a different method of engagement instead. We also make it easy for them to close comments manually, or automatically after a certain time. Community owners should only make promises they can keep, on a scale they can manage. Everyone is better served by higher-quality discussions, even if it means reducing the number available at any one time.
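
To illustrate the decentralized design in the first item above, here is a minimal sketch, not Coral’s actual code: each publisher maps to its own isolated store, so an account created on one site simply does not exist on another. All names and site addresses here are hypothetical.

```typescript
// Minimal sketch (not Coral's actual code) of per-site isolation:
// each publisher maps to its own, completely separate store, and an
// account exists only inside the store where it was created.
// All names and site addresses below are hypothetical.

interface Account {
  username: string;
  joinedAt: Date;
}

class TenantStore {
  private accounts = new Map<string, Account>();

  register(username: string): Account {
    const account: Account = { username, joinedAt: new Date() };
    this.accounts.set(username, account);
    return account;
  }

  find(username: string): Account | undefined {
    return this.accounts.get(username);
  }
}

// One isolated store per publisher; there is no shared, global user table.
const tenants = new Map<string, TenantStore>([
  ["news-site-a.example", new TenantStore()],
  ["news-site-b.example", new TenantStore()],
]);

tenants.get("news-site-a.example")!.register("alex");

// The same person is unknown on another site until they create a new
// account there.
console.log(tenants.get("news-site-b.example")!.find("alex")); // undefined
```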

These myths are making online interactions worse, but there’s one huge falsehood that underpins all five: that problems with online communities can be fixed with technology alone.


Racism, misogyny, insults, harassment, bullying, abusive behavior… these are first and foremost societal and cultural problems. Technology can create the conditions in which behaviors are encouraged or rewarded, reduced or discouraged – but the underlying issues are bigger and more intricate than software glitches to be fixed through better code and design.


To run a community of any kind is to enter a morally complex space, containing profound questions around acceptable speech, punishment and appeal, implicit and explicit bias, and various other social complexities, many of which are thousands of years old.


This is why we offer more than just technology. We invest significant resources into research and training materials, and conduct ongoing academic and user studies. We partner with each publisher before and after launch to help them define strategy, write guidelines, outline goals, and allocate resources. Our technology can help, but it’s never the whole story.


At Coral, our sole focus is helping you create the most productive, successful online experience for your community, through intelligent strategy and technology designed around real best practices, chosen for everyone’s benefit and not purely to maximize growth.


The internet is a real place, and we want to make it better. That’s what makes us different.


Want to see Coral in action? Sign up for a webinar here.

Want to get a price for using Coral? Request a quote here.

Follow our work by signing up to our newsletter.

Any questions? Ask us or leave a comment below.