tl;dr: Our first product is a reputation analytics tool that uses publishers’ existing comment log data to produce indicative reputation scores for their commenters.
That information can then be used by publishers to i) identify their best commenters, by the publisher’s own definition of “good”, so those contributions can be highlighted and become the basis for future journalism, ii) give moderators context about each commenter, and iii) help publishers understand the nature of commenter behavior on their site.
Now read below to understand why we’re starting here.
Comment sections on news sites are getting a lot of bad press these days. And why wouldn’t they? They can be uncivil, profane, impenetrably longwinded, or even personally intimidating.
But as a person who interacts with Washington Post comments and commenters regularly, I can attest that readers contribute thoughtful, heartfelt, and insightful comments all the time. Unfortunately, some of them get lost in a larger, often unwieldy mix.
The idea behind comment sections — creating space for people to discuss current affairs with each other — isn’t inherently flawed. The way we publishers ask for and respond to user contributions, however, could use a refresh.
And that’s where The Coral Project hopes to move the needle.
With our first product, announced at the Online News Association conference in Los Angeles, The Coral Project aims to give publishers a tool to help them better manage their roster of commenters and contributors, making it easier to highlight the best reader contributions. It aims directly for one of the project’s main goals: to better connect news publishers and the readers who choose to contribute to them.
The product will gather a mix of existing data points about readers’ comments and other user contributions, such as likes, shares, and flags, and pair them with new data, such as ratings of commenters from other users, editors, reporters, and community managers. The result will be a system that manages user reputations, allowing publishers to stratify their users by level of trust.
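To make the idea concrete, here is a minimal sketch of how such a system might blend existing engagement signals with staff ratings into a single score and trust tier. The field names, weights, and tier thresholds are all hypothetical assumptions for illustration; the actual scoring model is still being designed.

```python
# Illustrative sketch only: weights, field names, and tiers are hypothetical.
from dataclasses import dataclass


@dataclass
class CommenterActivity:
    likes: int            # likes received on this user's comments
    flags: int            # abuse/spam flags against this user's comments
    staff_rating: float   # 0.0-1.0 average rating from editors/moderators


def reputation_score(activity: CommenterActivity) -> float:
    """Blend existing engagement signals with staff ratings into one score."""
    # Flags weigh more heavily than likes.
    engagement = activity.likes - 2 * activity.flags
    # Squash engagement into 0..1 so no single signal dominates.
    engagement_norm = max(0.0, min(1.0, engagement / 100.0))
    # Weighted mix: staff judgment counts as much as raw engagement.
    return 0.5 * engagement_norm + 0.5 * activity.staff_rating


def trust_tier(score: float) -> str:
    """Stratify users into coarse trust levels from their score."""
    if score >= 0.75:
        return "trusted"
    if score >= 0.4:
        return "standard"
    return "watchlist"
```

A publisher could then, for example, surface “trusted” commenters’ contributions by default while routing “watchlist” users through pre-moderation, with each organization free to tune the weights to its own definition of a good contribution.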
Our aim is to test the product with partner organizations in early 2016. We’ll work with those organizations to not only test our software, but also iterate on new approaches to asking for and responding to user contributions.
In the end, this product will be part of a suite of tools that our group will create between now and the end of our grant period in 2017. Coming up with this product was a true collaborative effort: we’ve had incredible support from the Knight Foundation and our three organizations, and we’ve incorporated ideas from our conversations with more than 300 people in over 175 organizations from more than 25 countries, all of whom were incredibly generous with their time.
We’re still working on many of the new product’s details, so we’re eager to hear suggestions from publishers, contributors, readers, and others about the best approach to reputation management. Anyone interested in contributing ideas can contact us here.