Surprisingly Good Evidence That Real Name Policies Fail To Improve Comments

YouTube has joined a growing list of social media companies that think forcing users to use their real names will make comment sections less of a trolling wasteland, but there’s surprisingly good evidence from South Korea that real-name policies fail at cleaning up comments. Beginning in 2007, South Korea required large websites to verify commenters’ real names, but scrapped the law after it was found to be ineffective at cleaning up abusive and malicious comments (the policy reduced unwanted comments by an estimated 0.9%). It’s unclear how this hidden gem of evidence escaped the national debate on real identities, but it’s an important lesson for YouTube, Facebook and Google, which have assumed that fear of judgment will change online behavior for the better.

Last week, YouTube began prompting users to sign in through Google+ with their full names. If users decline, they have to give a reason, such as “My channel is for a show or character.” The policy is part of Google’s larger effort to bring authentic identity to its social media ecosystem, siding with companies like Facebook, which have long assumed that transparency induces better behavior.

“I think anonymity on the Internet has to go away,” argued former Facebook marketing director Randi Zuckerberg. “People behave a lot better when they have their real names down. … I think people hide behind anonymity and they feel like they can say whatever they want behind closed doors.” For years, the national discussion has gone back and forth between critics who say that anonymity is a fundamental privacy right and a necessity for political dissidents, and social networks that worry about online bullying and the impact trolls have on their communities.

Enough theorizing: there’s actually good evidence to inform the debate. Over four years, South Korea enacted increasingly stiff real-name commenting laws, first for political websites in 2003, then for all websites with more than 300,000 viewers in 2007, a threshold tightened to 100,000 viewers a year later after online slander was cited in the suicide of a national figure. The policy, however, was ditched shortly after a Korean Communications Commission study found that it decreased malicious comments by only 0.9%. Korean sites were also inundated by hackers, presumably after the valuable real identities.

Further analysis by Carnegie Mellon’s Daegon Cho and Alessandro Acquisti found that the policy actually increased the frequency of expletives in comments for some user demographics. While the policy reduced swearing and “anti-normative” behavior at the aggregate level by as much as 30%, individual users were not deterred. “Light users”, who posted one or two comments, were most affected by the law, but “heavy” ones (11-16+ comments) didn’t seem to mind.

Given that the Commission estimates that only 13% of comments are malicious to begin with, even a 30% reduction trims the malicious share to roughly 9% of all comments, cleaning up the muddied waters of comment systems by a depressingly negligible amount.

The finding isn’t surprising: social science researchers have long known that participants eventually begin to ignore cameras videotaping their behavior. In other words, the presence of some phantom judgmental audience doesn’t seem to make us better versions of ourselves.