This New Yorker cartoon depicting a dog sitting at a computer and informing his pal that "On the Internet, nobody knows you're a dog" is a fun, but perhaps overly simplified, look at online identities and the safeguard people find in anonymity when they 'go online.' The roles we take online, and their impact on what we say, how we behave, and what we reveal about ourselves, are often far more complicated than whether or not we show or hide our "dogness." The hate comments and vitriolic content that pop up on news organizations' user-generated content boards are a telling indication of the complications of online identity and of like- and unlike-minded people interacting together. After all, we have largely moved past an era when people openly yelled racial diatribes or epithets at one another in public; today, that sort of behavior hangs on the margins of society. So why now? Why have the margins moved to the center online, and are there any ways to explain and combat this phenomenon? In this blog post, I'd like to take a look at the fascinating and terrifying world of online hate comments and how they fit into broader cultural and social themes that we have yet to deal with.
The digital divide: polarization, group identity, and the breeding ground of hate
Cass Sunstein, a prominent thinker on digital identities and online user-generated content (UGC), writes in his article "Echo Chambers" that while the growth of online communities has done many good things for democracy (freedom of association, diversity of opinion, a check on the power of media and government), it also has the potential to create a so-called "daily dose of me," in which individuals fragment into groups that become increasingly homogeneous and largely fail to reflect the diversity of content, people, and opinions we encounter in everyday life. In the "real" world, we don't always have the final say over what opinions we hear or what voices we come into contact with. On the internet, this tendency toward fragmentation leads to a phenomenon Sunstein calls group polarization: the more we associate with like-minded individuals, the more skewed the group's views become, and the less open they are to conflicting or contradictory arguments. The phenomenon is fairly commonsensical: if a person comes into a group of like-minded people with a (relatively) firm opinion on a specific matter (be it abortion, gay marriage, or the Israeli-Palestinian conflict), the group will likely firm up and solidify that view even further. Also of note is that the group's argument pool is hardly a neutral space. Since these are like-minded individuals, the argument pool is notably lacking in diversity of viewpoints and can thus lead to increased polarization within the group. I'm sure we can all imagine instances where group polarization is not such a bad thing. Imagine, for instance, a group of friends who get together with their friend Suzy, who is tentatively thinking about breaking up with her jerk boyfriend. The like-minded group will have a skewed argument pool ("He's an asshole! He doesn't treat you well! Remember when he forgot your birthday?"), but perhaps it's the best possible outcome to get rid of the loser and move on.
Certainly, this kind of polarization happens all the time, whether in public or private spheres.
Online, though, particularly with regard to contentious issues, Sunstein's theory helps to illustrate how groups can become dichotomized (you're either Democrat or Republican, socialist or capitalist, religious or atheist) and thus create margins of opinion that move away from the center and act as a breeding ground for contentious language and passionate, sometimes aggressive, and sometimes racist/sexist/homophobic rhetoric. It's a problem we run into all the time with politics, and it has only been exacerbated by online news and the UGC that follows. As hot-button issues make their way online, groups of like-minded individuals who have potentially already found haven with one another through a myopic news stream (e.g., only looking at Fox News, or MSNBC, or chat forums) will come to an article firm in their views, find others to corroborate their stance, and most likely be hardened in their resistance to differing opinions, sometimes to the point of anger or even rage.
"She's not like us": Shunning the group outlier
Still, the group polarization effect doesn't fully explain some of the extreme language found in comments sections, language that seems to evade and distort even the group's own views and values. What's of note is that, statistically, the people "yelling" in these comment sections via profanities, racial slurs, innuendos, etc. are a small minority of the actual contributors; however, they work to define group boundaries. The comments that almost immediately get flagged (or never even make it to the board) are small in number, and come from an even smaller set of authors. What's notable is not just how these extremists define themselves, but how they define the whole conversation, often creating a bigger issue than anything that was ever at stake in the published article. When someone makes a racist comment on behalf of some polarized group (e.g., a "Republican" claiming that black people are ruining the economy), there will be pushback from within the group, as in-group members seek to distance themselves from the outlier, and also pushback from an opposing group, who will jump on the opportunity to label all of the group's members as racist (or whatever label fits the epithet), thus forming a lovely cycle of dichotomized and unrealistically polar worldviews.
Why, why, why?
So why do these idiots pop up online, people who think it's okay to use words like "nigger" or "cunt," or to hark back to times when women were property and black people were slaves? Certainly, they can form via group polarization and the complicated processes of identity formation, but there are also a number of simpler explanations, three worth noting. First, the principle of homophily: like-minded people congregate with people most like themselves and, through discussion, become more polarized. Second, the internet is a space where billions of other people have opinions and are wittier, smarter, and better at expressing them than you are. Overt, striking racism is a way to get noticed, even if it draws negative attention. It's like being the guy in high school who always had a snotty remark about everything: you didn't like him, but you definitely knew who he was. Third, anonymity offers protection and lessens responsibility significantly.
But actually, is it really anonymous?
That last one actually isn't so plain or simple. Ten years ago, when researchers were just starting to look into chat forums, online communities, and online hate comments, there were few systems for tracing where content was coming from and who was responsible for the random words of hatred, racism, sexism, and so on. That is rapidly changing as a result of UGC in online news comment sections. With roughly 80% of reader participation happening via pre- and post-moderated comment sections, and editors scrambling to keep up with the sheer volume of messages, most major news sites now insist that their commenters sign in via an email address, Facebook, or Twitter. When a black player's game-winning goal in a recent hockey match sent Twitter users into fits of racist rage, an organization exposed the identities of the vitriol-throwing tweeters. Some of them lost their jobs (or, at the very least, had some uncomfortable conversations with friends and family). Anonymity is fast fading from this sphere, but its decline has not yet dulled this kind of hateful discourse.
The real online hate: subtle, coded racism
Not surprisingly, much of the truly potent racism in these spheres does not come from the overt comments written for attention at the margins of public opinion. Rather, as Jessie Daniels examines in "Racist comments at online news sites: a methodological dilemma for discourse analysis," these comments are "coded" as common sense. They can appear as abstract arguments invoking the individual's right to "free speech" (e.g., "this is a free country jimbobx09 and while you may not agree with me, I have a right to my opinion"); as accusations of victimhood in the face of a push toward political correctness (e.g., "I'm tired of feeling like I have to always apologize for being white. You know, my parents worked very hard to get where they are today, and they were immigrants too"); or as seemingly matter-of-fact statements based on implicit racial stereotypes and myths (e.g., "Look, there is significant research to prove that black women go on welfare more than white women do. You can't avoid the fact that they take more handouts and also have more children"). This sort of coded racial language is hidden at the grammatical and semantic level and masquerades as everyday common sense, and even as political sensibility. It often goes undetected as racist on these comment boards, and most moderators keep this kind of content up because they think it "balances out" the argument (something I'll examine more in the next blog post). Next time, we'll look at a few of these common-sense codings and the broader patterns they indicate.
For next time
Funny: for such an easily understandable problem (people are still racist; they say racist shit online because they can), there's a lot left to understand. If this post has offered some background on the theory of UGC and online hate comments, next time we'll examine what these comments look like in practice, along with some practical applications of the theory. I also plan to look at the business of online comments (how the news media make money off of them), what journalists and editors have to say about the racism, issues of liability, how news organizations scan for racist comments, and what the medium has to do with the format of expression.