How Trolls Wreak Havoc For Users and Site Owners (Part 2)
Written By: Adam Nisbet
Jan 15, 2013 • 10:02 am
In “Trolling: An Age-Old Problem That Isn’t Getting Better” I gave an overview of trolling, including a brief history and a look at the common methods and motivations of Internet trolls. Now that we’re up to speed and you’ve read a few examples, let’s talk about the dangers and legal issues that trolls present to users and site owners.
Legal Issues for Site Owners
Recently, we have seen abusive user actions cause major pains for online communities and brands. Whether it is a blow to a site’s reputation, a legal battle over vitriolic user content, or violent crimes that originate in and are carried out through online channels, the negative impact is hard to ignore. But for now, most sites are not required to screen for unlawful or abusive content in advance.
In the U.S., prosecutors have limited statutes under which to press charges against offenders, and often default to sexual harassment charges or federal indictment for violent threats, neither of which accurately targets the nature of these offenses. The Yale Law Journal states that in the U.S., under § 230 of Title 47 of the U.S. Code, websites are not liable as publishers for the content on their sites so long as they are not involved in the creation of the objectionable content. Recently, however, legal scholars have been describing the responsibilities of site owners as those of proprietors, “which must exercise the power of control or expulsion of third parties from their premises who may be present, to prevent injury to the visitor.”
However, the laws are increasingly being challenged as prosecutors seek more insight from site owners into the origination and execution of these types of actions. Recently, judges ordered Yelp to delete a review from its site due to a pending legal dispute over defamation of character; a higher court would later order that the content be reinstated. Regardless of the outcome, site owners must deal with a broad range of legal fees and liabilities that arise from malicious user actions. The prospect of being held responsible for failing to provide reasonable care to visitors should be at the forefront of every site owner’s business concerns.
Reddit’s ViolentAcrez Troll
Motive: Sexual harassment, reaction/shock, attention, forum-fame
Danger: Promoting a culture of hate and sexual exploitation, promoting the sharing of crude content, compromising the privacy rights of minors, and more.
Mode: Working within a discreet community forum, deceit, using a handle or username — remaining somewhat anonymous, collusion with other participants sharing content.
On October 12, 2012, Adrian Chen, a writer for Gawker, released an exposé outing the real-life identity of a controversial Reddit moderator, ViolentAcrez. ViolentAcrez, whose real name is Michael Brutsch, created numerous subreddits aimed at curating some of the worst, most graphic images on the net. Brutsch exerted considerable effort to troll Reddit, posting offensive and despicable images, including photos of dead bodies, depictions of rape, and violent domestic abuse, simply to anger others and induce a reaction. One of his most profane subreddits, called “Jailbait,” contained revealing photographs taken of minors without their knowledge or permission and presented in a sexual manner. CNN legal contributor Sunny Hostin called it “borderline kiddie porn.” Despite the nature of his posts, ViolentAcrez’s submissions were so popular that Reddit presented him with a trophy for his contributions. Once Chen released his report, Brutsch was immediately fired from his job, though he has not been charged with any crimes.
Reddit claims that it is not responsible for the content that its users post on the website. However, Jeffrey Toobin, CNN Senior Legal Analyst, noted that while the website may not be breaking any criminal laws, its claim that it cannot interfere with its posters because they are protected by the First Amendment is “not true.” Toobin explained on CNN’s AC360 last year:
“If I say something terrible to you on the phone, you can sue me — you can’t sue the phone company… A website is different. A website automatically exercises some control. You can see they have rules there. So the idea that they have no control over their posters, that’s simply wrong.” – Jeffrey Toobin
Amanda Todd Troll
Motive: Sexual exploitation
Danger: Harm to minors, suicide, social shaming, rapid-sharing, slow legal response to privacy concerns, privacy of minors in open forums
Mode: Anonymity, fake profiles, deceit, shaming, and humiliation
The individuals and events surrounding the Amanda Todd story present one of the saddest trolling cases in history. Her personal testimony of the brutality and insensitivity of online trolling and cyberbullying has given us insight into this dark world, and has served as a wakeup call about the possible dangers of online communities.
The series of events is difficult to retell, so we will keep this concise. For several years, a group of men ranging from 16 to 30 years old communicated with Amanda through online chat forums. These men are called “cappers”: individuals who lurk in chat rooms for the sole purpose of targeting an underage girl and pressuring her to strip in front of a webcam. The cappers record the conversations and then trade the illegal content among themselves. In the case of Amanda Todd, the cappers remained in contact with her for 2-3 years, with the sole purpose of blackmailing and harassing her. Two individuals were said to be the key players in this case; they began their blackmail assault by distributing the illegally obtained videos, which contained nude images of a minor, among her classmates. Amanda faced constant harassment from her schoolmates and the emergence of distasteful online memes. Feeling she had nowhere to turn, she committed suicide.
The Nike LeBron James Troll
Motive: Racism, spreading hate, and igniting violence
Danger: Violent threats to school children
Mode: Commenting on articles on ESPN’s website
In October 2012, Eric Yee was arraigned on charges of possessing an illegal firearm (an H&K M-94 assault weapon) and held on $1 million bail. Prosecutors were also considering charges of making violent threats with intent to harm others with a deadly weapon. Yee was a troll on ESPN, and began posting comments on a popular article about the release of a new Nike LeBron James shoe. Initially, the discussion was harmless, but it quickly erupted into a stream of racist and insulting comments. Several trolls were pushing the racist discussion and supporting each other’s motives. While Yee wasn’t alone, he took the conversation to a new level when he unleashed a tirade of hate speech and violent threats, remarking on the recent Aurora shootings. The final straw occurred when Yee threatened to shoot and kill nearby school children that he could see from his window. ESPN notified the FBI, who immediately descended upon Yee’s residence, discovering the firearm.
These case studies demonstrate, in varying severity, how malicious content, bullying, and violent threats can quickly grow out of hand in online forums.
Luckily, ESPN was proactive and reached out to authorities immediately after reading the comments. However, most online communities, especially new or smaller sites, have no monitoring system or compliance protocols in place to deliver timely information to law enforcement agencies. Given how fast communications can spread across the Internet, any hesitation could stall the prevention of serious crimes and greatly increase the likelihood of a lawsuit.
Even though laws in the U.S. haven’t kept pace with these issues, in the U.K., laws are quickly adapting to address cyberbullying and Internet libel. As a result, convictions have soared 150% in the past 4 years. There, the law states that if posts are not pre-moderated, the operator of the hosting website is not legally liable for the comments until it has been notified of content that violates a person’s privacy rights. Following a request for removal from a user, the website operator must act “expeditiously” to remove illegal content in order to maintain the defense. These laws take a big step toward protecting online users, which means companies need to be better prepared.
Adam Nisbet, Impermium’s media analyst, is researching emerging trends at the intersection of social media and security. He is a social media maven, an artist, a trekking enthusiast, and an esteemed jazz aficionado. Living in the Bay Area, it’s hard not to love the outdoors and traveling along the Pacific coast.