It’s often said, “if it ain’t broke, don’t fix it,” and who has time to think about security when there are a million other items competing for attention? So it comes as no surprise that a large number of our referrals come from companies who failed to prepare and have just suffered a significant social spam attack. To say the least, it’s an interesting time to start a relationship with a client! In the face of these attacks, tech companies often respond similarly, enlisting their developers to stop the attack as quickly and painlessly as possible, and in doing so, they wind up making the same mistakes. Through these incidents, Impermium has gleaned some interesting insights into these common mistakes, and how companies can best protect themselves in the event of an attack.

Day-of-Attack Mistakes:

When incidents are discovered, it’s usually an emergency situation: Your site is exposed, your users are complaining, and you don’t know how much more the servers can handle. Even under these circumstances, it’s still important to take a step back and think through your response to avoid these errors:

  • Forgetting that you’re playing chess: Defending against abuse is truly an arms race, a cat-and-mouse game in which every move you make will be countered by your adversary. Scammers are smart, so when building custom defenses, it’s important to consider what that counter-move will be, and whether it has the potential to make things harder for you in the future. Case in point: Shutting down accounts based purely on a “tell,” like a bad IP address, exposes your defense mechanism loud and clear, and the scammer’s next move is likely to come from a hundred distributed IP addresses, making detection that much harder for you. Instead, use that “tell” to flag your suspicious accounts, then disable them somewhere down the road.
  • Not making it hard enough: Attackers have a lot of patience and bandwidth, and launching an attack is cheap. Complicated schemes designed to waste their time and encourage them to attack somebody else therefore need to factor in how little time and money an attack actually costs them. Automated scammers routinely post to thousands of sites every hour, often without bothering to confirm it worked; make sure you’re not building features that are trivial to circumvent.
  • Breaking out the textbooks: Often, engineering-led companies grab the old Bayesian probability textbook (i.e., the Wikipedia page) and try to hack together their own elaborate system. These types of adversarial security systems are very costly to build properly the first time due to hidden complexities in logic, feature engineering, and mitigation, and the ongoing maintenance is a continuous drain on resources. We have seen many cases where the newly dubbed “abuse team” created well-intentioned but easily circumvented systems, leading to misplaced overconfidence in their homegrown solution. Building an abuse system based on only a narrow view into past attacks can easily look promising, yet be completely ineffective against new ones. Worse, many rush-to-production systems fail to account for the ever-evolving nature of both good and bad users, and end up affecting many legitimate users down the road (i.e., causing mysterious false positives), often in difficult-to-diagnose ways. In this case, as the old saying goes, you get what you pay for.
  • Booby trapping: Following an attack, companies enlist the help of programmers and engineers to figure out what the attack is and where it’s coming from; unfortunately, their next step is often to scatter “logic bombs,” pieces of code designed to trip up the attackers based on an assumption of their techniques. In the heat of the moment, this may seem like a good idea, but distributed abuse logic, tricky features, and ill-planned moderation scripts can become a major cause of site instability down the road when innocent users are caught up in the trap. For example, traps that ban users without JavaScript have caused disastrous effects on mobile device users, visually-impaired users employing screen reading software, and others.
  • Shutting it down: While it sounds drastic in the abstract, in the face of an attack, often management’s first reaction is to disable a given feature – e.g., by shutting off all social elements except for paying users. Companies should resist that initial temptation to treat all users as guilty-until-proven-innocent. In very extreme cases, temporarily suspending engagement features may be warranted, but turning your back on your legitimate visitors can actually compound the problem. Preventing valuable users from posting, or making them jump through extra hoops (like CAPTCHA), can have a disastrous impact on engagement. Yes, it requires a high level of patience and trust, but breaking your site for the good users is the worst thing you can do.
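The “flag, don’t ban” idea from the first bullet can be sketched in a few lines. This is a minimal, hypothetical illustration, not Impermium’s actual system: the in-memory account store, the IP-based “tell,” and the 48-hour delay are all invented for the example.

```python
import time

# Hypothetical "tell": IP addresses observed in the current attack.
SUSPICIOUS_IPS = {"203.0.113.7", "203.0.113.8"}

# Quietly flagged accounts: account_id -> timestamp the tell was observed.
flagged = {}

def on_signup(account_id, ip):
    """Silently flag matching accounts instead of banning on the spot.
    An immediate ban broadcasts the tell; a quiet flag reveals nothing."""
    if ip in SUSPICIOUS_IPS:
        flagged[account_id] = time.time()

def sweep(now=None, delay=48 * 3600):
    """Run later (e.g., as a daily job): return accounts flagged more than
    `delay` seconds ago, decoupling detection from enforcement."""
    now = time.time() if now is None else now
    return [acct for acct, seen in flagged.items() if now - seen >= delay]

on_signup("spammer-1", "203.0.113.7")
on_signup("regular-user", "198.51.100.5")
# Two days later, the sweep picks up only the flagged account:
print(sweep(now=time.time() + 49 * 3600))  # prints ['spammer-1']
```

The delay is the point: by the time the accounts go dark, the attacker can no longer correlate the ban with the signal that triggered it.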

Avoiding these mistakes will help you recover from a spam attack more quickly. But then what? When the chaos has calmed, you can look for ways to protect your site in the future. And that’s exactly what I’ll cover in Part 2.

Mark Risher

Mark Risher is CEO and Co-Founder of Impermium. As the former “Spam Czar” for Yahoo!, he has regularly presented worldwide to government, industry, and consumer groups about spam, abuse, and cyber security issues.
