In February, Impermium was named one of 10 finalists for the Most Innovative Company at RSA® Conference 2012. At the event, CEO Mark Risher presented on the rapidly growing problem of social spam and discussed how the lack of commercial products offering adequate protection against abuse in social channels has driven many sites to build their own defenses, which are often user-unfriendly and costly. In his brief presentation, Mark explains how social spam evolved from its email predecessor and how Impermium successfully mitigates abuse on the social web. Watch the presentation or read the transcript below.

Mark Risher, Impermium, RSA Conference 2012

Thirty-five years ago, Gary Thuerk sent the first email spam message, an advertisement for Digital Equipment Corporation computers. And as we know, that was just the tip of the iceberg of a problem that grew much more serious and much more prevalent.

As email became saturated with spam, the bad guys needed to find the next place to move, so they followed us to the social web. What we're going to talk about now is the next frontier of web abuse, something we call social spam, which includes reputation hijacking, social spear phishing, malicious user-generated content, account takeover, and more.

Just as with email spam, these are problems that started out as a nuisance but quickly grew to something much more severe and much more dangerous, as was witnessed this weekend with the Whitney Houston malware scam on many popular media sites.

While working on the world's largest email platform, Yahoo! Mail, we saw firsthand that as email spam reached 95% of all traffic, the bad guys started looking for new places to innovate. They moved to the social web in droves, drawn by the rapid influx of new users and by the relatively low levels of skepticism or mistrust toward this new form of communication.

So how do people deal with these problems today? Many companies rely on a virtual sweatshop, where individuals look at every single account, every single transaction, and every single comment. Not only does this fail to scale, but it is slow, reactive, and misses many types of problems. Other companies deal with this through complex, Rube Goldbergian systems of rules and configurations that try to identify these patterns. These are also prone to false positives and can be very expensive to maintain. In fact, the smallest media company we're dealing with has more than five people devoted just to these sorts of problems.

So how does Impermium work? Our customers from across the web, companies large and small, send their traffic, or social transactions, to us in real time. Our machine-learning behavioral models then crunch through not just the content but also the context and the user behind each transaction, looking for spam, hate speech, malware, cross-site scripting, and the like. Through this technology, we're able to provide a network effect that is far more effective than any in-house solution could possibly be.
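To make that integration model concrete, here is a minimal sketch of what a real-time check like this might look like from a customer site's side. The endpoint URL, field names, and response format below are illustrative assumptions only, not Impermium's actual API.

```python
import json
import urllib.request

# Hypothetical endpoint and credentials, for illustration only.
API_URL = "https://api.example.com/v1/classify"
API_KEY = "YOUR_API_KEY"

def classify_transaction(user_id, content, context):
    """Submit one social transaction (comment, post, signup) for scoring."""
    payload = {
        "user_id": user_id,   # who performed the action
        "content": content,   # the user-generated text itself
        "context": context,   # e.g. IP address, account age, posting rate
    }
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(request) as response:
        # Assumed response shape: {"verdict": "spam", "score": 0.97}
        return json.loads(response.read())

if __name__ == "__main__":
    result = classify_transaction(
        user_id="user-123",
        content="Cheap meds, click here!",
        context={"ip": "203.0.113.7", "account_age_days": 0, "posts_last_hour": 40},
    )
    print(result)
```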

So leaving here today, what you should each do is think about the social features that your company offers and that you rely upon on a daily basis. Think about how many of those are vulnerable to these types of social attacks. And then consider how difficult it would be to build those defenses on your own, without a partner such as Impermium. Because it's starting to look a lot like the problems we saw back in 1978, and we don't want to miss the writing on the wall. Thank you.

Q&A

Question #1:  With your solution, there are whitelists, blacklists, and user participation. How much of your spam and violations are identified through people alerting you, and how much through machine learning and automation?

Mark:  A large part of our work, and where we've put our patent and development effort, has gone into behavioral models and anomaly models that are mostly machine learning. We definitely use humans, because it's important to have a signal when somebody is telling you, "Hey, this is bad," and you want to train off of that. But we get leverage of probably 10,000:1 to 10,000,000:1.

Question #2:  You mentioned that with some of the other products out there, there are false positives. What are you doing differently in terms of false positives, and how are you measuring that?

Mark:  What we do for false positives is, by looking very broadly across a wide variety of sites, maintain not only examples of bad but also examples of good. Our trusted-user database, which spans everything from a sports site to a blog to a fantasy-dating site, allows us to really understand who the good users are and to differentiate whether somebody is cracking a joke about Viagra or truly trying to distribute it.

Question #3:  Does it have any impact on spear phishing, where a targeted individual has been socially compromised?

Mark:  Absolutely. With spear phishing, or social spear phishing, we look at the account-compromise vector, where there is a deviation from that user's normal patterns in this particular case. And unlike a homegrown, siloed solution, because we're looking broadly, somebody would need to check your sports blog and your resume page and your LinkedIn account before they reach the moment of compromise. That gives us much more robustness.

Question #4:  Do people opt in to this, or do you insert yourself into the workflow of something like Google or Facebook? Where do you fit?

Mark:  Sure. We operate on a subscription-based SaaS model, and we'll tell you more at our booth right over there.

Adam Nisbet

Adam Nisbet, Impermium's media analyst, researches emerging trends at the intersection of social media and security. He is a social media maven, an artist, a trekking enthusiast, and an esteemed jazz aficionado. Living in the Bay Area, he finds it hard not to love the outdoors and traveling along the Pacific coast.
