Gianluca Stringhini – Studying cyber criminal operations and developing systems to defend against them

Gianluca Stringhini’s research focuses on studying cyber criminal operations and developing systems to defend against them.

Such operations tend to follow a common pattern. First, the criminal operator lures a user to a Web site and tries to infect their computer with malware. Once infected, the computer is joined to a botnet. From there, it is instructed to perform malicious activities on the criminal’s behalf. Stringhini, whose UCL appointment is shared between the Department of Computer Science and the Department of Security and Crime Science, has studied all three of these stages.

https://www.youtube.com/watch?v=TY3wsqGOZ28

Stringhini, who is from Genoa, developed his interest in computer security at college: “I was doing the things that all college students are doing: hacking and breaking into systems. I was always interested in understanding how computers work and how one could break them. I started playing in hacking competitions.”

At the beginning, these competitions were just for fun, but his efforts became more serious when he arrived in 2008 at UC Santa Barbara, which featured one of the world’s best hacking teams, a perennial top finisher in Defcon’s Capture the Flag competition. It was at Santa Barbara that his interest in cyber crime developed, particularly in botnets and the complexity and skill of the operations that created them. He picked the US after Christopher Kruegel, whom he knew by email, invited him to Santa Barbara for an internship. He liked it, so he stayed and did a PhD studying the way criminals use online services such as social networks.

“Basically, the idea is that if you have an account that’s used by a cyber criminal it will be used differently than one used by a real person because they will have a different goal,” he says. “And so you can develop systems that learn about these differences and detect accounts that are misused.” Even if the attacker tries to make their behaviour closely resemble the user’s own, ultimately spreading malicious content isn’t something normal users intend to do, and the difference is detectable.
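
To make that intuition concrete, here is a minimal Python sketch of the behavioural-profile idea: build a profile from an account’s message history and count how many features of a new message fall outside it. The features, field names, and thresholds are hypothetical illustrations, not the systems described in the papers below.

```python
# Illustrative sketch (not the published detection systems): flag a message
# that deviates from an account's historical behavioural profile.
from collections import Counter

def build_profile(history):
    """history: list of dicts with 'hour', 'client', 'has_url' keys (hypothetical features)."""
    return {
        "hours": Counter(msg["hour"] for msg in history),
        "clients": Counter(msg["client"] for msg in history),
        "url_rate": sum(msg["has_url"] for msg in history) / len(history),
    }

def anomaly_score(profile, msg):
    """Count how many features of a new message fall outside the account's usual behaviour."""
    score = 0
    total = sum(profile["hours"].values())
    # Posting at an hour the account rarely uses
    if profile["hours"][msg["hour"]] / total < 0.05:
        score += 1
    # Posting from a client application never seen before
    if msg["client"] not in profile["clients"]:
        score += 1
    # Including a URL when the account almost never does
    if msg["has_url"] and profile["url_rate"] < 0.1:
        score += 1
    return score

history = [{"hour": 9, "client": "web", "has_url": False} for _ in range(40)] + \
          [{"hour": 20, "client": "android", "has_url": False} for _ in range(10)]
profile = build_profile(history)
suspicious = {"hour": 3, "client": "bulk-poster", "has_url": True}
print(anomaly_score(profile, suspicious))  # 3 -> behaviour inconsistent with the account's history
```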

This idea and Stringhini’s resulting PhD research led to his most significant papers to date.

In Shady Paths: Leveraging Surfing Crowds to Detect Malicious Web Pages (PDF), written with Christopher Kruegel and Giovanni Vigna and presented at CCS 2013, Stringhini analysed the paths by which a large and diverse group of Web surfers reach the compromised pages to which attackers seek to lead them.

“The idea there,” he says, “is that if you look at real users browsing the Web and the network of redirections that they are following and you aggregate them together to a certain target Web page, by looking at what the graph of these redirections looks like you can tell if the Web page is malicious or not.” The idea sounds a little like what Google’s search algorithm does in deciding which page is most relevant to a particular query. But, says Stringhini, rather than looking at incoming links and assessing their quality, he looks at properties such as the geographical distribution of the servers involved, on the basis that the population of users reaching a malicious page has characteristics that legitimate traffic lacks.

“If a certain Web page is only infecting users running Windows XP, let’s say, the attackers typically only have Windows XP going there. If you’re not running Windows XP they send you somewhere else.” The reason is simple: such servers avoid exposing their malicious code to everyone, which helps keep it undetected. For the researcher analysing the traffic, however, that selectiveness is a clue that the site may be malicious. Similarly, a redirection path that hops from the US to China to Russia to Africa is suspicious: “It’s not something you would see in legitimate HTTP traffic.”
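
As a rough illustration of the kinds of signals just described, the sketch below computes a few features from an observed redirection chain and the visitors reaching its landing page. The field names, thresholds, and example data are invented for this sketch and are not the features or parameters of the actual system.

```python
# Illustrative sketch of redirection-chain features (hypothetical thresholds
# and field names; not the Shady Paths implementation).

def chain_features(chain, visitor_os):
    """chain: list of (hostname, country) hops ending at the landing page.
    visitor_os: list of operating systems observed reaching the landing page."""
    countries = [country for _, country in chain]
    return {
        "chain_length": len(chain),
        "distinct_countries": len(set(countries)),
        # A landing page that only ever receives one OS suggests attacker cloaking
        "os_diversity": len(set(visitor_os)),
    }

def looks_suspicious(features):
    return (features["distinct_countries"] >= 3 and features["chain_length"] >= 4) \
        or features["os_diversity"] == 1

chain = [("ad.example-us.com", "US"), ("redir.example-cn.com", "CN"),
         ("track.example-ru.com", "RU"), ("landing.example-ng.com", "NG")]
visitor_os = ["Windows XP"] * 25
print(looks_suspicious(chain_features(chain, visitor_os)))  # True
```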

Of the resulting paper, Stringhini says: “The cool thing is that we don’t look at the page at all. Previous attempts look at the malicious page and try to detect if it’s trying to infect you.” The difficulty with such systems is that attackers can evade detection by obfuscating their code. Worse, such systems tend to detect specific attacks – drive-by downloads, say – and not others, such as social engineering; Stringhini’s system detects any type of attack. Systems that depend on analysing pages’ content also scale poorly, because they must examine every page. A final advantage of Stringhini’s approach is that content can be changed far more easily than behavioural characteristics.

In Detecting Spammers on Social Networks (PDF), presented at ACSAC 2010 and written with Christopher Kruegel and Giovanni Vigna, Stringhini studied the methods criminals use to create fake accounts for the purpose of spreading malicious content on social networks. Following up on that work, in COMPA: Detecting Compromised Accounts on Social Networks (PDF), written with Manuel Egele, Christopher Kruegel, and Giovanni Vigna and presented at NDSS 2013, he developed software to help detect compromised accounts. The researchers were able to collaborate with Twitter, and tested COMPA by setting it to detect worm attacks in historical data. He presented both papers at the 2014 ACE Cyber Security Research meeting.

Currently, Stringhini is investigating the underground economy behind these operations, seeking an improved understanding of the ecosystem: “Before going and developing technical systems that block these kinds of things you need to understand what these cyber criminals are doing, how they are doing it, and so on.” His paper The Harvester, the Botmaster, and the Spammer: On the Relations Between the Different Actors in the Spam Landscape (PDF), presented at AsiaCCS 2014, analysed the different components that make up a spam operation. “The idea is that there are different actors that are very specialised in what they’re doing,” he says. Each piece of the progression described above is handled by a different party: collecting email addresses, infecting users, renting out a botnet. “The idea was to find out how these people interact.” Sample questions included: “Do they sell lists to multiple people? Do they rent botnets to one person or multiple people?”

Researching these questions began with disseminating the researchers’ own email addresses on the Web and logging who harvested them. Later, when spam began arriving at these addresses, the researchers used a system they had invented to fingerprint the email engines inside bots and see which specific botnet was making each connection. “So we could start building a pipeline: Guy A harvested this address, then I was contacted by Botnet B, which was spreading this email campaign that Spammer C is responsible for.”
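
The linking step can be sketched roughly as follows: because each seeded address is unique, any spam it receives ties the harvester that collected it to the botnet whose email engine delivered it and to the campaign being sent. The fingerprinting function, field names, and data below are placeholders for illustration, not the researchers’ system.

```python
# Illustrative sketch of the attribution pipeline: harvester -> botnet -> campaign.
from collections import defaultdict

# Which harvester (identified, say, by its crawler) collected which seeded address
harvested_by = {
    "seed-001@example.org": "Harvester-A",
    "seed-002@example.org": "Harvester-B",
}

def botnet_fingerprint(smtp_dialogue):
    """Placeholder: the real system matched quirks of the bots' email engines
    (greeting style, header ordering, error handling) against known families."""
    return "Botnet-B" if "helo-quirk" in smtp_dialogue else "unknown"

pipeline = defaultdict(set)
deliveries = [
    {"rcpt": "seed-001@example.org", "smtp": "helo-quirk ...", "campaign": "pharma-campaign-C"},
    {"rcpt": "seed-001@example.org", "smtp": "helo-quirk ...", "campaign": "pharma-campaign-C"},
]
for d in deliveries:
    harvester = harvested_by[d["rcpt"]]
    botnet = botnet_fingerprint(d["smtp"])
    pipeline[(harvester, botnet)].add(d["campaign"])

for (harvester, botnet), campaigns in pipeline.items():
    print(f"{harvester} -> {botnet} -> {sorted(campaigns)}")
# Harvester-A -> Botnet-B -> ['pharma-campaign-C']
```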

Given these chains of actors, “What we found out was that basically spammers seem to be very consistent in what they use, so there is essentially a trust relationship between a harvester and their botmaster.” Those relationships may be durable and long-lived; a spammer typically goes on using the same botnet until it is shut down. Conversely, both harvesters and botmasters will sell their products to a wide range of customers. What the group did not find was a single spammer using two botnets simultaneously, presumably because doing so costs more and because it is difficult to manage multiple instances of the dashboards these services supply to show what the bots are doing.

Stringhini participated in the US authorities’ attempted 2011 takedown of the Cutwail botnet, an effort that gave him access to the databases of more than a third of that botnet’s command and control servers. The recent paper Tricks of the Trade: What Makes Spam Campaigns Successful? (PDF), written with Jane Iedemska, Richard Kemmerer, Christopher Kruegel, and Giovanni Vigna and presented at IWCC 2014, shows the results: it discusses the efforts spammers make to smooth their botnets’ operation and maximise their profits.

Among the findings: using too many bots exhausts the command and control server’s bandwidth, and bot prices vary around the world, with bots in the UK and US costing more than those in India, for example. The most successful spammers turned out to buy the cheapest bots, although in other areas, such as stealing financial information, the more expensive bots do net more money. The bots also report back non-functional email addresses so the botmaster can clean the list, which led Stringhini to wonder whether it might be possible to get addresses removed by telling the bots they don’t exist. “What we showed is that if you tell the bot you do not exist you stop getting spam from that campaign. But if you do it excessively, they exhaust the bandwidth of the server and won’t get enough orders back.” The financial rewards vary: the most effective spammers and harvesters make money, while the less capable ones struggle to break even.
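
Telling a bot that an address does not exist presumably amounts to answering its delivery attempt with an SMTP “no such user” error. The toy listener below, which rejects every recipient with a 550 response, is purely illustrative of that mechanism; it is not the researchers’ setup and is not something to deploy as-is.

```python
# Illustrative toy only: an SMTP listener that answers every RCPT TO with
# "550 user unknown", the kind of response that (per the findings above) leads
# a bot to report the address as non-functional so the botmaster prunes it.
import socketserver

class RejectAllHandler(socketserver.StreamRequestHandler):
    def reply(self, line):
        self.wfile.write((line + "\r\n").encode())

    def handle(self):
        self.reply("220 mail.example.org ESMTP")
        while True:
            line = self.rfile.readline().decode(errors="replace").strip()
            if not line:
                break
            verb = line.split(" ", 1)[0].upper()
            if verb in ("HELO", "EHLO"):
                self.reply("250 mail.example.org")
            elif verb == "MAIL":
                self.reply("250 OK")
            elif verb == "RCPT":
                # Claim the recipient does not exist
                self.reply("550 5.1.1 No such user here")
            elif verb == "QUIT":
                self.reply("221 Bye")
                break
            else:
                self.reply("502 Command not implemented")

if __name__ == "__main__":
    # Listen on a non-privileged port for demonstration purposes
    with socketserver.TCPServer(("0.0.0.0", 2525), RejectAllHandler) as srv:
        srv.serve_forever()
```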

Overall, says Stringhini, “The best criminals are very smart, and adapt very fast to countermeasures. That’s why in my research I try to leverage the elements they cannot really change, because if they change the operation will become less profitable.” If evading detection forces criminals to make changes that slow them down and cost them some profit, then even if they can continue operating, “That’s still sort of a win for us.”


This post is part of the Inside Infosec series, summarising the research and teaching done by UCL Information Security group members. This research portrait was written by Wendy M. Grossman.
