The University of Toronto's Citizen Lab (previously) is one of the world's leading research centers for cybersecurity analysis, and they are the first port of call for many civil society groups when they are targeted by governments and cyber-militias.
Citizen Lab's John Scott-Railton has published a fascinating analysis (originally published in IEEE Security & Privacy, but since updated) of the most common tactics deployed against civil society, with recommendations for how these groups can defend themselves -- as well as for how the platforms that civil society groups rely on should retool to protect their users.
One of Scott-Railton's most interesting points is that civil society groups are canaries in the coal mine: the attacks they face from well-resourced attackers generally become more automated and thus more available to petty crooks and other untargeted attackers who go after broad swaths of the population. That means that platforms can use the attacks these groups face as a preview of their coming security challenges -- and by defending civil society, they armor their whole user-base against those coming threats.
The attackers in Scott-Railton's data are more socially sophisticated than they are technologically sophisticated. Almost all of the time, these attackers are using old, known hacks as weapons (not sexy, unpublished, "0-day" exploits). They rely on the idea that their targets -- overtaxed civil society activists -- are likely to have unpatched systems that can be targeted by these older exploits. Even if many of the members of a target group have updated, it's often sufficient to attack a single laggard (remember, to get inside a group's messages, you just need to compromise one member of that group!).
Scott-Railton is very strong on two-factor authentication, though he notes that attackers have used extremely sophisticated social engineering to beat it. Nevertheless, his top recommendation is for everyone to turn on 2FA, and for platforms to make things like 2FA mandatory, so that overtaxed users never opt out.
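The paper doesn't include code, but the six-digit codes behind most app-based 2FA follow simple public standards (HOTP, RFC 4226, and its time-based variant TOTP, RFC 6238). A minimal sketch of how those codes are derived, using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-based one-time password (RFC 4226): HMAC-SHA1 over the
    # big-endian counter, then "dynamic truncation" down to a short
    # decimal code.
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # Time-based variant (RFC 6238): the counter is simply the number
    # of 30-second intervals since the Unix epoch, so server and phone
    # agree as long as their clocks roughly do.
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval, digits)

# RFC 4226 test vectors for the shared secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # → 755224
print(hotp(b"12345678901234567890", 1))  # → 287082
```

The weakness Scott-Railton flags isn't in this math: a phishing page that asks the victim for their current code and relays it within the 30-second window defeats TOTP entirely, which is why he also discusses hardware-token schemes that bind the login to the site's origin.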
It’s human nature to want to help, to be curious, and to respond to a sense of fear and urgency. This natural urge presents an endless opportunity for attacks that rely on deception and trust exploitation. The PGEAs targeting civil society groups share the emphasis on targeting human behavior as the primary entry point for their campaigns. This “just enough” principle holds for many more sophisticated attack groups, even with harder targets, as the head of the US National Security Agency’s Tailored Access Operations recently pointed out at a security conference (www.youtube.com/watch?v=bDJb8WOJYdA). Social engineering has always been a game of probability, and most organizations contain members who are more likely to be taken in than others.
A growing cottage industry is “security training” that focuses on increasing civil society’s awareness of surveillance and malware and on shifting security behavior. This development is promising but deserves a stronger evidence-based footing and numbers-driven repeat testing, such as the use of regular phishing simulations and penetration testing.
Even when large user populations become vigilant and change their behaviors, attackers have shown a remarkable ability to adapt their techniques in response. Still, a range of good security technologies attempt to account for some of these risks; the challenge is ensuring that they’re systematically enabled.
Security for the High-Risk User: Separate and Unequal [John Scott-Railton/Citizen Lab]
(via the grugq)