Why haven't cyberinsurers exerted more pressure on companies to be better at security?

For decades, people (including me) have predicted that cyberinsurers might be a way to get companies to take security seriously. After all, insurers have to live in the real world (it's why terrorism insurance is cheap: terrorism is not a meaningful risk in America), and in the real world, poor security practices destroy people's lives, all the time, in wholesale quantities that beggar the imagination.

But the empirical data shows that insurers routinely write policies for companies that do incredibly stupid shit, and don't offer meaningful discounts to companies that have good security policies (you don't even have to keep your patchlevel current to maintain your insurance!). Instead, insurers focus on "post-breach services" that help companies get back to work after the breaches have taken place.


In a forthcoming paper in IEEE Security & Privacy, two computer scientists (Oxford U, U of Tulsa) investigate this question, documenting the dismal state of insurers' requirements for cyberinsurance and the ease of making claims, even for incidents that were utterly preventable.


One possibility the authors don't delve into: cyberinsurance is cheap because the penalties for breaches are laughably light. While it's true that some incidents (e.g. ransomware) have a direct operational cost to the company, the vast majority of incidents involve data breaches that affect the company's customers or stakeholders.


The lack of a statutory damages regime for breaches means that customers whose data is compromised have to show a judge receipts for the harms they've suffered, and those harms are hard to quantify (many of them may not be incurred for years to come). That's why Home Depot paid literal pennies per customer to settle claims after it lost 50,000,000 customers' data.
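
To put "literal pennies" in rough numbers, here's a back-of-the-envelope sketch. The $13 million figure is the reported cash fund in Home Depot's 2016 consumer settlement (an approximation, not an official accounting); the 50,000,000 count is the one cited above.

    # Rough per-customer payout from the Home Depot breach settlement.
    # Both inputs are approximations: the ~$13M reported consumer settlement
    # fund, divided across the ~50M affected customers cited in this post.
    settlement_fund = 13_000_000       # USD
    affected_customers = 50_000_000

    payout_per_customer = settlement_fund / affected_customers
    print(f"~${payout_per_customer:.2f} per affected customer")  # roughly $0.26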

And since all companies mishandle user data, and since it's impossible to tell a company with responsible IT practices from a reckless one until it's too late, we have a Market for Lemons in security.


It seems to me that if you want insurers to put more constraints on the reckless conduct of their customers, you could increase the consequences of that recklessness: if companies had to pay statutory damages to people whose data they lost, insurers would either exclude those losses, impose a system of rigorous security measures and audits, or charge so much more for insurance that companies couldn't afford it, leaving their directors and officers exposed to liability. A couple of high-profile bankruptcies of board members who lost their shirts in lawsuits over company security breaches would certainly get the attention of corporate boards.


Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)


The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policyholders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart's law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Searchlight effects may drive the scores towards being based on what can be measured, not what is important.

Systemic risk has a number of possible futures. Organisations may have to accept liability as insurers exclude the risk. Governments might step in to offer reinsurance, though we caution against doing so until an under-supply of cyber insurance is observed. Or insurers might show leadership in encouraging diversity in technology and service provision to reduce systemic risk.


Does insurance have a future in governing cybersecurity? [Daniel W. Woods and Tyler Moore/IEEE Security & Privacy]


(via Schneier)