Kids Pass is a service that offers discounts on family activities in the UK; its website suffers from several common — and serious — security flaws that could allow attackers to capture its users' passwords, endangering those users' data on other services where they have (unwisely) reused those same passwords.
Troy Hunt from Have I Been Pwned? is an advocate for "responsible disclosure," the practice of informing companies before revealing defects in their products and services and giving them time to do something about it.
But responsible disclosure is a two-way street: sometimes, companies don't live up to their end of the bargain. Kids Pass repeatedly ignored researchers who reached out to them, going so far as to block them in some instances.
Other companies have threatened legal retaliation against researchers who discovered defects in their products — in some cases, they've even had their critics arrested, thanks to overbroad laws like the Computer Fraud and Abuse Act and Section 1201 of the Digital Millennium Copyright Act.
Getting companies to respond meaningfully to critics is a hard problem — as hard a problem as discovering the defects in the first place. In an ideal world, every company would responsibly respond to security defects by quickly patching them and warning their customers — in the real world, companies are often negligent and even belligerent when they learn about their flaws.
That's why Google went public with a serious Windows vulnerability earlier this year — they'd repeatedly warned Microsoft about it, and Microsoft had done nothing, so Google decided to sound the alarm so that Microsoft customers could make informed decisions about trusting Microsoft products.
The ability of researchers to publish without fear of retaliation is the counterweight that disciplines companies into offering bug-hunters meaningful promises about how their discoveries will be handled. If companies can simply sue into silence any researcher who displeases them, the world loses one of the most compelling incentives for formulating a good bug-bounty program with promises of quick remediation and disclosure.
That's why it's so problematic that the World Wide Web Consortium proposed merely voluntary guidelines for when not to sue security researchers who found defects in implementations of the W3C's web DRM standard. The right way to ensure responsible disclosure is to defend the principle that it's always legal to tell the truth about defects in products, so that companies must use enticements, not threats, to secure the cooperation of researchers.
So now here you have multiple people finding the same risk which would allow for the easy exfiltration of data en masse, and this is just the guys we know about. Who else found it and didn't let them know? (And incidentally, in light of evidence that customer data was accessed by multiple unauthorised third parties, will there be any disclosure forthcoming to their existing subscriber base?) Who else actually scraped customer data out of their system? I've got no idea and they may not have either, but what I do know is that we've seen this particular class of risk exploited multiple times before.
For example, in 2010 114k records about iPad owners were extracted from AT&T using a direct object reference risk. The following year it was First State Superannuation in Australia who were found exposing customer data in the same way, and again the "exploit" was simply incrementing numbers in the URL. The same again that year with Citigroup in the UK having an identical risk — time and time again, we see this same simple flaw popping up.
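The flaw class Hunt describes, an insecure direct object reference (IDOR), can be sketched in a few lines of Python. This is a hypothetical illustration, not Kids Pass's actual code: all names and records here are invented, and the "fix" simply shows the missing ownership check that distinguishes a vulnerable handler from a safe one.

```python
# Hypothetical record store with sequential IDs, as in the breaches above.
RECORDS = {
    1001: {"owner": "alice", "email": "alice@example.com"},
    1002: {"owner": "bob", "email": "bob@example.com"},
    1003: {"owner": "carol", "email": "carol@example.com"},
}

def get_record_vulnerable(record_id, requesting_user):
    # Flawed handler: it trusts the ID taken from the URL and never checks
    # who owns the record, so any logged-in user can walk sequential IDs
    # and scrape every record in the system.
    return RECORDS.get(record_id)

def get_record_fixed(record_id, requesting_user):
    # Fixed handler: the record is returned only if it belongs to the
    # requester; in a real web app this would respond with a 403 or 404.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        return None
    return record

# The "exploit" is just incrementing the number in the URL:
scraped = [get_record_vulnerable(i, "alice") for i in range(1001, 1004)]
blocked = [get_record_fixed(i, "alice") for i in range(1001, 1004)]
```

Against the vulnerable handler, the enumeration loop recovers all three records; against the fixed one, the requester only ever sees her own.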