Scrutinizing mobile apps: privacy violations, bloat, and poor security


Troy Hunt installed the HTTP proxy Fiddler on his network and used it to examine the way that iPhone apps performed. What he discovered was a series of shockingly poor implementation decisions that massively bloat the bandwidth needed to load and use apps (important for users whose mobile phone plans contain strict bandwidth caps); poor password security (important for mobile users who roam to untrusted WiFi networks); and aggressive, over-the-top surveillance of your activities by apps that harvest every click, as well as your location, and send them to third parties.
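
Hunt used Fiddler, but any intercepting HTTP proxy can reproduce this kind of audit. As a rough sketch (not Hunt's actual setup), here is a minimal mitmproxy addon that logs every response a proxied phone receives, flagging anything fetched in the clear and any payload over an arbitrary size threshold:

```python
# audit_traffic.py -- run with: mitmdump -s audit_traffic.py
# Rough sketch: log each response the proxied phone receives, flag anything
# fetched over plain http:// and any payload bigger than an arbitrary threshold.
from mitmproxy import http

LARGE_RESPONSE_BYTES = 500 * 1024  # arbitrary "bloat" threshold; tune to taste


class TrafficAudit:
    def response(self, flow: http.HTTPFlow) -> None:
        size = len(flow.response.raw_content or b"")
        host = flow.request.pretty_host
        path = flow.request.path
        if flow.request.scheme == "http":
            print(f"[CLEARTEXT] {host}{path}")
        if size > LARGE_RESPONSE_BYTES:
            print(f"[BLOAT] {size // 1024} KiB from {host}{path}")


addons = [TrafficAudit()]
```

Point the phone's WiFi proxy settings at the machine running mitmdump and watch what scrolls past.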

I doubt that these issues are unique to iOS devices. Rather, they represent facts in evidence about the limits of software "curation" to guarantee robust, safe, secure software. It's vanishingly unlikely that any app store with hundreds of thousands (or millions) of apps will be able to subject them to the kind of scrutiny that Hunt engages in here. Combine that with the opacity of the platform, which makes it hard for independent auditors (and users!) to discover what their mobile devices are doing and how they're doing it, and you've got a recipe for a mobile ecosystem that subjects users to high bandwidth fees, invasions of privacy, and compromise of their passwords.

Expert curation of code is a good step towards secure mobile computing, but it's insufficient to keep users safe. Unless platforms are designed with the objective of allowing scrutiny of their inner workings -- something that is at odds with business-models that rely upon establishing exclusive rights to approve and distribute software for a platform -- then they should be assumed to be running apps that are riddled with these sorts of defects.

Suddenly monetisation with powerful data starts to make more sense.

But this is no different to a tracking cookie on a website, right? Well, yes and no. Firstly, tracking cookies can be disabled. If you don’t like ‘em, turn ‘em off. Not so the iOS app, as everything is hidden under the covers. Actually, it works in much the same way as a classic app that gets installed on any OS, although in the desktop world we’ve become accustomed to being asked if we’re happy to share our activities “for product improvement purposes”.

These privacy issues simply come down to this: what does the user expect? Do they expect to be tracked when browsing a cook book installed on their local device? And do they expect this activity to be cross-referenceable with the use of other apparently unrelated apps? I highly doubt it, and therein lies the problem.


  1. On the bloat front, this sounds like the old PC issue of “the killer app”: new phones come out every year, so why spend extra resources on efficient code when the next processor will have twice the power!  I know it’s a lazy/fast way to program, but it probably has a lot of merit even in the mobile app market.

  2. Wasn’t third party curation and rating of applications a huge part of Edelman’s Jump 225 trilogy?  Surely there’s a market somewhere for the Amazon of Apps.

  3. I can’t say I’m surprised — downloading huge images for tiny screens, tracking every tap, passwords in the clear.  This kind of crap goes on all the time on normal websites, so it’s about what I expected on mobile apps.  I’d love to have the standard JavaScript security model for apps (i.e., the app can only talk to its own domain, not third parties).

    Though I am surprised that Facebook and Qantas are sending passwords unencrypted.  I mostly attribute these kinds of problems to inexperienced developers, lack of rigorous testing, and the general fly-by-night culture of the internet.  But Facebook and Qantas should know better — and they certainly have the resources to think through the privacy and security issues, do proper testing, and avoid these problems.

    There are probably too many apps to test them all.  But Apple and other platform vendors should start testing at least the most popular apps and making sure this kind of stuff doesn’t happen.  Or maybe somebody like the author could set up a certification service, and award a seal of approval for apps that were using bandwidth carefully, protecting passwords, clearly acknowledging data sharing, etc.

    1. The only insecure traffic sent or received by the Facebook iPhone app is photo CDN traffic. All login/activity traffic is done over https.
      I’m an Internet traffic analyst and have done similar tests as Troy but with different software.

      Everything else he says is pretty much spot on, apps are farked :)
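
      Separating static CDN fetches from login and session traffic is easy to mechanise once you have a proxy log: the scheme plus a few path keywords gets you most of the way. A toy sketch, with made-up hostnames and deliberately crude heuristics:

      ```python
      # Rough heuristic over (scheme, host, path) records pulled from a proxy log.
      SENSITIVE_HINTS = ("login", "auth", "session", "password", "token")

      def audit_request(scheme: str, host: str, path: str) -> str:
          """Classify one captured request by its scheme and a few path keywords."""
          sensitive = any(hint in path.lower() for hint in SENSITIVE_HINTS)
          if scheme == "http" and sensitive:
              return f"BAD  {host}{path}  (credentials or session data in the clear)"
          if scheme == "http":
              return f"MEH  {host}{path}  (cleartext, probably static content)"
          return f"OK   {host}{path}  (encrypted)"

      if __name__ == "__main__":
          print(audit_request("https", "www.example.com", "/login.php"))
          print(audit_request("http", "photos-cdn.example.net", "/img/123.jpg"))
      ```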

  4. I have noticed a few apps that engage in this kind of shoddy networked behaviour, but only through anecdotal observations. 

    I remember the early versions of Scrabble needed a CONSTANT connection to the internet, and drained the battery commensurately. They finally fixed that about six months ago, but I am not surprised that this is endemic in the App Store.

  5. It would appear that, in addition to engineering improvements, burning down the “Flurry” offices with all personnel  and data inside would be a necessary step to secure mobile computing.

  6. I’ve actually gotten very few apps for my Android, because even simple things will require privileges  like full internet access, access to my full contact list, access to my location.  Why would, say, a calculator need that?  Because it’s not primarily a calculator.  It’s primarily a data aggregator and repackager, with a calculator thrown in on the side. 

    What’s worse, the phone came with a bunch of apps on it, installed by Verizon, which can’t be removed.  It’s bad enough that I’m ready to switch back to my simple only-a-phone phone, and my PDA.

    I would love to see an independent consumer group set up to certify apps/phones for these kinds of issues.
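
    Checking what an APK actually asks for is at least scriptable. A rough sketch, assuming the Android SDK’s aapt tool is on the PATH (its exact output format varies by SDK version, so the parsing here is deliberately loose):

    ```python
    # list_permissions.py -- dump the permissions an APK requests using the
    # Android SDK's aapt tool (assumed to be installed and on PATH).
    import subprocess
    import sys

    def requested_permissions(apk_path: str) -> list[str]:
        out = subprocess.run(
            ["aapt", "dump", "permissions", apk_path],
            capture_output=True, text=True, check=True,
        ).stdout
        # aapt prints one line per requested permission; keep anything that
        # mentions "permission" and let the reader eyeball the rest.
        return [line.strip() for line in out.splitlines() if "permission" in line]

    if __name__ == "__main__":
        for perm in requested_permissions(sys.argv[1]):
            print(perm)  # e.g. lines naming android.permission.INTERNET
    ```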

  7. Is there anything preventing an app creator or company from opening their own code to public scrutiny?

  8. With regards to the bandwidth issue, I say we take this to the logical next step:  do a packet capture of our smartphone data, figure out what data we didn’t actually want to send or receive, and print it all out on reams of paper.  Then, mail it back to the wireless companies and ask for a refund of the data in our accounts.
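
    The first half of that is doable, at least: with a capture in hand you can tally bytes per destination and see where the data actually went. A quick sketch using scapy, assuming a phone_capture.pcap recorded while the phone was on the local WiFi:

    ```python
    # bytes_per_destination.py -- tally captured bytes by destination IP with scapy.
    from collections import Counter

    from scapy.all import IP, rdpcap

    packets = rdpcap("phone_capture.pcap")  # capture taken on the WiFi the phone used
    bytes_per_dest = Counter()
    for pkt in packets:
        if IP in pkt:
            bytes_per_dest[pkt[IP].dst] += len(pkt)

    # Top ten destinations by volume -- a first cut at "where did my data cap go?"
    for dst, total in bytes_per_dest.most_common(10):
        print(f"{dst:>15}  {total / 1024:8.1f} KiB")
    ```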

  9. Expert curation of code is a good step towards secure mobile computing, but it’s insufficient to keep users safe. Unless platforms are designed with the objective of allowing scrutiny of their inner workings — something that is at odds with business-models that rely upon establishing exclusive rights to approve and distribute software for a platform — then they should be assumed to be running apps that are riddled with these sorts of defects.

    So (if I understand this right), by this logic the closed, curated platform (let’s say iOS) should be far less secure and far more riddled with malware than the open source, uncurated ecosystem of, say, Android. …and yet this does not seem to be the case. In fact, the reverse seems to be true.

    Any explanations as to why this is so? I suppose the rote answer is that although for Android the platform is open source, the apps are not.

    It’s a bit of a stretch, though, as the percentage of people who have both the ability and desire to manually audit the code of all of their applications is in fact vanishingly small. …and if you don’t audit it yourself, why then you’re right back in the same boat of having to trust someone else.

  10. I’m not convinced that the closed nature of this market really makes that significant a difference. Any tech-savvy person can monitor the network traffic. Mr. Hunt evidently did this — I’ve done it numerous times. You can identify and publicly shame applications that are oversharing.

    There is, of course, the risk that app authors will start using encrypted channels. No doubt this will make the work harder. This is not a risk unique to closed source — I could send encrypted private data from an Android app just as easily. Once you suspected that I was doing something nefarious, it’d be easier to confirm on an open platform.

    But the “many eyes make bugs shallow” tenet of the Open Source world turns out to be only partially true. It requires people caring. I’ve been involved with F/OSS projects where egregious security bugs escaped notice for years, only because nobody bothered to look. When attackers found the issue, it was brought to the attention of the community, and fixed. This is often how it works in closed environments too.

    I’m not going to argue that a closed system is as secure as an open one. But simply making something open doesn’t offer magical guarantees that it’s going to be better. And if we can’t get people concerned about posting their intimate business up on Facebook, I suspect we’ll have a hard time getting them worked up about this.
