This week, we learned that the notorious Israeli cyber-arms dealer NSO Group had figured out how to hijack your iPhone or Android phone by placing a simple WhatsApp call, an attack that would work even if you didn't answer the call.
Apple has received a lot of praise for the security of its iOS devices, which are said to be so secure in part because of Apple's walled-garden strategy: it prevents iPhone owners from running third-party software unless it comes through Apple's App Store, and it limits who can repair Apple devices and whether they can use third-party replacement parts. All of this control is said to produce a much smaller attack surface, with fewer bugs, which are corrected more quickly.
However, several cyber-arms dealers are in the business of selling exploits that hijack control over Apple's products, from Cellebrite to Grayshift to NSO Group. These companies keep the bugs they exploit secret, making it harder for Apple to patch them.
Meanwhile, security researchers who want to build tools that perform forensics on Apple products -- to determine whether they have been compromised by one of these cyber-weapons -- are out of luck: Apple blocks forensic apps from the App Store and kicks out the few that sneak in. That means that in order to test an Apple device, the user first has to jailbreak it -- and jailbreaking Apple devices has gotten harder and harder, as Apple defends its own security (against competing app stores) while weakening its users' security.
Fundamentally, Apple's security model treats the owners of iOS devices as potential attackers. It goes to enormous lengths to prevent anyone who owns an iOS device from ever learning exactly how it works, so that some processes can run in areas that users can't inspect or control (those processes stop users from using their iOS devices in ways that benefit them at the expense of Apple's shareholders). If an attacker manages to hijack an iPhone, the attacker's code can run in this special mode that is supposed to be reserved for Apple's own user-controlling programs, and take advantage of all the anti-user countermeasures Apple has built to protect itself.
This isn't unique to Apple: it's a trait shared by any device that is designed to control its owner, from inkjet printers to Teslas. If your device is designed to actively prevent you from knowing what it's doing and from reconfiguring it to do your bidding, then "bad guys" who take over the device will be able to attack you while you can neither see what they're doing nor reconfigure your device to kick them out.
Several iOS security researchers who spoke with Motherboard agree that the iPhone is too locked down for its own good. That makes it very hard for even experts to tell if a device has been compromised without jailbreaking it first, a feat that is not feasible for most users anymore.
“The bad guys will find a way in one way or another. Shouldn't we enable the good guys to do their job?” said Zuk Avraham, a security researcher who studies iOS attacks, and who is the founder of ZecOps and Zimperium.
Avraham said that in the last few months he's seen a lot of targeted attacks against iPhone users, so many that it is "mind-blowing." He declined to provide more evidence or details about the attacks, however.
Jonathan Levin, a researcher who has written books about iOS and macOS internals and security and provides training on iPhone security, said that in his opinion, so few iOS zero-days have been caught because they are worth a lot of money, and thus more likely to be used in targeted attacks.
It’s Almost Impossible to Tell if Your iPhone Has Been Hacked [Lorenzo Franceschi-Bicchierai/Motherboard]