Once upon a time, online services differentiated themselves from competitors by promising not to lock their users in (memorably, Flickr offered an API that would let you export all your photos, your social graph, and your comments and other metadata to any other service with a similar API). But as slumbering antitrust regulators allowed wave after wave of mergers and acquisitions, tech became Big Tech, and walled gardens made a roaring comeback, with services quietly shelving the ability to move between them.
Policy circles are buzzing with renewed interest in antitrust: high-profile summits and important legal and technical proposals are signs of the times, and monopolism is increasingly in the news. "Big Tech" has become synonymous with "digital monopolist."
Since 2017, Google, Microsoft, Twitter and Facebook have been collaborating on a Data Transfer Project to define a standard, secure way for users to extract, delete, port or analyze their own data, and DTP has just released its white paper describing the scope of the project, which is more ambitious than I'd have believed possible even a year ago.
Significantly, the protocol described would allow competing new services to bootstrap themselves into existence by letting new users import their data from the existing giants. It's especially remarkable that Facebook, which established a legal precedent banning this practice, went along with it.
I think there are two compatible ways of thinking about this: first, the companies' senior management are anxious to forestall antitrust regulation. When Freedom From Facebook confronts Sheryl Sandberg at the World Economic Forum in front of an audience of competition regulators, she can wave the Data Transfer Project around as proof that Facebook doesn't need to be forced to divest itself of its ancillary businesses or put itself under direct regulation of a special master who will keep it from repeating its past monopolistic sins.
Secondly, smart tech employees, who are increasingly flexing their muscles and scaring the shit out of their bosses, know that their companies aren't loyal to them, and that chances are they're going to someday find themselves involved in upstart companies challenging their current employers. When that day comes, they want to have the interoperability measures in place that will let them do the stuff their intransigent managers won't let them do inside the increasingly conservative corridors of Big Tech, which has become so horizontally integrated that almost anything it does will challenge some pocket of the business, whose internal stakeholders will make their bosses' lives a living hell if they try it.
Also noteworthy is Apple's absence from this coalition. Apple is the original roach motel, whose users check in but can't check out. From proprietary dongles to proprietary file formats to DRM on apps, Apple is second only to Sony in its deep culture of lock-in. The company may happily run ad campaigns exhorting users to "Switch" to its products, offering the ability to read files created with rivals' products, but when you save your files again, Apple will try to get you to do so in proprietary formats that no one else can read.
It's true that Apple doesn't have a social network that your data lives in, but Apple users do generate a hell of a lot of data that would be useful to extract from Apple's silos and move to rival platforms. I'm not holding my breath, though.
Our vision for this project is that it will enable a connection between any two public-facing product interfaces for importing and exporting data directly. This is especially important for users in emerging markets, or on slow or metered connections, as our project does not require a user to upload and download the data over what may be low bandwidth connections and at potentially significant personal expense.
The DTP was developed to test concepts and feasibility of transferring specific types of user data between online services. The recently published international standard on Cloud Computing—Interoperability and Portability (ISO/IEC 19941:2017)—notes that "there is no single way of handling portability issues within cloud computing." Each portability scenario needs separate analysis of the systems, data and human aspects (involving syntactic, semantic and policy facets), both for the sending cloud service as well as the receiving one.
Moreover, for each portability scenario, issues of identity, security and privacy must be addressed by both the initial cloud service and the receiving one. With this community-driven effort, the DTP Contributors expect to learn more about the development of tools and techniques to facilitate portability of specific, desirable user data between providers. Establishing an open source ecosystem for the creation of these tools is an important step for understanding and enabling data portability for our users, within a community-driven, inclusive, multi-stakeholder process. This will also help prioritize those data portability scenarios that the community and users deem in-demand, in a pragmatic, real-world setting.
The DTP is powered by an ecosystem of adapters (Adapters) that convert a range of proprietary formats into a small number of canonical formats (Data Models) useful for transferring data. This allows data transfer between any two providers using the provider's existing authorization mechanism, and allows each provider to maintain control over the security of their service. This also adds to the sustainability of the ecosystem, since companies can attract new customers, or build a user base for new products, by supporting and maintaining the ability to easily import and export a user's data.
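The Adapter/Data Model architecture the white paper describes can be sketched roughly as follows. This is a minimal illustration, not the DTP's actual API (the real project is implemented in Java); every class, field, and service name here is hypothetical. The key idea it shows is that each provider only has to translate between its own proprietary format and a shared canonical model, so any exporter can feed any importer directly, with no user-side download/upload step:

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical canonical Data Model: a small, provider-neutral
# representation that every Adapter converts to and from.
@dataclass
class PhotoModel:
    title: str
    url: str
    album: str

class Exporter(Protocol):
    """Provider-side adapter that reads data out of a service."""
    def export_photos(self) -> list[PhotoModel]: ...

class Importer(Protocol):
    """Provider-side adapter that writes data into a service."""
    def import_photos(self, photos: list[PhotoModel]) -> None: ...

class ServiceAExporter:
    """Converts hypothetical Service A's proprietary records
    into the canonical model."""
    def __init__(self, records: list[dict]):
        # In practice these would be fetched via the provider's own
        # API, using its existing authorization mechanism.
        self._records = records

    def export_photos(self) -> list[PhotoModel]:
        return [
            PhotoModel(title=r["caption"], url=r["src"], album=r.get("set", ""))
            for r in self._records
        ]

class ServiceBImporter:
    """Writes canonical models into hypothetical Service B,
    whatever its internal format happens to be."""
    def __init__(self):
        self.stored: list[dict] = []

    def import_photos(self, photos: list[PhotoModel]) -> None:
        for p in photos:
            self.stored.append(
                {"name": p.title, "link": p.url, "folder": p.album}
            )

def transfer(exporter: Exporter, importer: Importer) -> int:
    """Direct service-to-service transfer through the canonical model:
    any exporter can be paired with any importer."""
    photos = exporter.export_photos()
    importer.import_photos(photos)
    return len(photos)

if __name__ == "__main__":
    src = ServiceAExporter(
        [{"caption": "Sunset", "src": "https://a.example/1", "set": "Trips"}]
    )
    dst = ServiceBImporter()
    print(transfer(src, dst))
```

Because each new provider only writes one exporter and one importer against the canonical models, supporting N services takes 2N adapters rather than the N² pairwise converters a direct-translation scheme would need.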
Data Transfer Project Overview and Fundamentals [Data Transfer Project]
Microsoft, Google, Facebook, Twitter Announce "Data Transfer Project" [Catalin Cimpanu/Bleeping Computer]