Joi Ito has published the "1.0" version of his October essay, Resisting Reduction, which makes major advances on the earlier draft. He's soliciting revisions and comments here. Here's what I wrote about it then:
Joi Ito's Resisting Reduction manifesto rejects the idea of reducing the world to a series of computable relationships that will eventually be overtaken by our ability to manipulate them with computers ("the Singularity"), and instead urges us to view the world as full of irreducible complexities and "to design systems that participate as responsible, aware and robust elements of even more complex systems."
Ito says that Singularity thinking is at the root of unrestrained pursuit of profit that tramples human flourishing, and he says that we should focus on "vigor and health rather than scale or power" in measuring our systems and deciding which ones to preserve and which ones to change.
A lot of this stuff is abstract and hard to lay hands on, but I feel like he's onto something. I think that one way of slicing this problem is to divide interventions into "things that fail well" and "things that fail badly." When a system preserves the ability to audit its function, to discuss those audits, and to band together to act on them, it fails well. By contrast, the TSA's security practices are riven with secrecy, shot through with gag-orders and confidentiality, and largely unaccountable to the people they affect. That makes their problems hard to describe and hard to fix.
That doesn't mean that a transparent and accountable system would solve the hard problems of air terrorism, but if such a solution could be found, transparency and accountability would make it much easier to implement.
Another example: free vs. proprietary software. Software being free doesn't mean that anyone will ever look at its source code and try to fix it, but if they do, we can all check their work. Whereas proprietary software (and super-proprietary software, like DRM, whose defects are a felony to disclose) fails very badly: mistakes in the code fester until they have been so widely exploited that they can no longer be denied. Heartbleed proved that being open is no guarantee of being scrutinized and improved, but no one seriously argues that Heartbleed would have gone better if it had been illegal to look at the source code for OpenSSL.
Resisting Reduction [Joi Ito]
(via Beyond the Beyond)