While the theme was how "we" could bring about a better future by making smart choices about the intersection of technology and the economy, I believe the only way "we" can bring about a better future is through technological humanism.
I started with a personal story.
My parents moved from Hong Kong to the United Kingdom in the 1970s as part of the diaspora, "sent away" by their parents so they, and their children, could have a better education and a better life.
I was born in 1979, the year The Usborne Book of the Future was published. The book told an optimistic story of how humanity would solve problems and thrive through technology. I'd escape into it. When you're worried about not fitting in, the promise of a future where everyone belongs is alluring.
I was also privileged: a child of academics, my dad brought computers home from his university to share with his family, and I'd grow up with them.
One day, he brought home a giant piece of paper, bigger than me: a color CAD plot of the Space Shuttle. It was amazing, and he was amazing for snipping out a symbol of a better future and bringing it into our living room.
I would have been six years old.
And like many others, I grew up with paintings of space colonies commissioned by NASA.
They showed how technology would save us.
How technology and science would make everything better.
They showed a future where everyone would belong, and nobody would be left out. Who wouldn't want that?
(Later, I'd look back with experience and wisdom and see everything that wasn't in these paintings of the future, the missing people and cultures.)
And as I grew up, a consensus formed: computing would be integral in bringing about fairness, equity and prosperity for everyone.
When I discovered BBSes in the early 1990s, I was hooked.
My dad's university dial-up SLIP internet account convinced me.
John Perry Barlow's Declaration of the Independence of Cyberspace made me a believer: on the Internet, our appearance, our gender, our meat wouldn't matter. Our insides would count more than our outsides.
I also grew to understand and accept that we're a tool-making, tool-using species. That technology is just one of the ways we solve problems, and solving problems is the only way to create a better world for ourselves and those who come after us.
I think one of the stories technologists told each other was that networked computing would be an unqualified accelerator, an amplifier of the arc of history bending toward justice.
We got part of what we wanted: networked computing is practically ubiquitous now.
Thanks to a complex intersection of Moore's law, neoliberal global capitalism, the military industrial complex, colonialism and some really hard work and thinking, mobile, networked computing got cheap enough.
"Enough" people use networked computers now.
But now that that's happened, what networked computers enable doesn't just affect hobbyists and fans. It affects others, too. Entire societies. Nations. Everyone.
And yet, everything isn't better. Yes, many have been lifted from poverty. But we're also not living in the drastically improved society I was promised as a child.
When I gave my talk to an elite (exclusive, and problematic) group of technologists, I suspected many would agree everything wasn't going quite as well as had been thought or wished.
Because inequality is rising, not decreasing.
Because of rising hate and abuse, and decreasing trust.
Because of algorithmic cruelty.
Because we need to do better.
I think we — collectively, and definitely not just technologists on their own — need to figure out the societies we want first. The future we want. Have the hard conversations, better understand the compromises, be forced to make clearer priorities and decisions. Then we can figure out the technology, the tools, that can help get us there.
But what can technologists – and everyone else – do now?
As technologists, we must question our gods: the laws, thinking and habits that we assume true and guide our work.
I've got two examples:
The value of a telecommunications network is proportional to the square of the number of connected users of the system – Metcalfe's Law
Applying Metcalfe's law to social networks is a sort of folk understanding that the more users your social network has, the more valuable it is.
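The quadratic claim has a simple combinatorial intuition: among n users there are n(n−1)/2 possible pairwise links, which grows roughly as n². A minimal sketch (the function name is my own, for illustration):

```python
def potential_links(n):
    """Number of possible pairwise connections among n users: n(n-1)/2.

    This is the combinatorial intuition behind Metcalfe's "value grows
    as n squared" claim -- it counts links, not value.
    """
    return n * (n - 1) // 2

# Ten times the users yields roughly a hundred times the possible links.
for n in (10, 100, 1000):
    print(n, potential_links(n))
```

Note that this counts *possible* connections, not actual ones, and certainly not value; that gap is exactly where the dumb questions below come in.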
The problem is, I like to ask dumb questions. I have two dumb questions about the law:
First: why do we call it a law? Has it been proven true?
Second: what do we mean, exactly, by value?
In 2013, Metcalfe attempted to prove his law in response to critics. He did so (it's still not a law) by using Facebook's revenue as his definition of value.
But revenue is just one way to measure value. It's a proxy, and doesn't represent everything that individuals, groups or whole societies might value.
For example: does revenue reflect the potential or actual harm done by allowing (intentionally or inadvertently) foreign interference in a democratic nation state's elections? Or the harm done to society by allowing (and signalling the acceptability of) advertising that breaks housing discrimination laws, a year after being notified?
Maybe there are many other things that networks do that we might want to value. Or treat as a cost!
Over the last couple hundred years, dominant cultures have done a terrible job of considering negative externalities, the things that we don't see or choose not to value.
It may already be too late for us to effectively deal with climate change.
We must be more deliberate about what we choose to value as a society.
Perhaps it's time for us to retire the belief — because it's just a belief — that networks with large numbers of users are, absent any other qualities, an unqualified good.
Read the next example in the rest of my essay, No one's coming. It's up to us.
(An excerpt of an essay adapted from a talk given at #foocamp in San Francisco on November 4th, 2017. Read the full essay, No one's coming. It's up to us on Medium.)