Techno-social engineering is freaking insiders out
I’m guessing there aren’t many of us who are terribly concerned that Skynet will unleash its army of Terminator robots on us. But to hear tech visionaries like Bill Gates and Elon Musk tell it, there’s probably good reason to worry that computers will one day become too smart for our own good.
We already know the Internet is segmenting us into distinct groups based on economic, social, educational, geographic, political and behavioral classifiers, among others. Internet titans rely on these classifiers to “filter” the world for us – which really means they are deciding what stories and opinions we read, which ads and offers we see, and the type of opportunities we receive. They also decide for us what we don’t see.
But they are only just warming up; companies like Facebook and Google are racing to develop artificial intelligence technologies to expand their “deep learning” capabilities. These new technologies will be used to mine our data to more accurately assess who we are and what we want, and – to hear the Internet giants tell it – deliver elegantly tailored experiences that help us better understand and interact with the world around us.
There would be nothing inherently wrong with this if we could be absolutely certain the companies that control this technology will act only in our best interests. But if not, we could all be susceptible to manipulation by powerful systems we couldn’t possibly understand. Some academics have even raised the specter of techno-social engineering and questioned whether we are moving into an age in which “humans become machine-like and pervasively programmable.”
There is no shortage of Silicon Valley insiders who share these concerns. That’s right. Some of the very coders, network architects, security experts and entrepreneurs who are driving big data to new heights understand better than any of us the full consequences of the trade-offs we all make when we provide our data to companies that provide “free” Internet services. And quite a few of these insiders are quietly freaking out.
One engineer told me he was disturbed by the ease with which he and his colleagues casually mine our data, almost on a whim, to see what patterns they might discover, which in turn could give rise to algorithms and models that exploit our personalities and behaviors in ways not yet imagined. “We do it because we can. And because it’s a competitive advantage,” he explained.
Another told me that one big Internet company he worked for had backed up and stored every single piece of data you and I ever left on its servers. The company didn’t dare delete anything because executives and engineers couldn’t possibly imagine all the uses they might discover for our data in the future. We are leaving breadcrumbs for these companies to exploit next year, or perhaps next decade.
Several contacts posed the same “what if?” to me: what might happen when market forces pressure these profit-driven companies to develop ever more sophisticated algorithms, which could in turn underpin new services not necessarily built with users’ best interests in mind?
Here are a few scenarios they came up with:
What if one of the big social networks started offering background checks that predicted and ranked the suitability of job applicants based on each candidate’s data set – regardless of whether the information was “public” or not?
Many of us are starting to use wearable computers on our wrists. What if your insurance company could marry your biometric data with your health history and genetic profile and was able to, for example, predict you were 10 times more likely than average to suffer a heart attack? Might you one day be required by your insurer to live a certain lifestyle in order to minimize its financial risk?
Another contact, who did classified work for one government agency (he couldn’t possibly say which one), offered a different but equally chilling twist. Sooner or later, he predicted, we will all come to fully understand that we won’t be able to say, search, browse, buy, like, watch or listen to anything without our actions and thoughts being sliced, diced, and churned through powerful analytical systems. And then what? Will we, creeped out and perhaps a little afraid, start to second-guess our every move? Will we self-censor our speech and behavior to avoid being labeled?
Scott Allan Morrison was a journalist for almost twenty years, covering politics, business, and technology in Mexico, Canada, and the United States. Morrison arrived in Silicon Valley as a reporter for the Financial Times during the darkest days of the dot-com crash. He later covered the Web 2.0 boom for Dow Jones Newswires and the Wall Street Journal. Over the course of a decade, Morrison covered most of the world’s top tech companies and chronicled many of Silicon Valley’s greatest stories, including the rise of Internet insecurity and the explosion of social media. Before setting his sights on journalism, he spent four years teaching English and traveling in Southeast Asia. He speaks fluent Spanish and very rusty Mandarin. He lives in Northern California with his wife and his hockey sticks.