The future of trollbots

Hugh writes, "In a post-Tay world, where we've proved the Internet can train a bot to be a plausible shitposter, what's the future of politics, hate, and mob rule?

1: Everyone Can Have Their Own Twitter Mob

Right now, if you want to have someone attacked by a horde of angry strangers, you need to be a celebrity. That's a real problem on Twitter and Facebook both, with a few users in particular becoming well-known for abusing their power to send their fans after people with whom they disagree.

But remember, the Internet's about democratising power, and this is the latest frontier. With a trollbot and some planning, this power will soon be accessible to anyone.

There's a further twist, too: the bots will get better. Attacking someone on the Internet is a task eminently suited to deep learning. Give the bots a large corpus of starter insults and a win condition, and let them do what trolls do – find the most effective, most unpleasant ways to attack someone online.

No matter how impervious you think you are to abuse, a swarm of learning robots can probably find your weak spot.

On a milder but no less effective note, even a single bot can be devastating if handled carefully.

The rule of Internet debate is that, all else being equal, the poster with the most available time wins.

On its own, a bot probably can't argue convincingly enough to replace a human in, say, a Reddit thread on gender politics. But can it be used to produce some posts, flesh out rough comments, make short comments that require longer answers, or otherwise increase a poster's perceived available time tenfold?

Fear the automated sealion."

Rise Of The Trollbot [Hugh Hancock/Antipope]