
Botcheck.me: a plugin that predicts whether a Twitter user is a bot

Ash Bhat and Rohan Phadte are 20-year-old UC Berkeley students who turned a machine learning class assignment into a browser plugin that tries to guess whether a given Twitter profile belongs to a bot or a human, assigning a probability score that accounts for the possibility that a bot has human pilots who can take over at key junctures.
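They haven't released their model or training data, so the sketch below only shows the general shape of this kind of classifier: pull a handful of behavioral features from a profile, fit a model on labeled examples, and report a probability rather than a hard verdict. The feature names, toy data, and choice of logistic regression here are all assumptions for illustration, not their actual method.

```python
# A minimal sketch, assuming invented features and scikit-learn's logistic
# regression; Botcheck.me's actual features, data, and model are unpublished.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature vectors per account: [tweets_per_day, retweet_ratio,
# followers_per_following, account_age_days]. Labels: 1 = bot, 0 = human.
X_train = np.array([
    [250.0, 0.95, 0.1, 30],    # bot-like: huge volume, mostly retweets, new account
    [180.0, 0.90, 0.2, 45],
    [8.0,   0.30, 1.5, 2000],  # human-like: modest volume, older account
    [3.0,   0.10, 0.8, 3500],
])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Reporting a probability instead of a hard label leaves room for hybrid
# accounts where a human pilot occasionally takes the controls.
unknown_account = np.array([[120.0, 0.85, 0.3, 60]])
print("P(bot) =", model.predict_proba(unknown_account)[0, 1])
```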


They've published a pair of papers giving a high-level overview of their methodology, but I can't find their training dataset or a detailed paper of the sort that would allow for technical scrutiny of their assumptions.

That said, I installed the plugin this morning and spent 20 minutes trying out profiles I knew to be bots and profiles I knew to be human; it didn't make any mistakes, though that's a very unscientific test.

The pair describe their work as doing what Twitter won't: helping Twitter users distinguish propaganda bots from real people. They make the good point that many of the possible deficiencies in their approach stem from their lack of full access to Twitter's data; the company, which has that access, could refine their work and make it more accurate.


When Swamp Hammer refused to talk on the phone, I ducked into DMs to chat further. When I asked why, they provided the answer to a question I hadn't asked: "I TRUST NO ONE AFTER LEARNING THE GOVT CAN'T BE TRUSTED & MEDIA IS A CLOWN ACT!" When I relayed Swamp Hammer's signs of life to Bhat, he said it was possible the account was run by a human, but the ability to DM doesn't prove it. It could easily be one of many accounts using a tool like TweetDeck to allow a human behind the bot to spring to life when summoned by a DM. Indeed, the Indiana University researchers opted for a zero to 100 probability scale for their Botometer because bot activity often isn't a binary "bot" or "not bot" distinction. "Botness is a spectrum," explains Clayton Davis. "Someone could be a human, but use a scheduler to post tweets, so they're kind of a bot."

Bhat and Phadte prefer the clarity of labeling an account one or the other. If someone disagrees, they can look in aggregate at where their model is messing up, and improve the accuracy of the classifier. "Those angry users end up being valuable because if they weren't angry and vocal, your model would never learn," Bhat says.
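The feedback loop Bhat describes amounts to folding disputed accounts back into the training set as human-labeled examples and refitting. A hedged sketch, with made-up features and data standing in for whatever Botcheck.me actually uses:

```python
# A minimal sketch of retraining on user disputes; the function, features,
# and data are invented for illustration, not Botcheck.me's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_with_disputes(X_train, y_train, disputed_features):
    """Append accounts whose owners dispute a bot label as human (0) examples, then refit."""
    X_new = np.vstack([X_train, disputed_features])
    y_new = np.concatenate([y_train, np.zeros(len(disputed_features))])
    return LogisticRegression().fit(X_new, y_new)

# Toy data: [tweets_per_day, retweet_ratio] per account; 1 = bot, 0 = human.
X = np.array([[250.0, 0.95], [200.0, 0.90], [3.0, 0.10], [5.0, 0.20]])
y = np.array([1, 1, 0, 0])
disputed = np.array([[90.0, 0.60], [110.0, 0.70]])  # flagged accounts whose owners pushed back

model = retrain_with_disputes(X, y, disputed)
print("P(bot) after retraining:", model.predict_proba([[100.0, 0.65]])[0, 1])
```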

Tell that to the people accused of bothood. Since an Indiana University lab launched a Botometer to the public in 2014 for the same purpose, hundreds of people have written the researchers up in arms about being wrongly classified. "Some people take it really personally," says Davis, writing things like, "I can't believe you called me a bot! I've been on Twitter since before you were born!" Meanwhile, Gonzalez, Bhat's Berkeley professor, expects a different response if Botcheck.me's model is off. If it's wrong, he says, "their extension wouldn't do very well. People would kind of reject it." Maybe it's better to be hated than ignored.


THE COLLEGE KIDS DOING WHAT TWITTER WON'T
[Lauren Smiley/Wired]
