Botometer is a simple single-serving website that reports whether a given Twitter account talks like a bot. It seems quite accurate, analyzing not just an account's content but also its "sentiment" and networking characteristics.
My account, @beschizza, has a "green" score of 38%, so I have passed my Twitter Voight-Kampff test. But @boingboing scores 53%, perhaps reflecting its mix of human chatter and automated links to posts. (Trump also scores 53%, oddly enough.)
How should I interpret a bot score?
Roughly speaking, one can interpret a bot score as a likelihood that the user is a bot. As such, bot scores closer to the extreme values of 0% and 100% are more confident assertions of the account's bot-ness.
It's tempting to set some arbitrary threshold and consider every account above it a bot and every account below it a human, but this is probably not the best way to think about it. Many accounts score in the "yellow" range of 40-60%. A middle-of-the-road score like this signals that the classifier is uncertain about the classification.
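To illustrate the point about bands rather than a single cutoff, here is a minimal sketch that maps a score into three zones. The 40-60% "yellow" range comes from the text above; treating everything below it as "green" and above it as "red" is an assumption for illustration, not Botometer's own rule.

```python
def interpret_bot_score(score: float) -> str:
    """Map a bot score (0-100) into a rough confidence band.

    Band boundaries are illustrative: 40-60 is the "yellow"
    uncertain range described above; the rest is assumed.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 40:
        return "likely human (green)"
    if score <= 60:
        return "uncertain (yellow)"
    return "likely bot (red)"

print(interpret_bot_score(38))  # the @beschizza score above
print(interpret_bot_score(53))  # the @boingboing score above
```

The 38% score lands in the green band, while 53% falls squarely in the uncertain middle, which is exactly why a single hard threshold would be misleading here.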