Jesselyn Cook noticed that a site called NewsBuzzr had ripped off one of her Huffington Post stories. It turned out to be some kind of awful plagiarism bot that runs articles through a thesaurus to avoid detection as duplicate content, resulting in hilariously mangled prose. Cook calls it "truly excellent 'Florida Male' content," and I hope that term sticks.
The intro to my story, which describes a woman feeling an “urgent tap” on her shoulder, had been changed to say that she felt a “pressing faucet” instead. The term “sex videos” had become “intercourse movies,” and the quote “I was definitely shocked” had morphed into this nonsense: “I used to be indisputably surprised.” The entire piece had been altered, seemingly word-by-word, rendering some sentences far less coherent than others.
Humor aside, the scam operates at a scale where it makes real money, money it is ultimately depriving its victims of. There was a point about a decade ago when the number of sites scraping Boing Boing became uncountable, but sadly none of them turned our hard-driving coverage into magnetic storage delineations.
The screengrab above is from the NewsBuzzr-world's own science educator, "Invoice Nye the Science Man." Google has already banned NewsBuzzr from AdSense. Read the rest
AirHelp is a service that helps airline passengers in 30 countries file claims (for delays, lost bags, overbookings, and cancellations), structuring each claim to increase the likelihood of a payout; its bots have won $930m in successful claims to date, and the company collects a commission only when a claim succeeds.
Read the rest
Just days before the horrific mass murder at Pittsburgh's Tree of Life synagogue, my Institute for the Future colleagues Sam Woolley and Katie Joseff published a deeply upsetting study on how social media bots and computational propaganda are being used to instigate and amplify anti-semitism online and manipulate public opinion. From the paper:
Read the rest
This report explores the ways in which online propaganda, harassment and political manipulation are affecting Jewish people in the run-up to the 2018 U.S. midterm elections. In the course of our research, members of this group have described a marked rise in the number of online attacks their community is experiencing. This is proving especially true during electoral contests and major political events. Correspondingly, our analyses suggest that tools like social media bots, and tactics including doxxing, disinformation, and politically motivated threats, have been used online during the 2018 midterms to target Jewish Americans. According to interviewees, veiled human users—rather than automated accounts—often deliver the most worrisome and harmful anti-Semitic attacks.
As part of the wider paper series focused on "humanizing the effects of computational propaganda," this empirical work details the ways in which the Jewish socio-religious population in the U.S. is being disproportionately targeted with disinformation and abuse during this crucial political moment. We use a mixed-methods approach in this research, deploying both qualitative and quantitative analysis in order to generate both a culturally deep and statistically broad understanding of how computational propaganda is being leveraged against this community...
Analysis of 7,512,594 tweets over a period from August 31, 2018 to September 17, 2018 shows the prevalence of political bots in these efforts and highlights groups within the U.S.
Repairnator is a bot that monitors continuous-integration failures in open source software, identifies the underlying bugs, and creates patches without human intervention, submitting them to the projects' maintainers under an assumed human identity; five of its patches have been accepted so far.
Read the rest
A (somewhat dubious) survey of 850 business executives at firms of 500 or more employees "with involvement in the decision making process regarding customer experience in their organization," along with 4,500 consumers "who have contacted a brand during the last six months with an enquiry or issue to be resolved," found a vast gap between how satisfied the executives believed their customers were and how the customers actually felt about their interactions.
Read the rest
Quicksilver is a machine-learning tool from AI startup Primer: it used 30,000 Wikipedia entries to train a model that identifies the characteristics making a scientist noteworthy enough for encyclopedic inclusion; it then mined the academic search engine Semantic Scholar to identify 200,000 scholars across a variety of fields; now it is systematically composing draft Wikipedia entries for scholars on its list who are missing from the encyclopedia.
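Primer hasn't published Quicksilver's model, but the classification step in a pipeline like this can be imagined as a small binary classifier over scholar features. Here's a hedged, toy sketch; the features (log citation count, years active) and the hand-rolled logistic-regression trainer are invented for illustration, not Primer's actual method:

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=2000):
    """Tiny logistic-regression trainer via per-sample gradient descent.
    No libraries needed; fine for toy data, not production scale."""
    n_feats = len(samples[0])
    w = [0.0] * n_feats
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            z = max(-60.0, min(60.0, z))          # clamp to avoid overflow
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a scholar with features x is 'notable'."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    z = max(-60.0, min(60.0, z))
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy features: (log citation count, decades active)
scholars = [(8.0, 3.0), (7.5, 2.5), (2.0, 0.5), (1.0, 1.0)]
notable  = [1, 1, 0, 0]
w, b = train_logistic(scholars, notable)
```

The real system presumably uses far richer text features from the Wikipedia entries themselves; this sketch only shows the shape of the train-then-score step.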
Read the rest
The always entertaining and inspiring YouTuber Code Bullet decided to build a 2048-playing AI from scratch. It's really interesting to see him also build 2048 as part of the process. Read the rest
Can a "griefbot" help you mourn?
In recent years a few computer scientists have created chatbots of deceased loved ones, by training AIs on transcripts of the deceased's online utterances. There's the case of Roman Mazurenko, a Russian man whose friends created a chatbot based on his texts; there's Muhammad Aurangzeb Ahmad, who similarly constructed a bot of his father so that his children would have some sense of what it was like to talk to him.
It's a form of mourning and remembrance that's quintessentially modern, and raises interesting questions about what the shape of grief will look like in the years to come. These experiments in griefbots thus far have all been bespoke, but I doubt it'll be long before we see one-click bot-creation – where you feed a service the various screen names and accounts of the deceased, and it's all autoscraped and assembled quickly into something you can chat with.
But what's the emotional impact of talking to a chatbot version of someone when you know it's just a bot? My friend Evan Selinger is a philosopher who writes frequently and thoughtfully on the moral implications of tech, and in a recent essay he suggests an intriguing parallel: The "empty chair" technique ...
Read the rest
The empty chair technique that I’m referring to was popularized by Friedrich Perls (more widely known as Fritz Perls), a founder of Gestalt therapy. The basic setup looks like this: Two chairs are placed near each other; a psychotherapy patient sits in one chair and talks to the other, unoccupied chair.
The DoNotPay bot (previously) is a versatile consumer-advocacy chatbot created by UK-born Stanford computer science undergrad Joshua Browder. It began as a bot for beating malformed and improper traffic tickets, stepping users through the process of finding grounds to invalidate them and saving them millions in the process.
Read the rest
On January 17, 2017, Girl 4 Trump USA joined Twitter. She was silent for a week, but on January 24, she suddenly got busy, posting an average of 1,289 tweets a day, many of which were in support of U.S. President Donald Trump. By the time Twitter figured out that Girl 4 Trump USA was a bot, “she” had tweeted 34,800 times. Twitter deleted the account, along with a large number of other Twitter bots with “MAGA,” “deplorable,” and “trump” in the handle and avatar images of young women in bikinis or halter tops, all posting the same headlines from sources like the Kremlin broadcaster RT. But Twitter can’t stop the flood of bots on its platform, and the botmakers are getting smarter about escaping detection.
What’s going on? That’s what Sam Woolley is finding out. Woolley, who recently joined Institute for the Future as a Research Director, was the Director of Research at the Computational Propaganda Project at Oxford University. In this episode of For Future Reference, we asked Sam to share highlights of his research showing how political botnets — what he calls computational propaganda — are being used to influence public opinion.
Listen to the podcast interview with Sam Woolley here. Subscribe to the IFTF podcast on iTunes | RSS | Download MP3 Read the rest
I woke up this morning thinking the world needed a Prince Vultan tweet-bot.
You know he'll say it sooner or later...
Jack Burton is still talking to whomever is listening, and Sherman T. Botter continues to share Potterisms. Read the rest
Spamnesty is a simple service: forward your spam to it and it will engage the spammer in pointless chatbot email chains, wasting their time.
If you get a spam email, simply forward it to firstname.lastname@example.org, and Spamnesty will strip your email address, pretend it's a real person and reply to the email. Just remember to strip out any personal information from the body of the email, as it will be used so the reply looks more legitimate. That way, the spammer will start talking to a bot, and hopefully waste some time there instead of spending it on a real victim. Meanwhile, Spamnesty will send you an email with a link to the conversation, so you can watch it unfold live!
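The flow described above (strip the forwarder's identifying details, then reply to the spammer as a fake persona) can be sketched in a few lines of Python's standard `email` library. This is not Spamnesty's actual code; `build_decoy_reply`, the decoy address, and the canned reply are all illustrative assumptions:

```python
import email
from email.message import EmailMessage

CANNED_REPLY = "Thanks for reaching out! Could you tell me more about the pricing?"

def build_decoy_reply(raw_forwarded_email: str, forwarder_addr: str,
                      decoy_addr: str = "mabel@example.com") -> EmailMessage:
    """Redact the forwarder's address from a forwarded spam email and
    compose a reply to the original spammer from a fake persona."""
    msg = email.message_from_string(raw_forwarded_email)
    body = msg.get_payload()
    # Scrub the forwarder's address so the reply doesn't leak
    # who reported the spam.
    clean_body = body.replace(forwarder_addr, "[redacted]")
    reply = EmailMessage()
    reply["From"] = decoy_addr
    reply["To"] = msg["Reply-To"] or msg["From"]
    reply["Subject"] = "Re: " + (msg["Subject"] or "")
    reply.set_content(CANNED_REPLY + "\n\n> " + clean_body)
    return reply
```

A real service would also keep per-thread state so the "conversation" stays plausible across several exchanges; this sketch only covers the first reply.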
The conversations are indeed posted live, and some are quite funny. It's fascinating how obvious it is when a spammer switches from their own bot to giving a human response, and satisfying to see them fooled.
Have you met Lenny? Read the rest
After a series of scandals and rumors, the Hall of Presidents at Walt Disney World has reopened with its Donald Trump robot, which "features the latest advances in technology that enable smoother and more lifelike movements" and "personally recorded remarks exclusively for The Hall of Presidents."
Read the rest
Mark writes, "Nothing marks the holidays like the predictability of a formulaic chestnut featuring '90s stars, magical religious holidays, SFW romance, good hair, and reliable stable camera work. For all those who need a bit more than TV can deliver, my kids and I created the Hallmark Holiday Movie Bot, which generates one feel-good Chrismukkah hit after another for your seasonal celebration!"
Read the rest
The Botometer is a simple single-serving website that reports whether any given Twitter account talks like a bot. It seems quite accurate, analyzing not just an account's content but also its "sentiment" and network characteristics.
My account, @beschizza, has a "green" score of 38%, so I have passed my Twitter Voight-Kampff test. But @boingboing scores 53%, perhaps reflecting its mix of human chatter and automated links to posts. (Trump also scores 53%, oddly enough.)
Read the rest
How should I interpret a bot score?
Roughly speaking, one can interpret a bot score as a likelihood that the user is a bot. As such, bot scores closer to the extreme values of 0% and 100% are more confident assertions of the account's bot-ness.
It's tempting to set some arbitrary threshold score and consider everything above that number a bot and everything below a human, but this is probably not the best way to think about it. Many accounts score in the "yellow" range of 40-60%. A middle-of-the-road score like this is a signal that our classifier is uncertain about the classification.
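The green/yellow/red reading described above can be expressed as a trivial score interpreter. This is a sketch, not Botometer's own code; the 40-60% uncertainty band and the "green" zone follow the FAQ text quoted above:

```python
def interpret_bot_score(score: float) -> str:
    """Map a Botometer-style bot score (a percentage, 0-100) to a
    coarse verdict. A hard human/bot cutoff is deliberately avoided."""
    if not 0 <= score <= 100:
        raise ValueError("score must be a percentage between 0 and 100")
    if score < 40:
        return "green"   # likely human
    if score <= 60:
        return "yellow"  # classifier is uncertain
    return "red"         # likely bot
```

So @beschizza's 38% lands in "green," while @boingboing's 53% falls squarely in the uncertain "yellow" band.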
In The Guardian, my Institute for the Future colleagues Marina Gorbis and Sam Woolley write about social bots as a threat to democracy:
Social media platforms may be able to track and report on political advertisements from foreign entities, but will they divulge information on pervasive and personalized advertising from their domestic political clients?
This is a pressing question, because social bots are likely to continue to grow in sophistication. At a recent roundtable on the Future of AI and Democracy, several technology experts forecast that bots will become even more persuasive, more emotional and more personalized.
They will be able to not just spread information, but to truly converse and persuade their human interlocutors in order to even more effectively push the latter’s emotional buttons.
Bring together advances in neuroscience, the ability to analyze massive amounts of behavioral data and the proliferation of sensors and connectivity and you have a powerful recipe for affecting society through computational means.
"Social media bots threaten democracy. But we are not helpless" (The Guardian) Read the rest