The New York Times reported that the travel guide publishing industry has recently been dominated by "self"-published books created entirely with ChatGPT or similarly cheap and clunky commercial AI writing programs. These books consist mostly of public domain photographs and the kind of generic tourist information that can easily be scraped off the internet — in other words, the exact opposite of the hyper-local, boots-on-the-ground insider information that people who buy travel guides actually want. The article opens with an anecdote about a woman who forwent the popular Rick Steves's France guidebook and opted to save a few bucks by purchasing the "France Travel Guide" by "Mike Steves" instead.
"I was immediately drawn by all the amazing reviews," said Ms. Kolsky, 53, referring to what she saw at that time: universal raves and more than 100 five-star ratings. The guide promised itineraries and recommendations from locals. Its price tag — $16.99, compared with $25.49 for Rick Steves's book on France — also caught Ms. Kolsky's attention. She quickly ordered a paperback copy, printed by Amazon's on-demand service.
When it arrived, Ms. Kolsky was disappointed by its vague descriptions, repetitive text and lack of itineraries. "It seemed like the guy just went on the internet, copied a whole bunch of information from Wikipedia and just pasted it in," she said. She returned it and left a scathing one-star review.
Though she didn't know it at the time, Ms. Kolsky had fallen victim to a new form of travel scam: shoddy guidebooks that appear to be compiled with the help of generative artificial intelligence, self-published and bolstered by sham reviews, that have proliferated in recent months on Amazon.
(Curiously, that "Mike Steves" book is no longer showing up in Amazon searches, so I can't even deceptively link you to it to mooch off the affiliate clicks.)
Also this week, Casey Newton, author of the popular Platformer tech newsletter, and The Daily Beast both reported on the story of Jane Friedman, an author who was surprised to find that her name had been used and her writing style deepfaked to train an AI that wrote a bunch of other shitty books under her name, with the results sold through Amazon's Kindle self-publishing portal as well:
When she investigated, Friedman found several books that sounded like they might have been written by her, as Pilar Melendez reported at the Daily Beast: one was called Publishing Power: Navigating Amazon's Kindle Direct Publishing; another was titled Promote to Prosper: Strategies to Skyrocket Your eBook Sales on Amazon.
To Friedman, it seemed clear that someone was generating books with a chatbot, designed to sound like her, and uploading them to the world's most popular bookseller in hopes of drafting off her reputation.
It appears to have been a low-effort scheme. The spam books didn't rank very high in search results, and there were no accompanying fake reviews that might have enticed buyers. But when she asked Amazon to remove the fake titles, she got caught up in the platform's vast bureaucracy.
Friedman eventually got the fraudulent books taken down. But as she wrote on her own blog:
Whoever's doing this is obviously preying on writers who trust my name and think I've actually written these books. I have not. Most likely they've been generated by AI. (Why do I think this? I've used these AI tools extensively to test how well they can reproduce my knowledge. I also do a lot of vanity prompting, like "What would Jane Friedman say about building author platform?" I've been blogging since 2009—there's a lot of my content publicly available for training AI models. As soon as I read the first pages of these fake books, it was like reading ChatGPT responses I had generated myself.)
To add insult to injury, these sham books are getting added to my official Goodreads profile. A reasonable person might think I control what books are shown on my Goodreads profile, or that I approve them, or at the very least I could have them easily removed. Not so.
I find this scam particularly concerning. How easy is it to find someone who's had a public presence on the Internet for 15 years, and then just "legally" scrape all of that publicly available writing in order to essentially steal their authorial identity? What impact will that have on privacy in the future — both on social media, and beyond?
A New Frontier for Travel Scammers: A.I.-Generated Guidebooks [Seth Kugel and Stephen Hiltner / The New York Times]
Writers are getting deepfaked on Amazon [Casey Newton / Platformer]
Full disclosure: I also write for Wirecutter, which is owned by the New York Times Company, which also publishes The New York Times