Check out this super creepy AI creation, called "Loab." She was created by accident, but keeps reappearing in AI-generated images shared online, often in scary, violent scenes. Smithsonian Magazine succinctly explains Loab:
Earlier this month, Twitter user Supercomposite posted a thread of spooky images featuring a woman she calls "Loab," who usually has red cheeks and dark, hollow eyes. Since then, the images, which range from unsettling to grotesque, have gone viral.
The images of Loab all come from an artificial intelligence (A.I.) art tool. These tools, like DALL-E 2, create images based on text prompts users input into the platform—and they are having a cultural moment as of late. Just last month, a piece of A.I.-created art won the Colorado State Fair art competition. Plenty of artists are experimenting with such tools to merge art with technology and create new, avant-garde pieces.
I also just read this fascinating deep dive into the promise and pitfalls of Artificial Intelligence by Ange Lavoipierre, writing for ABC News (Australia), which tells the story of Loab. ABC News explains:
Loab (pronounced "lobe") was first discovered in April this year by 31-year-old artist Steph Swanson, known online as Supercomposite.
Steph was at home in Uppsala, Sweden, experimenting with one of the many game-changing AI image generation tools which are now publicly available.
These tools produce original images that are based on the description you type in.
That day, she was using negative prompt weights, a technique which produces the theoretical opposite of whatever you ask for.
Steph's request for the opposite of Marlon Brando produced a business logo.
But when she asked the AI for the opposite of the description of the logo, something unexpected happened.
"I got four images of the same woman," she says.
Steph had never seen the AI behave in such a way before.
"If you use negative prompts … a lot of times it's really varied. So it was really unusual to get a bunch of images of what was recognisably the same woman," she says.
"Even if you describe a person in a positive prompt … you get people that match that description, but you don't get literally the same person.
"I immediately recognised this is an anomaly."
She repeated the experiment straight away, to test whether it was a fluke – it wasn't.
"As I ran this prompt more and more and kept getting [Loab], it was like, 'Oh, this is like the only thing that this prompt makes, this woman.'"
The woman in the image was always sad, sometimes wet-cheeked like she'd been crying, with her mouth half open "like she's sobbing", says Steph.
Once, she appeared next to some garbled text spelling "Loab", and the name stuck.
Stranger still, Loab always appeared in the same location: a house with brownish-green walls, alongside cardboard boxes, junk, and the occasional stuffed toy.
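The "negative prompt weight" technique Steph describes is worth unpacking. The article doesn't name the specific tool she used, but in diffusion-based image generators the usual mechanism is classifier-free guidance: the model's unconditional prediction is pushed along the direction associated with the prompt, and a *negative* weight pushes it the opposite way, toward the prompt's "theoretical opposite." A minimal sketch of that idea, using toy numbers in place of a real model's noise predictions:

```python
import numpy as np

def guided_prediction(uncond, cond, weight):
    """Classifier-free guidance: move the unconditional prediction
    along the prompt direction. A positive weight steers generation
    toward the prompt; a negative weight steers it away (a
    "negative prompt weight")."""
    return uncond + weight * (cond - uncond)

# Toy 2-D vectors standing in for a diffusion model's noise predictions.
uncond = np.array([0.0, 0.0])   # prediction with no prompt
cond = np.array([1.0, 0.5])     # prediction conditioned on the prompt

positive = guided_prediction(uncond, cond, 7.5)   # typical guidance scale
negative = guided_prediction(uncond, cond, -1.0)  # negative prompt weight

print(positive)  # pushed toward the prompt direction
print(negative)  # pushed away from it
```

With a negative weight, the output lands somewhere the model associates with "not the prompt" rather than a simple visual inverse, which is why the results can be so unpredictable, and why asking for the opposite of a logo description could land on Loab.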
The rest of the article describes the Loab saga in more detail, and dives into the sometimes fascinating, sometimes disturbing world of AI—a sector that, according to Ange Lavoipierre, is "having a breakthrough moment, fuelled by hype, venture capital, and a decade of generous research funding." But the Loab phenomenon, Lavoipierre argues, "exposes just how little we understand about AI." Is it true that "AI itself is advising caution"? And if so, what are the warnings and will anyone heed them?