• Woman loses ability to feel hunger after a stroke Photo of open fridge filled with food

    A 28-year-old Canadian woman had a stroke in a part of the brain known as the insular cortex—and after she recovered, she found she'd lost any sense of hunger.

    She lost over 10 kilograms because she'd forget to eat. As this piece in Revyuh notes, when doctors examined her…

    … they found that even with a lack of calories, the woman felt none of the physiological signals that it was time to eat, such as a rumbling stomach. This was despite the fact that her senses of taste, smell and food texture were intact (in the first two months she did complain of a metallic aftertaste after eating), and even her favorite foods, such as chocolate, did not tempt her.

    There's a paper in Neurocase describing her experience, and a paywalled story in New Scientist notes that the brain region where her stroke occurred (sometimes just called "the insula") is still a mystery to science …

    The insula is one of the least understood parts of the brain because it is tucked deep inside the folds of this organ. It appears to have a diverse set of functions, involved in consciousness, empathy and pain. But there is growing evidence that it also helps to process signals from different parts of the body in order to assess our internal bodily state – for example, whether we are hungry or full, warm or cold, or tired or rested.

    If the insula senses that something is out of balance – our blood sugar levels are too low, say – it tries to amend this. For example, it may work with other parts of the brain to create a feeling of hunger that encourages eating, says Yoav Livneh at the Weizmann Institute of Science in Israel.

    "In this woman's case, her brain would still have been receiving signals that she was missing calories, but because of the damage to her insula she wouldn't have been aware of them," he says.

    As the scientists pondering this rare case note, it's possible, now that we know the role the insula plays in sensations of hunger, that one could design a drug that targets its functioning and artificially reduces hunger, for weight loss. The problem is that because the insula seems involved in lots of other sensations and in bodily self-regulation, you might cause new and weirder problems if you mucked with its functioning.

    More evidence of what a crazy mystery the brain is.

    (Public domain photo of fridge courtesy the US Department of Agriculture)

  • "Smart toilets" that monitor poop, pee and your "analprint" Photo of a toilet with the lid up

    Since a) microchips and sensors are rapidly approaching the dimensions and cheapness of grey goo, and since b) tech firms are eager to collect data on basically anything about us, it probably shouldn't be surprising that tech firms are making smart toilets now.

    As is usual with self-monitoring tools, the concept is in theory quite promising: Poop and pee contain all manner of extremely useful info about our health, so analyzing them regularly could provide people with advice and early detection of disease.

    But as is also usual with commercialized self-monitoring tools, instead of this being data that you have autonomous control over — i.e. that's used by you, for you — it'll almost certainly be hoarded by corporations and used against your interests, when it isn't being leaked all over the place by clownishly terrible security practices.

    Oh, and apparently one's "analprint" is scannable and unique, so large commercial interests — and their friends in police and spy agencies — will be able to know precisely where and when you pooped! (Previously.)

    Good times, good times.

    Emine Saner has a terrific story on this emerging marketplace in The Guardian; a taste:

    Many people "wouldn't, for very good reasons, like cameras pointing up their bottoms", says Phil Booth, the coordinator of MedConfidential, which campaigns for the confidentiality of medical records. That said, under the guidance of a medical professional, "there are not necessarily inherent privacy risks" in using a smart toilet as a medical device, he says. However, it might get interesting if the data created by general consumer use was owned by a company: "You may trust that particular company, but every company is pretty much buyable by Google or Facebook or Amazon. Then, what I thought was something for my own health monitoring has become fodder to business models I really know nothing about." [break]

    Information from stool and urine samples could provide all sorts of information – your risk of disease, your diet, your exercise level; how much alcohol you drink and whether you take drugs. Even tracking something as trivial as the time of day you use the loo – regularly in the night, for instance, indicating sleeplessness – could reveal conditions such as depression or anxiety.

    (Toilet photo via Pixabay)

  • Detecting fake AI-generated faces by looking for irregular pupils Picture of two images of faces (young girls), one real and one synthetic, with an analysis showing that the real face's pupils are round while the fake one's pupils are irregularly shaped

    It used to be that if you wanted to create a fake persona online, you swiped a photo from some hapless person's Facebook page. These days, though, fakers can create "synthetic" images — using "generative adversarial network" AI to generate fake faces that look awfully real. (Previously.)

    A group of academics has found an interesting flaw in today's GAN technology, though: If you look closely at the pupils, they have irregular shapes. A real human has a round or elliptical pupil; a GAN-generated synthetic photo is sort of blobular.

    The paper describing their analysis is here; and as Discover Magazine writes:

    "Real human pupils have strong elliptical shapes," say the team. "However, the artifacts of irregular pupil shapes lead to significantly lower scores."

    This is the result of the way that generative adversarial networks work, with no inherent knowledge of the structure of human faces. "This phenomenon is caused by the lack of physiological constraints in the GAN models," say Guo and co.

    Of course, now that this finding is public, it probably won't be hard for fakers to photoshop in a round or elliptical pupil.
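    To make the roundness idea concrete, here's a toy check (my own illustration, not the paper's actual method, which compares a fitted ellipse against the segmented pupil boundary): the classic circularity score 4πA/P² equals 1.0 for a perfect circle and drops as the outline gets irregular.

```python
import math

def circularity(points):
    """Roundness score 4*pi*A / P^2: 1.0 for a perfect circle,
    lower for irregular outlines."""
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1            # shoelace formula (doubled area)
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return 4 * math.pi * (abs(area) / 2) / perimeter ** 2

# A round "pupil": regular 64-gon approximating a circle
circle = [(math.cos(2 * math.pi * i / 64), math.sin(2 * math.pi * i / 64))
          for i in range(64)]

# An irregular "pupil": the same ring with a wobbly radius
blob = [((1 + 0.3 * math.sin(5 * t)) * math.cos(t),
         (1 + 0.3 * math.sin(5 * t)) * math.sin(t))
        for t in (2 * math.pi * i / 64 for i in range(64))]

print(circularity(circle))  # close to 1.0
print(circularity(blob))    # noticeably lower
```

    A real detector would first have to segment the pupil out of the photo, which is where most of the actual work lies.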

  • Improving tiny urban greenspaces causes huge boosts in insect life Flowers along the street in Tel Aviv

    A group of scientists did a fascinating experiment: They proved that adding more plants to even tiny, mangy little urban greenspaces can massively boost their biodiversity.

    In 2016 they homed in on a small, 200-square-meter greenspace in downtown Melbourne — a place that is "adjacent to a major road, surrounded by large buildings, and embedded in a dense urban matrix." It wasn't exactly a verdant paradise; in fact it only had two gum trees.

    They added twelve more indigenous plant species, then monitored what happened over the next four years.

    The insect diversity exploded: In a year there were five times as many insect species, and two years after that, there were seven times as many as at the beginning. Better yet, most of the insects were indigenous, so it wasn't a case of the new plants attracting invasive species. (Their paper on the experiment is here.)

    The upshot is that we clearly ought to be adding far more locally-appropriate plant-life to urban greenspaces. Given that insect populations are globally — and disastrously — on the decline, this is one awfully easy and probably not-that-expensive way to help out, at scale, around the world.

    As the lead author told the New Scientist (paywalled article, but worth quoting):

    "I can't think of any drawbacks," says Mata. "On the contrary, the indigenous plant species require less water and don't require fertilisers." Now they have grown to cover most of the ground, there is also no more need for weed control, he says.

    "This report demonstrates the ability of healthy plant and fungi communities to provide the building blocks for ecosystems abounding with biodiversity," says Ian Dunn, head of UK conservation charity Plantlife, which has been campaigning to boost wild flowers and wildlife simply by encouraging individuals and city officials to mow lawns, parks and road verges less often, or to mow at better times.

    (CC-2.0-licensed photo of street flowers in Tel Aviv via RG in TLV's Flickr feed)

  • Mario animated using the supercooled atoms in a quantum computer Mario created using qubits, in an MIT-Harvard quantum computer

    Physicists with the Harvard-MIT Center for Ultracold Atoms have just announced new success with a particular style of quantum computer—a "programmable quantum simulator". In this architecture, they take supercold rubidium atoms and use optical tweezers (beams of light) to arrange the atoms into shapes.

    As the Harvard Gazette writes …

    This new system allows the atoms to be assembled in two-dimensional arrays of optical tweezers. This increases the achievable system size from 51 to 256 qubits. Using the tweezers, researchers can arrange the atoms in defect-free patterns and create programmable shapes like square, honeycomb, or triangular lattices to engineer different interactions between the qubits.

    But since they're a bunch of nerds raised on Super Mario games, the physicists used the grid to create a few dozen frames of Mario running and stomping on a Goomba. Le voilà …

    Quantum nerd culture: I'm down! Their paper describing the architecture is here; the animation is included in that Harvard Gazette post.
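    The lattice geometries the Gazette mentions are easy to picture as coordinate sets. Here's a quick sketch (illustrative only, not the team's control code) that generates trap-site positions for square and triangular arrays:

```python
import math

def square_lattice(n, a=1.0):
    """n-by-n grid of tweezer sites with spacing a."""
    return [(a * i, a * j) for i in range(n) for j in range(n)]

def triangular_lattice(n, a=1.0):
    """Alternate rows offset by a/2; vertical pitch a*sqrt(3)/2,
    so nearest-neighbor distances stay equal to a."""
    h = a * math.sqrt(3) / 2
    return [(a * i + (a / 2) * (j % 2), h * j)
            for i in range(n) for j in range(n)]

print(len(square_lattice(16)))  # 256 sites, the qubit count in the paper
```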

    (Thanks to Jack for this one!)

  • Police use Tesla's autopilot to stop the car after drunk driver passes out Screenshot of ABC 7 news segment showing a tesla on autopilot mode while the drunk driver is passed out

    Tesla's autopilot feature has been in the news lately—propelled by stories about it causing crashes, sometimes fatal, when drivers rely on it too heavily.

    Now comes a tale of police using autopilot to prevent what otherwise would very likely have been a fatal accident.

    A woman was driving drunk in her Tesla and passed out; the car, under its autopilot mode, kept going down the highway. The woman's husband was apparently driving behind her and called the police. They showed up and stopped the Tesla by parking in the highway. The autopilot detected the possible collision and, this time, worked perfectly, slowing the Tesla to a halt.

    The police radio chatter, which you can listen to on ABC 7, is like something out of an early cyberpunk novel:

    In radio transmissions of the incident that were later obtained by ABC7, a dispatcher is heard saying: "The reporting party is advising that his wife is unconscious in a Tesla. The vehicle is driving itself.

    "It ended up TC'ing into the right shoulder wall, and, uh, still continuing," the dispatcher says, using shorthand for a traffic collision.

    The dispatcher is heard saying the suspect's husband was following the Tesla in a Volkswagen.

    "Again, apparently the wife is unconscious in the vehicle," the dispatcher adds.

  • AI draws movie posters based on textual descriptions of films AI-generated vision of what

    Noah Veltman took textual descriptions of several famous movies, then fed them to an AI — to have it generate a graphical version of what the text describes.

    The result: AI movie posters.

    That one above? Ocean's Eleven.

    They're pretty trippy to look at! Veltman cleverly made his experiment into a game: On his page, he shows you several dozen of the images and you can try to guess which movie the AI is drawing, before clicking on the caption to reveal the correct name.

    I managed to guess a few — including Star Wars and The Wizard of Oz. But a lot of them were pretty cryptic: I had no idea I was looking at Ferris Bueller's Day Off.

    If you want a deeper explanation of the type of AI Veltman used — dubbed "VQGAN+CLIP" — you can read about it here.

    (tl;dr: it's a type of GAN that mixes convolutional neural networks with transformers — the latter more typically used in language — and smooshes it together with a model that matches captions to images.)

  • A web tool that converts PDF scientific papers into HTML A screenshot of a paper rendered by the app Paper to HTML

    The folks at the Allen Institute for Artificial Intelligence just released an intriguing tool — "Paper to HTML", which lets you upload a scientific paper and turns it into an HTML web page.

    The goal, as they wrote in their email, is to improve accessibility: Screen readers and accessibility tech usually find it a lot easier to parse HTML than PDFs:

    This week, a team of researchers and engineers led by Lucy Lu Wang released a prototype of their tool that converts scientific PDFs into HTML, making them readable by screen readers and much more easily visible on mobile devices. After learning that fewer than 3% of scientific papers meet minimum criteria for accessibility, AI2 is pursuing new and better ways to make scientific publishing accessible to the broadest possible audience.

    I uploaded a scientific paper I was recently reading and damn, the tool did a bang-up job. That's a screenshot of the HTML generated above.

    This will also make it easier for me to clip stuff from PDFs. Right now, most of the time when I cut and paste from a PDF paper into, say, Google Docs or Word, the text is all chopped up with line-breaks. But this tool renders all the text as single <p>'s of HTML, which cut and paste as a complete bolus of text. I'm in!
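    If you don't have the tool handy, the line-break problem is also patchable by hand. A minimal sketch (my own, nothing to do with AI2's code) that rejoins hard-wrapped PDF text into flowing paragraphs:

```python
import re

def unwrap_pdf_text(text):
    """Rejoin text hard-wrapped by a PDF layout into flowing paragraphs.

    Assumptions: blank lines mark paragraph breaks, and a hyphen at a
    line end is soft hyphenation (this will also merge genuine
    hyphenated compounds that happen to break across lines)."""
    paragraphs = re.split(r"\n\s*\n", text)
    out = []
    for para in paragraphs:
        # Rejoin words split across lines: "informa-\ntion" -> "information"
        para = re.sub(r"(\w)-\n(\w)", r"\1\2", para)
        # Collapse the remaining line breaks into single spaces
        para = re.sub(r"\s*\n\s*", " ", para).strip()
        if para:
            out.append(para)
    return "\n\n".join(out)

sample = "The quick brown fox jumps over the informa-\ntion superhighway.\n\nNew paragraph here."
print(unwrap_pdf_text(sample))
```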

  • On the joys of pairing champagne with McDonald's

    Over at Lifehacker, Claire Lower makes a surprisingly compelling case for pairing champagne with fast food:

    I've talked about Champagne's ability to make fatty food sing before, with a focus on Thanksgiving, and the qualities that make Champagne so good with turkey and mashed potatoes are the same qualities that make it absolutely stunning with fast food. Its bright, bubbly, acidic nature makes it the perfect foil for fat and salt, the two ingredients that make fast food appealing in the first place. Chicken nuggets, cheeseburgers, pizza, and any other kind of fatty, savory takeout all benefit from the dry wine's laser-sharp, palate-cleansing tartness. (It prevents your tongue from becoming saturated, allowing you to eat more.)

    I'm not saying you have to break out the Veuve Clicquot every time you order a Quarter Pounder, but you should try the combination at least once, perhaps on your birthday, New Year's Eve, or the birth of a child. Capital-C Champagne is fiscally impractical for everyday consumption, but any super dry sparkling wine will work. I am partial to the Trader Joe's Blanc de Blancs, which is from "somewhere in France," incredibly dry, and way better than it has any right to be at that price point (six whole American dollars).

    I'm gonna try this on the weekend. I don't regularly drink champagne — I actually can't remember the last time I had it. But I do eat McDonald's with some frequency (last weekend, during a short road trip). I love this idea.

    (CC-2.0-licensed photo via the Flickr feed of McDonald's itself)

  • "Free Blockbuster" boxes let you drop off, or take, a VHS movie Photo of

    "Free Blockbuster" is like a chain of "little free libraries", except it circulates that noble medium of the home-video era: The VHS tape. You can drop one off or take one.

    The movement was started in 2018 by Brian Morrison, who noticed all the abandoned newspaper boxes around and wondered if they could be put to a new use. According to this paywalled Wall Street Journal piece, there are now 69 Free Blockbuster boxes scattered around the US, Canada and Australia.

    If you want to join the movement, you can find a box using the map at FreeBlockbuster.org; at the site you can also buy stickers or stencils to adorn a box of your own ("found a franchise," as they say on the site, and add it to the map), or even buy a full-on painted, premade box for $199.

    Judging by the map, the big hotspots for boxes are Portland, LA, Philadelphia, Minneapolis, Detroit and Cincinnati. Not much action out here in NYC where I live; maybe I'll have to set one up!

    From a piece this spring in Thrillist:

    Free Blockbuster was quickly embraced by the VHS-adoring community on social media. In March of 2019, Se7en screenwriter Andrew Kevin Walker tweeted in support of the project and dropped off a couple of tapes of his own in a Los Angeles box. Morrison still considers March 28, the day of the tweet, to be a Free Blockbuster holiday. "That was a series of things that went from being like, an insane thing that a crazy person was doing on his own to being a collective," Morrison says.

    Along with fostering a community of physical media fans, Morrison hopes that Free Blockbuster can reinvigorate the sense of discovery that has been lost in the algorithmic streaming era. Each photo of a Free Blockbuster box highlights the vast range of media it inspires: one box in Philadelphia featured the entire series of Entourage; one in Los Feliz had three copies of Babe on VHS; another in Virginia put 8 Mile, Burn After Reading, and Zombieland together. Many locations also include bags of popcorn or movie theater candy to pair with your selection.

    "I've seen surreal… really weird stuff in these things," Morrison says. "My favorite is when people drop off movies they taped off a TV that still have the commercials in there. That makes me so happy."

    (Photo of the Free Blockbuster box via their Instagram feed)

  • iRobot built over 100 models of dog poop to train Roomba's visual AI Screenshot from irobot video about their poop-detection AI

    Apparently Roomba owners who also own dogs have long encountered a problem: If the dog poops on the floor, their helpful robot vacuum rides right over it and smears it across the living room.

    So iRobot decided to build a visual-detection model to help Roombas recognize, and avoid, dog poop.

    According to iRobot's promo video, and this story in Techcrunch, this involved building over 100 physical models of dog poop and sending employees home to photograph them on their houses' floors:

    "The glorious career of roboticists may not have been fully realized when we were sending people home and creating hundreds of models of poo," CEO Colin Angle recently told me. "Sending people around to photograph and create synthetic models of poo. I don't know how many tens of thousands of images of all different shapes and sizes of synthetic images were required, but this is not demo code, clearly." [snip]

    "You imagine it, we probably attempted it to grow a large enough database with both real images, images of fake poo and synthetic images that were manufactured of poop to serve as a training model for our robot," Angle adds.

    All of this leads us to Pet Owner Official Promise (P.O.O.P.), which guarantees a free return for the new j7+ if the Roomba runs into (and over) a poop problem. For the time being, however, iRobot is strictly adhering to the old adage about letting yellow mellow. "We can't do pee," says Angle. "It has to have some 3D aspects to it."

  • A laser camera that can see an entire room through a keyhole Non-line-of-sight imaging pictures from a Stanford study

    "Non-line-of-sight imaging" is the technique of shining light in a direction and measuring all the ways it bounces off objects to determine their rough shape — including ones the person holding the camera can't see. That means that, historically, they've been able to use NLOS imaging to, say, detect an object hidden around a corner in a hallway.

    Now some Stanford researchers have refined NLOS to do a new trick: They shine the laser through a keyhole and get back enough information to infer the rough shape of objects all throughout the room.

    As Gizmodo writes …

    The research could one day provide a way for police or the military to assess the risks of entering a room before actually breaking down the door and storming their way inside, using nothing but a small crack in the wall or a gap around a window or doorway. The new technique could also provide new techniques for autonomous navigation systems to spot hidden hazards long before they become a threat in situations where the previous NLOS techniques weren't practical given the environment.

    So, it's both technically fascinating and totally creepy, two elements that seem present in a surprising number of engineering innovations these days.

  • The brutal physics of the "milk crate challenge" Screenshot of video of a woman climbing milk crates in the

    The "milk crate challenge" erupted a few weeks ago on TikTok; it consists of people attempting to ascend and descend a pyramidal staircase made of milk crates. Behold …

    As with various former loopy social-media challenges — such as the infamous "Tide Pod Challenge" — people have been injuring the crap out of themselves, doctors are wearily urging people not to mimic the incredibly stupid things they see others do online, and nobody is heeding the warnings. This is why packet-switched data was invented, apparently.

    Nonetheless! If you want to understand precisely why the milk crate challenge is so particularly hard to beat, here's a Wired video with engineer and former NASA researcher Nehemiah Mabry breaking down the physics.

    The most interesting thing to me is that people most often fall when they attempt the first step downwards. In the video, Mabry talks about why that is …

    Going up, you know, you're basically, you can rest your weight, right? But when you're stepping down, you actually have to control the placement of your weight a little bit more than you had to going up. And when it comes to column, the placement of your weight really, really matters.

    Ideally, in a perfect case scenario, a person would be able to set their weight right down the center of that crate. And it literally has what we call pure compression, so that's all that it's experiencing.

    But really, if you happen to put your weight off to the side, now you're creating like a rotational moment. Well, the problem is that because these are separate crates, it's not like there's a continuation of the member of the column to actually push off of. You're really depending on your own muscle control to try to get it back stable, which is extremely tough.
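    Mabry's point about weight placement can be put in numbers. In a toy rigid-body model (my own illustrative figures, not from the video), a free-standing stack starts to tip once the line of your weight falls outside the crate's base, i.e. once the overturning moment about the base edge beats the restoring moment. Notice that the weight itself cancels out: it's all about where you put your foot.

```python
def tips_over(weight_n, offset_m, base_half_width_m):
    """Rigid-body tipping test about the crate's base edge.
    Overturning moment: weight * horizontal offset of the load.
    Restoring moment:   weight * half-width of the base.
    The weight cancels, so only foot placement matters."""
    overturning = weight_n * offset_m
    restoring = weight_n * base_half_width_m
    return overturning > restoring

# Illustrative numbers: 700 N person, crate ~33 cm wide (16.5 cm half-width)
print(tips_over(700, 0.05, 0.165))  # foot 5 cm off-center -> False (stable)
print(tips_over(700, 0.20, 0.165))  # foot 20 cm off-center -> True (tipping)
```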

  • A word processor that improvises keyboard jazz while you type A screenshot of a message typed in

    "Jazzkeys" was created by Plan8, a design firm that does music/sound for brands, film and interactive what-have-yous.

    It's a word processor that plays music as you type — each keypress is translated into a piano note, which the software strings together in a vague improvisation, in realtime.

    When you're done typing a message, you can click the little arrow to the far right and generate a link that you can send to someone; it'll play the message back, showing each keypress one by one, with the same speed and cadence that you originally typed it with. (Here's me typing that message you see above.)
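    There's no public source for how Jazzkeys picks its notes, but one plausible way to make arbitrary text sound musical is to hash each character onto a fixed scale. A guess at the idea (not Plan8's actual code):

```python
# Map typed characters onto a minor-pentatonic scale, so arbitrary
# text lands on notes that sound vaguely jazzy together. Purely a
# guess at the idea, not Plan8's implementation.
C_MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets within an octave

def char_to_midi(ch, base_note=60):  # 60 = middle C
    """Deterministically map one character to a MIDI note number."""
    i = ord(ch.lower())
    octave = (i // len(C_MINOR_PENTATONIC)) % 3      # spread over 3 octaves
    degree = C_MINOR_PENTATONIC[i % len(C_MINOR_PENTATONIC)]
    return base_note + 12 * octave + degree

melody = [char_to_midi(c) for c in "jazz" if c.isalpha()]
print(melody)  # [63, 77, 65, 65]
```

    Because the mapping is deterministic, retyping the same message replays the same tune, which matches the replay-a-link behavior described above.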

    It would obviously drive one batty to type, like, a novel using this sort of thing — but hey, if someone wants to do it and send me the link, I'll watch it!

    Generally, though, I highly approve of this. I really dig funky and playful experiments with word-processing — a field that otherwise has seen shockingly little innovation in decades.

  • Animals are "shapeshifting" to cope with climate change Image of bird-beak thermoregulation

    A new scientific paper finds that warm-blooded animals are evolving larger beaks, legs, ears and tails, as they try to adapt to a warmer planet.

    "Appendages have an important, but often undervalued, role in animal thermoregulation", as the authors write — in birds, for example, beaks are used to expel heat. If a species of bird is persistently overheated, then over several generations it'll gradually evolve larger beaks. Same goes for some tails and legs and ears in other animals.

    Global warming is thus driving animals to "shapeshifting", as the Guardian reports …

    Examples include several species of Australian parrot that have shown a 4-10% increase in bill size since 1871, positively correlated with the summer temperature each year.

    Meanwhile, research on the North American dark-eyed juncos, a type of small songbird, showed a link between increased bill size and short-term temperature extremes in cold environments.

    Researchers have also reported tail length increases in wood mice, and tail and leg size increases in masked shrews. Bats in warm climates were shown to have increased wing size.

    The original paper is here; press release, le voilà.

  • Malfunctioning rocket takes off sideways

    When the US firm Astra tried to launch its latest rocket up to low-earth orbit, one of the engines failed on the launch pad — so instead of going up, the rocket remained upright but shuffled off sideways, almost like it was slinking out of the room.

    After a few seconds, it started going up — but was still moving sideways, so it ascended diagonally, though still in an upright position.

    It eventually began heading straight up, but it didn't have enough power to get high enough, and Astra scotched the flight. As the Register reports …

    "After approximately two minutes and thirty seconds of flight, the range issued an all engine-shutdown command, ending the flight," reads a company statement. "The vehicle achieved an altitude of approximately 50 kilometers, before safely returning to Earth."

    It's a quite amazing sight. Things begin around 1:33:30 in that video, and the CEO tweeted another view of the launch here …

  • McDonald's manager explains that Grimace is "a taste bud" Photo of Grimace on city street, by Danielle Scott

    Growing up in the 70s and early 80s, I was morbidly fascinated by Grimace, the McDonaldland character. I'm old enough to remember when Grimace was a) an evil character who stole milkshakes, and b) possessed of two sets of arms. (To this day, I still mentally draw him—her? it?—with four arms.)

    But it was never clear what species, what variety of thing, Grimace was. McDonald's was coy, and never explained! Given that it was the peak of the Cold War, my friends and I figured Grimace was some sort of deeply irradiated mutant.

    This weekend, though, a manager from a Canadian McDonald's unveiled the mystery. As the Huffington Post writes …

    Brian Bates, who has worked for the chain for 10 years, casually dropped it into an interview with CBC News over the weekend. 

    "He is an enormous taste bud, but a taste bud nonetheless," Bates told the network in an "unofficial" explanation, adding that the character was meant to show that the food tastes good. 

    Given what taste buds look like, this kind of … makes sense!

    On Twitter, @MilesKlee made a deeply unsettling observation …

    (Photo of Grimace via Danielle Scott's Flickr account; Taste bud photo via the University of Mexico)

  • GPT-3 is able to accurately predict a lot about COVID-19 Coronavirus image

    GPT-3, the deep-learning language model created by OpenAI that's eerily good at writing human-sounding text, was trained on data that ended in October 2019. That means it hasn't been trained on any text about COVID-19; it doesn't "know" anything specifically about this novel coronavirus.

    So Thomas Smith, founder of the AI firm Gado Images, decided to get GPT-3 to talk about COVID-19 … and find out what it'd say. He fed it several prompts describing the coronavirus to see how it would autocomplete those utterances.

    The result? A mixed bag. It predicted, accurately, the basic aspects of the disease, i.e. that it spreads through the air and is worse in people with asthma or diabetes. But it didn't predict the more complex social and political aspects of the disease. It said, for example, that "Caucasians and Asians" would be most affected by COVID-19 — so it failed to replicate how longstanding medical and economic biases made Black Americans fare worse. When Smith asked it if Americans would be willing to wear masks to stop the spread, GPT-3 replied that "People in the United States will be willing to wear masks to stop the virus from spreading. The virus is spread through the air, and masks will help prevent the spread of the virus."

    Basically, it correctly predicted the facts of the matter — how the virus works — but it predicted how all Americans rationally ought to have behaved, instead of how they actually have.

    However, GPT-3 was astonishingly good in a few areas. It predicted that a vaccine would be ready by the fall of 2020, which is pretty much correct and not something many experts accurately foresaw.

    Even more interestingly, when fed a description of the virus' structure, it predicted mutations that seem close to how Delta works — and then it went on to predict an even scarier possible future mutation:

    The system's predictions about Covid-19 variants were surprisingly accurate, too. To prepare GPT-3 for scientific questions about variants, I first handed it a detailed scientific description of the virus' physical structure. I then gave it versions of the prompt "If the virus mutates, expected sites of mutation which would increase virulence include". The system completed my sentence with the text "erythrocyte binding site and the furin cleavage site."

    That shocked me. According to Nature, both the highly contagious Delta variant of Covid-19 and the Alpha variant "have altered furin cleavage sites", and this alteration is thought to make the variants "even better at transmitting" than the original virus. GPT-3's statement about furin binding sites appears to line up almost perfectly with the science. Given only a basic description of the virus' structure, GPT-3 essentially predicted the Delta variant.

    Even more interesting is the fact that in implicating the "erythrocyte binding site," GPT-3 may be dreaming up a totally new kind of Covid-19 variant. Erythrocytes are cells found in the blood. Although Covid-19 isn't considered a bloodborne virus, it does have major impacts on blood cells, and some evidence suggests that it infects them directly. If the virus mutated to infect blood cells more efficiently and travel through the blood, GPT-3 seems to suggest, this would make it way more virulent than it is today.

    As Smith notes, there are enough errors in GPT-3's predictions — and so much blackboxery in these massive predictive-language models — that you wouldn't want to rely on it for medical advice. It's just a pattern-recognition machine.

    But because it's doing pattern-recognition that is alien to human ways of making sense, and at a scale impossible for humans to achieve, it could be useful as a pointer — suggesting things human doctors should investigate. That's the productive "centaur" human-machine synthesis that often characterizes our best uses of computation.

    Damn interesting stuff, either way.

    (CC-2.0-licensed coronavirus image via Yuri Samoilov's Flickr feed)

  • Plastic surgeons say the pandemic has caused "Zoom dysmorphia" Photo of phone with Zoom app on it

    It seems that all the time we've spent staring at ourselves in Zoom during the pandemic is affecting our perception of ourselves.

    When plastic surgeons began reopening for business in 2021, they found an increased demand from people newly worried about the shape of their noses, sagging skin in their necks, and the tone and color of their skin. A lot of the patients said they'd become dissatisfied with their looks after months of seeing their face in videoconferencing.

    So, "Zoom dysmorphia" is here, as Wired UK reports.

    As that story notes, plastic surgeons had since 2015-ish been grappling with "Snapchat dysmorphia" — people who wanted their real-life faces to look the way they did in Snapchat filters, with huge eyes and sparkling skin.

    But the effects of videoconferencing on our self-image are different and possibly more widespread, because Zoom and its genre of apps are more incessantly used: A white-collar professional might spend hours a day in videochat. And psychologists have known for decades that people who stare at themselves in a mirror become more self-conscious.

    What's worse, videoconferencing apps and laptop/smartphone cameras frequently create facial distortions we're unaware of — so we think we're seeing an unbiased, accurate reflection of our faces, but we're not.

    From Wired:

    Front-facing cameras distort your image like a "funhouse mirror," she says—they make noses look bigger and eyes look smaller. This effect is exacerbated by proximity to the lens, which is generally nearer to you than a person would ever stand in a real-life conversation. Looking down at a smartphone or laptop camera is the least flattering angle—as anyone from the MySpace generation will tell you, the best camera position is from above, hence the ubiquity of the selfie stick. 

    We're also used to seeing our own reflection when our faces are relaxed—the concentrated frown (or bored expression) you wear in a Zoom meeting jars with the image of yourself you're used to seeing in the mirror. "Changes in self-perception and anxiety as a result of constant video-conferencing may lead to unnecessary cosmetic procedures, especially in young adults who have had increased exposure to online platforms including videoconferencing, social media, and filters throughout the pandemic," write Kourosh, Channi Silence, and other colleagues.

    Caveat: I'm always leery of proclamations that tech has produced a new disorder — so in the medium-term I'd want to see some large-cohort studies bear out what these plastic surgeons are noticing anecdotally. But the technological and psychological mechanisms they're describing here seem plausible.

    (Photo courtesy Pixabay)

  • Superb, inexpensive microphones built into 12-gauge shotgun shells

    When my country band the Delorean Sisters was recording our second album, our phenomenal audio engineer Drew Guido set up a mic stand to record my acoustic guitar—and pulled out the weirdest-looking microphone I'd ever seen.

    It was one of the ones you see here, made by Brad Martin of "12 Gauge Microphones". It's a cardioid directional, suitable for acoustic instruments, and it's "handmade from an actual discharged 12Ga shell", as Martin explains on his web site. This particular model is called the "Green12", and it retails for the astonishingly terrific price of $45.

    I say "astonishing" because I've had my guitars mic'd by a lot of different mics in studios, using gear way more expensive than I could ever afford. But the sound on the Green12 beat many of those pricier mics. It's crisp, clean, and responsive. I can't believe Martin gets that much performance out of a $45 microphone.

    I got a chance to buy and use one myself, because a few weeks after the studio session we needed to rearrange one song, a torchy 50s-style country ballad called "Bulleit Rye", written by one of our singers, Lizzie Caplan. We decided it should open with just Lizzie singing and me playing a basic country rhythm. But we didn't have time to set up a new studio session, so I had to redo the guitar track at home.

    To try to keep the sound consistent, I bought a Green12 and set it up in my kids' playroom. I tried vaguely to muffle the room's crappy echoes, but mostly failed. I worried it would sound like mud.

    Nope. The mic worked amazingly well, and the track came out great. You can hear "Bulleit Rye" here on Spotify or at Bandcamp; because it opens with just vox and guitar, you can get a pretty good sense of how well the mic performed. It's bonkers that Martin can make these for $45.

    He's got a whole line of 'em, too—I was so taken by the Green12 that I bought his "Red12" ($35!!) and his "12G50" ($95), omni mics that are terrific for getting more room tone in with the guitar, or for doing vox-and-guitar in one take. He's also got ones with tighter patterns, at the same prices.

    They're pretty fun to pull out when you're recording, too, because everyone asks "what the hell is that?"