Then the costumes start morphing and growing and throwing off crazy CGI effects, and the whole thing becomes a lovely, hallucinogenic riot of color, movement and song. As Colossal describes it:
Directed and animated by Istanbul-based Gökalp Gönen, a camouflaged cast grooves to Ilhan Ersahin's jazzy new track, "Hurri-Mitanni (Good News)," in a mesmerizing series of transformations. The anonymous characters don amorphous, animated costumes as they dance throughout the streets and in empty pockets of the city, morphing from a floral ensemble to an oversized figure covered in kaleidoscopic spirals to another trapped in string.
Lu wrote an equally epic post about his creative process. He outlines not only his design decisions — and shows his early sketches — but includes pictures of the source material from which he drew inspiration, which included a lot of mathematical forms and low-rez early video games. Those two fields have a lot of overlap, in a sort of Platonic-forms fashion — the math being the aetherial immortal shapes, and the 80s games their glitchy, worldly instances.
I remember when I first discovered Line Rider, that Flash game from 2006 with the simple premise of drawing a track for a sledder to ride on. I was immediately sucked into it, doing things like devising elaborate tracks for the rider to overcome, building worlds for the rider to explore, and manipulating the rider to perform stunts.
It had an odd universal appeal, quickly propagating through the internet and reaching many other teens who were similarly captivated by this toy. We gathered into a community and the Line Rider subculture was formed. We were young, creative, and imaginative, but we also had something to prove. We wanted to make impressive tracks, whether it be with highly detailed illustrations or by exerting fine control over the rider's movement.
In 2008, I set off to create the best track of all time, where I would demonstrate proficiency in every style of movement, create elaborate illustrations, and introduce new Line Rider ideas to the community. Of course, I was too ambitious and settled for releasing an unfinished version of the track. While it was widely praised, my vision wasn't complete, and I continued working on it sporadically.
Eleven years later, after I reverse-engineered and recreated Line Rider, after I developed as an artist and explored all types of creative mediums, I finally completed the project and even went beyond my original vision, reclaiming the project to tell a new story.
I will never spend so much time on a Line Rider track ever again.
I generally have a few piles of books around my house — I impulse-buy a lot. I start reading most of them, but only rarely finish one.
I usually feel lousy about this — it's my fault for being too easily distracted, for having too little self-discipline, for being a gormless flake, etc.
So, I take great solace from the words of Francis Bacon. Back in 1597, Bacon — a philosopher and onetime attorney general of England — published his Essays, in which he pointed out that not all books ought to be read all the way through. As he wrote:
Some books are to be tasted, others to be swallowed, and some few to be chewed and digested; that is, some books are to be read only in parts; others to be read, but not curiously; and some few to be read wholly, and with diligence and attention.
TESTIFY, my friend.
The challenge, as a reader, is figuring out which book is what type. I probably toss aside books I ought to give a second chance, and "swallow" entirely. On the other hand, particularly with nonfiction and poetry, I really enjoy the odd mental connections that come from semi-random access with super-bursty periodicity: Flipping through a book, reading a passage, then putting it aside for an hour/day/month/year/decade, when I flip it open again to another rando passage.
BTW, it's worth reading the entire paragraph in which that passage of Bacon's occurs, because damn, he was on fire — opening by enjoining us not to read with preconceived notions, then concluding with his famous tripartite dissection of the cognitive value of reading vs. writing vs. conversing:
Read not to contradict and confute; nor to believe and take for granted; nor to find talk and discourse; but to weigh and consider. Some books are to be tasted, others to be swallowed, and some few to be chewed and digested; that is, some books are to be read only in parts; others to be read, but not curiously; and some few to be read wholly, and with diligence and attention. Some books also may be read by deputy, and extracts made of them by others; but that would be only in the less important arguments, and the meaner sort of books, else distilled books are like common distilled waters, flashy things. Reading maketh a full man; conference a ready man; and writing an exact man. And therefore, if a man write little, he had need have a great memory; if he confer little, he had need have a present wit: and if he read little, he had need have much cunning, to seem to know, that he doth not.
So these large projection displays, these colorful, sharp, multicolored displays are made up of overlapping projections of monochromatic slides — these slides are about one inch square each. They're glass slides, not film. And each one is essentially just a black and white image, a piece of glass with a metal film deposited on it that is then etched to have a particular black and white icon or a shape to have lettering or the details of a map or an icon of a spacecraft.
The color comes from filters, and they're illuminated by insanely bright xenon lights. The slides needed to be metal film on glass because those xenon lights were so insanely hot, and needed to be lit for days at a time.
So yeah, all the stuff you see here — that's slides …
As spacecraft flew, the displays could also plot a line that would grow and extend, showing the flight path. How do you do that with slides?
By having some of the projectors contain plotters that could scrape a path onto a slide, in real time. As Blanche describes it …
Four of those five projectors were plotters which were equipped with these special diamond tipped scribes, and they were driven from servos to move a scribe across a slide which was coated completely opaque with a metal film. And as the scribe went across the plate, it would scrape away a tiny little ribbon of metal film and light would be able to project through. And then that light could be colored with any color that they wanted that was in the carousel using any one of the filters that they had available.
I would rather ride a goddamn burro across the continental United States than get on one of those things. "Don't worry, we updated the software." There is no modern statement less reassuring.
But, how can you tell if you've been slated to fly on one?
As Jalopnik notes, Reuters reports that some airlines may stop using the "Max" name, so all you'll know is that you're flying on some sort of 737. So maybe you could just check your booking to see what sort of plane you're on? But airlines' methods of ID vary, and of course, sometimes at the last second they need to swap out jets for unanticipated reasons of maintenance or weather-related delays.
The upshot is that, as Jalopnik notes, you might have to simply figure it out by looking at the jet you're about to board. This assessment would come rather late to be of any prophylactic use, mind you, unless you're willing to skip the flight at the last second when you discover you're about to step onto the creditScore_xxbin32_init.exe of airplanes.
If your booking information doesn't note what kind of 737 you'll be flying, you may be able to spot the naming on the nose, tail or landing gear doors. Some airlines with a high number of 737 MAX aircraft orders, like Southwest, have no prominent markings at all.
At the airport, you can also check the winglets at the end of the wings. The 737 MAX will often have winglets that extend both up and down. Other versions of the 737 often have winglets that extend only upward. However, as some airlines — like United — have upgraded older planes to use the newer winglets, this isn't always a surefire way to determine 737 type, either.
You can select up to three poets for inspiration, including the likes of Emily Dickinson, Walt Whitman and Edgar Allan Poe. Once you've made your choices and picked a structure for your poem, the tool will ask you to compose your first line of verse. The AI will then suggest some more options.
Verse by Verse won't lock you into using those suggestions. You can ditch or tweak them, or accept them as is. The tool is supposed to inspire you, not generate an entire poem on your behalf — though you can more or less do that too. Once you have perfected a stanza, you can add more of them to your future masterpiece.
To build the tool, Google engineers fed the system "a large collection of classic poetry." They then used each poet's own work to fine tune the AI models in an attempt to ape their writing styles. They also wanted the AI to make relevant suggestions, so, according to engineer Dave Uthus, "the system was trained to have a general semantic understanding of what lines of verse would best follow a previous line of verse. So even if you write on topics not commonly seen in classic poetry, the system will try its best to make suggestions that are relevant."
I decided to give it a whirl myself.
For my dead-poet collaborators, I picked Emily Dickinson, Emma Lazarus (author of "The New Colossus", the poem about the Statue of Liberty that included the famous lines "Give me your tired, your poor, / Your huddled masses yearning to breathe free"), and Sara Teasdale (who's utterly metal and who I wrote an essay about a while ago).
You can pick whether you're writing in quatrains, couplets, or free verse, how many syllables per line, and whether the poem should rhyme or not. (I picked quatrains rhyming in ABAB format, with 10 syllables — so, mostly iambic pentameter.) I wrote this first line …
The Internet is darkest 'fore the dawn
… and then accepted, in their entirety, the next three lines the software suggested. The second line was by the AI Emily Dickinson; the third, by the Sara Teasdalebot; and the fourth, by the undead pen of Emma Lazarus, as replicated by a neural net that feasted upon a vectorized meal of her life's work.
The result was thus; I added the title after the bots were done with their dark work …
It's … not bad?
Deep-learning AI poetry is, in my experience, pretty hit or miss, with a highly bimodal distribution: The vast majority of the time it produces perfectly fine, if unremarkable, lines, metaphors, ideas, and turns of phrase — and then every so often you get a burst of pure WTF, how-did-a-bot-do-that greatness. In this poem, the lines the AI wrote are really just serviceable; not great. To be fair, my original line is pretty meh too; not much for the AI to work off of.
I suspect Google's right, and this type of tool is a bit dull if you use it to autogenerate a whole poem — and much more interesting if you employ it as a prompt, to crack the ice of the mind and get you moving. If you were to roll up your sleeves and tweak/edit the lines the AI generates, it'd be more like the "centaur" intelligence — part machine, part humanity — that chess master Garry Kasparov pioneered back in the 90s.
After being beaten by Deep Blue, Kasparov decided human vs. AI was a rather boring competition, and it'd be more productive to cooperate with AI. So he invented a form of chess where a human using chess software is pitted against another human using chess software. Kasparov found that, once you give a human the machinelike ability to ponder tons of different moves, it produced a creatively new style of human chess. Players explored different routes they'd not have previously had time to investigate. (I wrote about this in the first chapter of my first book; a wee excerpt is here.)
So theoretically one could get the same increased possibility space out of "Verse by Verse". You rapidly iterate a ton of possible poetic gambits, then focus in on the few gems, and hone them. The act of poetic creation becomes less one of composing lines than of editing them — the lines produced by the machine. This is more or less how musicians have been using new tune-generating AI. About a year ago I wrote a piece for Mother Jones about that scene, and the songwriters told me the AI tended to have a terrible signal-to-noise ratio: Tons of dreadful ideas, punctuated by some wild ones. So they ran it as a volume business. They used the AI to toss out gazillions of tunes, and their human task became curatorial, trying to sift flakes of gold from the slurry of mud. The French songwriter Benoit Carré used a Sony music-generating AI, and described it to me this way:
"You are a little bit like an artistic director or a producer, and you have a crazy musician in the room," he told me. "Most of the time it is crap," but every so often the machine kicked out a melody he would never have thought of. Carré helped write the lyrics and recorded the album with a group of meatspace musicians. It certainly wasn't push-button easy. If anything, sifting through the AI's output for useful, provocative passages was like panning for gold—probably more work than writing everything himself. But the silicon intelligence helped Carré break out of ruts. "In pop music, you know, it is always the same chords," he says, so to do something new "you have to be surprised, you have to be shaken."
I should say clearly: I am absolutely 100% not talking about an editor that "writes for you," whatever that means. The world doesn't need any more dead-eyed robo-text.
The animating ideas here are augmentation; partnership; call and response.
The goal is not to make writing "easier"; it's to make it harder.
The goal is not to make the resulting text "better"; it's to make it different — weirder, with effects maybe not available by other means.
This nails it, really. I'm going to play around some more with "Verse by Verse" and see if I can get it to drag my brain into a corner I wouldn't normally go on my own.
(As a side note, it looks like the poets they used to train the AI are from before 1923 — so they're in the public domain, in the period before copyright slams down. It's too bad; I'd have loved to be able to do this with some poets from later in the 20th century.)
I particularly dig Číž's approach to thinking about the ecology of code. To help make sure Anarch runs anywhere, he wrote it without using any external dependencies, so it won't suddenly die when a third-party library isn't updated.
Also, not only is the code free for tinkering without restrictions, but he commented it really nicely, so any tinkerers know precisely what's doing what. (Seriously, I took a peek at his comments and they're a model of the craft. If more software was commented like this, the world of code would be considerably less of a mess.)
Anarch is, as Číž says, "made to last for centuries without maintenance". I wish more software projects aimed at that north star.
- Completely public domain (CC0) free software, free culture, libre game for the benefit of all living beings in the Universe, no conditions on use whatsoever. All art is original work and licensed CC0 (as well as code).
- 100% non-commercial, free of any ads, spyware, microtransactions, corporate logos, planned obsolescence etc.
- Extremely low HW demands (much less than Doom, no GPU, no FPU, just kilobytes of RAM and storage).
- Suckless, KISS, minimal, simple, short code (< 10000 LOC TODO).
- Extremely portable (much more than Doom). So far officially ported to and tested on: GNU/Linux PC, SDL and csfml; GNU/Linux PC, terminal; browser; Pokitto (220 x 116, 48 MHz ARM, 36 KB RAM, 256 KB flash); Gamebuino META (80 x 64, 48 MHz, 32 KB RAM, 256 KB flash); TODO
- Has completely NO external dependencies, not even rendering or IO; that is left to each platform's frontend, but each frontend is very simple.
- Uses no dynamic heap allocation (no malloc).
- Can fit into less than 256 KB (including all content, textures etc.).
- Uses no build system, can typically be compiled with a single run of the compiler (single compilation unit).
- Works without any file IO, i.e. can work without config files, save files; all content and configs are part of the source code.
- Doesn't use any floating point, everything is integer math (good for platforms without FPU).
- Written in pure C99, also a subset of C++ (i.e. runs as C++ as well, good for systems that are C++ based).
- Made to last for centuries without maintenance.
- Goes beyond technical design and also attempts to avoid possible cultural dependencies and barriers (enemies are only robots, no violence on living beings).
- Created with only free software (GNU/Linux, GIMP, Audacity, gcc, Vim, …).
- Single compilation unit (only one .c file to compile, very fast and simple). No build systems.
- Uses a custom-made 256 color palette (but can run on platforms with fewer colors, even just two).
- Well documented and commented code, written with tinkering and remixing in mind.
- Has the oldschool feel of games like Doom or Wolf3D.
How? With lasers! Spy agencies since the 1940s have eavesdropped on people using "laser microphones": You shine a laser at the window of a house, which vibrates ever so faintly from the sound of whoever's inside, talking. You use the vibrations thusly recorded from the window to reconstitute precisely what the target was saying. Sneaky; clever.
These days, we've got plenty of devices shining lasers inside our houses — such as robotic vacuum cleaners. Many new models rapidly swivel a "LIDAR" laser around, bouncing the laser off objects in our house to help the robot navigate obstacles and walls.
So scientists wondered — huh, could you hack into a robot vacuum cleaner, record the laser info, and reconstitute our personal conversations based on the resulting vibrations coming off objects inside our house?
Indeed you can! A little bit, anyway. They figured out how to hack into a Xiaomi "Roborock" vacuum, and got it to recognize a few types of sound — including spoken digits (from "zero" to "nine"), and clips of the introductory music from news shows on networks like Fox and CNN. (Their paper describing the exploit is here.)
Why those particular forms of sound? Well, being able to eavesdrop on numbers could let you steal sensitive info like social security numbers, the researchers figured. And knowing what TV shows someone watches gives you a glimpse into "the victim's political orientation or viewing preferences".
… They conducted experiments with two sound sources. One source was a human voice reciting numbers played over computer speakers and the other was audio from a variety of television shows played through a TV sound bar. Roy and his colleagues then captured the laser signal sensed by the vacuum's navigation system as it bounced off a variety of objects placed near the sound source. Objects included a trash can, cardboard box, takeout container and polypropylene bag—items that might normally be found on a typical floor.
The researchers passed the signals they received through deep learning algorithms that were trained to either match human voices or to identify musical sequences from television shows. Their computer system, which they call LidarPhone, identified and matched spoken numbers with 90% accuracy. It also identified television shows from a minute's worth of recording with more than 90% accuracy.
Granted, this exploit isn't that easy, as yet. It only worked because the researchers pretrained a deep-learning model on sound clips of their target people speaking the digits "zero" to "nine" — in the real world, of course, it might be trickier to get your hands on lots of training data of your target speaking.
But it's another reminder that hey, every "smart" device we own is also a full-fledged computer — usually outfitted with shoddy or nonexistent security, and thus super hackable — that is also loaded with sensors, providing any intruder with a metric truckload of data about our home habits.
Open Adobe Reader and head to Edit > Preferences (or use the Ctrl + K shortcut). Then:

1. Choose the Accessibility tab from the left sidebar.
2. Look for the Document Colors Options section.
3. Check the Replace Document Colors checkbox, then choose Use High-Contrast colors.
4. Select White text on black from the dropdown box next to High-contrast color combination — this theme is easiest on the eyes.
5. Click OK to leave the Preferences window.

You should immediately see that any opened PDFs now display in dark mode.
Nonetheless, a lot of much, much older hits also have been breaking into Spotify's top 200 during lockdowns, as the Guardian notes …
Toto's Africa made Spotify's UK daily top 200 only 12 times in both February and March. But this had risen to 28 times by May. This increase was eclipsed by the huge surge in popularity of Electric Light Orchestra's 1977 classic Mr Blue Sky, which charted only once in January but peaked at 31 times in May. Another 1977 hit, Fleetwood Mac's Go Your Own Way – one of several tracks by the band to make the top 200 – enjoyed similar success while Here Comes the Sun by the Beatles from 1969, which had never been in the UK's top 200 in the months leading up to Covid-19, made it into the listings 19 times in May and was played up to 63,000 times a day.
Other old songs that benefited from the nostalgia trend include Oasis's Wonderwall and Don't Look Back in Anger from 1995, Queen's Don't Stop Me Now from 1979 and Snow Patrol's Chasing Cars from 2006. Elton John's Tiny Dancer from 1971 and Bryan Adams's 1985 hit Summer of '69 also crept into the UK top 200 during lockdown. On Friday, the reissue of the Rolling Stones' 1973 album Goat's Head Soup went to No 1 in the UK album charts.
Nostalgic listening is a bit of a lagging indicator, apparently — as Yeung found in his paper, the peak of nostalgic listening is "roughly 80-100 days after the first day of the lockdown". We apparently need to be cooped up for a few months before we start sweatin' to the oldies.
He also compared 2020 listening trends to pre-COVID 2019, and in most cases you can really see how nostalgic listening rose as the pandemic wore on. In this chart below, the blue line is 2020 listening, the dashed red line is 2019, and the Y axis is labelled — deliciously — "Average Nostalgia Level". The solid red line marks the point at which Yeung considers the pandemic to have begun in Europe …
The Bremont Hawking incorporates a sample of wood taken from a vintage desk drawer that Hawking inherited from his grandmother. It had been gifted to her as a retirement present, marking her role in founding Yorkshire's Boroughbridge girl's school. It became the desk that Hawking would sit at to recall fond childhood memories and to compose some of his theories on.
While the wood samples may be the most dramatic Hawking artifacts on display, they are far from the only ones. A meteorite sample at the center of the design reinforces Hawking's connection to outer space, and the serial number at 6 o'clock is printed on paper from original copies of Hawking's massively influential 1979 research paper "The 'nuts' and 'bolts' of gravity." Completing the scene is a complex etching of the stars visible over Oxford, UK on the night of Hawking's birth along with one of his equations, flanked by the title of his seminal book "A Brief History of Time." Despite the classical appearance, the case still manages a respectable 100 meters of water resistance.
And at first I wondered whether this whole concept had some ghoulish backstory — how in god's name did these watchmakers get their hands on Hawking's desk?
Bremont worked with the Hawking family on the watch and has pledged to donate a percentage of the proceeds from sales of the collection to the Stephen Hawking Foundation, which supports cosmological research and people with motor neurone disease, the debilitating condition Hawking lived with for almost his entire adult life.
If you scroll about three-quarters of the way down that page, you can see where they've embedded the tool. You can select a breed from a drop-down menu to see how it's related to other breeds, or — what I find more interesting — pick a group of breeds and see all the different ways they link up.
Here's the group linkages for "toy spitz", which contains my personal fave breed, the Pomeranian. (My twitter handle is @pomeranian99, and there's a Ukrainian-Canadian farming story behind it, heh.) Anyway, you can see that breed's adorable happy little face jutting out slightly at 2 o'clock in the screenshot below.
The linkages surprised me — I didn't know it was related to the Pug, for example …
Here's the "retriever" group …
… and if you hover over any particular breed, it shows you the level of relatedness between any two breeds, with the numbers getting higher the more related they are …
The Lamborghini Huracan they used, which was specially modified for such tasks with a refrigerated frunk, was obtained by the Italian police back in 2017. Speedy organ delivery isn't its only job, though. In fact, it's a regular patrol vehicle for the department with lights, a police computer and other equipment necessary to perform traffic stops and arrests. [snip]
It's unclear why a helicopter or something of the sort wasn't used, as that would've likely been even faster. Perhaps none were available, or there would've been a delay to get one. That's not to say this wasn't an effective use of the Huracan's 5.2-liter, 602-horsepower V10, though. If anything, it's probably one of the best ways to use a supercar.
(The Creative-Commons-3.0-licensed photo of an Italian police Lambo, above, is courtesy Wikimedia)
There are lots of reasons COBOL's still around, but one that's intriguing is that three-decade-old code is stable as hell. You have code in production that long, it's been debugged within an inch of its life; and they've had decades to tweak the compilers — which turn the COBOL into instructions for the machine — for massive efficiency.
This turns the new-new-thing hype of Silicon Valley on its head, of course. It's a sector obsessed with the hottest, latest thing — but as any programmer can tell you, the hottest, latest thing that just got shipped at 2 am is gonna be a rickety kluge of buggy code. That old COBOL, though? In service since the days of early MTV? It's tough as old boots.
Remember the hype cycle this spring about how unemployment systems in various US states — pounded by a surge of new COVID-era claims — couldn't keep up with demand? The culprit, state officials said, was COBOL: Those antique systems just couldn't keep up!
This idea — that older code can not only be good, but in crucial ways superior to newer code — is at odds with a lot of Silicon Valley myth making. Venture capital-backed startups usually tout the shiny and novel. Founders do not prance around boasting about how old their codebase is. Quite the opposite: They brag about their code being cutting-edge, pounded out in all-night sessions by bleary-eyed genius 21-year-olds. But as nearly every programmer will tell you, the newer and more recently written the software, the more likely it is to be a hot mess of bugs.
A good example of this could be witnessed during the pandemic. In the early days of Covid-19, businesses shut down en masse. Laid-off employees swarmed online to apply for unemployment benefits, and the websites for many state governments crashed under the load. In New Jersey, the governor told the press that their COBOL systems desperately needed help to deal with the new demands. "Literally, we have systems that are 40-plus-years-old," he noted.
But technologists who were working behind the scenes to fix the problems knew that the number-crunching COBOL wasn't the problem. That old stuff was working fine. No, it was the newer stuff that had crashed — the programs powering the website itself.
"The thing that went bananas was this web application in between the mainframe and the outside world. That was the thing that sort of fell apart," says Marianne Bellotti, a programmer and writer who worked for years on government systems, and who observed New Jersey's system. But it's too embarrassing, as the historian Hicks points out, to admit that "oh, our web systems broke down."
Bellotti's seen the same thing happen with other government agencies, like the IRS. She was called in once to help with an IRS web app that wasn't working. When they investigated, they found that, indeed, the problem was in newer programs, "this chunk of poorly written Java code". The mainframe running COBOL, in contrast, was racing along like a Ferrari.
"The mainframes," she says, "were responding within milliseconds."
That photo is of Grace Hopper, holding a rad-looking COBOL manual. Hopper is often referred to as the "mother of COBOL", but this is not quite true. It's true that she deeply influenced the development of COBOL, particularly by creating the precursor language FLOW-MATIC (possibly the most metal name ever for a computer language).
Inspired by rising literacy rates and advancing technologies, the nineteenth century saw the book transform from a largely hand-made object to a mass-produced product. In this new environment the book cover took on added importance: it was no longer merely a functional protection for the pages but instead became a key platform through which to communicate and sell the book. Prior to this covers had — bar a smattering of highly bespoke one-off creations (e.g. embroidered covers for personal libraries) — mostly been plain leather bound affairs. From the 1820s, with the rise of mechanical bookbinding, these leather covers of old gave way to new cloth coverings which, in addition to being inexpensive, were now also printable. A wide variety of cover printing techniques were employed over the decades: from embossing to gilt to multi-colour lithography. A totally new artistic space was opened up. As you can see in our highlights below it was one in which illustrators and designers flourished, producing a range of covers as eclectic in aesthetic approach as the myriad contents they fronted.
You can see them all at the Public Domain Review's web site, but here are a few of my faves — though it was hard to pick from this bouquet of gorgeousness …
It's really hard to convey how big, how complex these wall pieces are. I'd like to explain a bit how I came to make these. You've all seen here the work I do based on living things: birds, other critters, people, and anatomy. I use fairly small typewriter components for that work because it tends to better represent the kinds of anatomical features in people and animals. Over the years I've had to jettison some of my typewriters and components because I had to move or in lean times when I couldn't afford a studio, but I've had my current palette of parts and machines for about 15 years. I've been carrying around in that time almost literally a ton of parts that are big and bulky and for which I haven't found an application in my sculpture. So I find that I have, for example, scores of chassis from IBM Selectrics, five or six Olympia frames, or 15 Remington platens. After my residency in Mumbai, India at Godrej, at which I made, with a lot of help, wall-hanging "mandalas" and a 13-foot tall lotus, I realized that I could take all of these matching components that I've been carrying around from studio to studio to make more mandala-like reliefs using the same methods I'd learned. This one here is made from 3 Olympia desktops (one of which you'll notice was an Arabic machine, which was given to me by Tom Hanks, broken badly), keyboard parts from a few Smith Coronas, some mustard yellow Royal parts, some big gold anodized parts from Selectrics, Selectric frames and keys, and thousands of other connective springs, screws, nuts, pins, bits, and bobs. There are about 3,000 parts. I lost count of the hours I worked on it, but you might be able to imagine this taking a few months of full-time work. The sculpture is 5 feet in diameter and weighs about 80 lbs.