Fake CGI always looked cooler than the real thing

Neil Emmett writes about fake computer graphics, as so often found in 1980s sci-fi movies and TV productions that were as inexpensive as they were ahead of their time.
Primitive digital imagery has had something of a resurgence across the past decade or so, to the point where pastiches of 8-bit pixel graphics have found their way into mainstream productions such as Wreck-It Ralph. Perhaps it is time that the animators and digital artists of today rediscovered the lesser-known cousin of this aesthetic: the strange world of pseudo-CGI.


  1. Basically, because the CGI looked like crap, they were forced to be artistic.  These days the focus is more on making it look as real as possible, but reality is boring compared to the artistry of the old system. 

    1. Even where reality isn’t boring, attempting to fake reality just doesn’t age all that well.

      At best, your fake will be as good as next year’s fake, making it useless for nostalgia purposes and indistinguishable from stuff that isn’t already being sold as filler for late night cable.

      More likely, next year’s fake is better than your fake, because the technology has advanced; but it will take years, possibly decades (if ever), for your fake to be primitive enough to acquire a certain nostalgic charm, and it will be merely inferior in the meantime.

      1. You guys are missing the bigger picture here: the article is about fake fakes. It’s about a physical 3D model of a city, painstakingly dressed to look like a bad 2D representation of a city. It is about a real person covered in make up and advanced special effects to make him look like a fictional visitor cheaply brought in from the uncanny valley.
        Why does any of it work, ever? Suspension of disbelief. If only the context is engrossing enough, then the effects can be technically good, bad or mediocre and the audience will still enjoy it. 
        However, the win-win situation about faking fakes is that the audience can buy into either one, or both, of the meta-narratives. 

        1. I wouldn’t say they’re missing the bigger picture. It’s all about different aesthetics brought about by different artistic approaches and technologies.

          I usually prefer more or less stylized visuals for video games and movie CGI to the hunt for “realism”, even when they’re based on the same technology.

          1. Same here, but I think that this is because the goals of aesthetic beauty, intentional and unified artistic style, and so on are what is being focused on.

            When the goal is to try to simulate “reality” we get bogged down in what reality looks like which is actually kind of silly in an artistic medium.

            To be honest, this just signals to me that CGI is joining the rest of the art mediums. Pretty cool. Stonecarving and painting have been waiting a long time for new friends.

        1.  Definitely. I was a junior member of that team, and I learned a lot from him and the other senior people as well. It was pretty awesome. :)

      1.  I’ve always loved that logo animation. It helped get me into being a motion graphic designer.

    2. I think this is changing. I work in motion graphics and visual effects both on feature film and commercials and there’s a lot more drive towards artistic interpretations that make waves both at the box office and in broadcast. I think the difference is that there is polish associated with the artistry – we’ve got very, very good at faking it and we’ve got incredible tools to cover for it.

      Films like Sin City, 300, Scott Pilgrim, Kick Arse and the hundreds of other less well known ones that don’t focus on photo-realism have led the way in this regard. Hugo, which won an Oscar for VFX, is another example of a non-photo-real film. We didn’t try to make it photo-real, we tried to make it cohesive and its own thing.

      I agree some incredible things can come out of the need to innovate but I think these days the innovation is a long way from can-we or can’t-we do it. The innovation is more in ‘what is the best way to do it’ and now, more than ever in the past, that approach doesn’t necessarily include photo-realism. I think that’s pretty cool.

  2. The “Money for Nothing” video had a lot smoother motion in SD on the old TV than on the YouTube version.  Maybe my Flash player in Chrome is borked.

    1. Well, in a sense, it could be argued that some of them were the exact opposite of fake wireframes, because they were literally metal wireframe models being rotated and photographed. (The rotating engineering drawing of the antenna unit comes to mind; that was actually a physical wire model.) None of the “computer graphics” in 2001 were computer graphics, but at least some of them contained real wireframes. This is documented in Tom Sito’s “Moving Innovation: History of Computer Animation”:

      http://books.google.com/books?id=WOwyRnZ1oxoC&pg=PA148&lpg=PA148&dq=%222001+a+space+odyssey%22+wireframe&source=bl&ots=YRhnXjL9w1&sig=C-5VQCkjZeikWINhEwfufVI5kEM&hl=en&sa=X&ei=D–rUZeWLO2j4APlmYHgCA&ved=0CGMQ6AEwCQ#v=onepage&q=%222001%20a%20space%20odyssey%22%20wireframe&f=false

      We call hollow computer graphics “wireframes” because making models out of real wire used to be a common technique in industrial design. (The wires would be shaped to match the cross sections on the engineering drawings, allowing you to get the model shaped just right.)

      The planet Jupiter itself was also sort of a wireframe — it was long exposures of a rotating semicircle of wire with a cloud painting projected onto it… essentially an analog simulation of texturing a CGI surface of revolution. You can see photos of “the Jupiter machine” in Jerome Agel’s “The Making of Kubrick’s ‘2001’”, on pages 138-140 of the recently-posted PDF. (These techniques were re-used to make Saturn in Doug Trumbull’s “Silent Running”.)

      Doug Trumbull and the rest of the “2001” effects crew were brilliant at coming up with new ways to make images using different combinations of simple photographic techniques, and the stuff sure looked cooler than the primitive computer graphics in “2010”.
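[Editor’s note: the “analog surface of revolution” described above maps directly onto how the digital version works: sweep a 2D profile curve around an axis. A minimal Python sketch, where the semicircular profile and step count are arbitrary illustration values, not anything taken from the film’s actual rig:]

```python
import math

def surface_of_revolution(profile, steps=36):
    """Rotate a 2D profile curve (a list of (radius, height) points)
    about the y-axis, returning 3D surface points -- the digital
    analogue of spinning a shaped wire and photographing it."""
    points = []
    for r, y in profile:
        for i in range(steps):
            theta = 2 * math.pi * i / steps
            points.append((r * math.cos(theta), y, r * math.sin(theta)))
    return points

# A semicircular profile swept around the axis approximates a sphere,
# just as the Jupiter rig's rotating wire arc did.
profile = [(math.sin(t * math.pi / 10), -math.cos(t * math.pi / 10))
           for t in range(11)]
sphere = surface_of_revolution(profile)
```

Projecting a painted texture onto the spinning shape, as the “Jupiter machine” did optically, is exactly what texture-mapping such a generated surface does numerically.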

      1. This comment is so interesting and full of detail that it almost deserves its own post.

        1. The slit-scan effect developed in ‘2001’ to create the trippy sequences toward the end was also used to fake CG from time to time. Back in 1988 one of my colleagues at R/Greenberg Associates (rga.com) used it to make the original ‘The More You Know’ animated comet, which was later replaced by a not too different-looking CG version produced elsewhere.

      2. We always used ‘wireframe’ to refer to CG in which the ‘hidden line removal problem’ hadn’t been solved. Wireframes were fast to compute, but could be confusing to watch, and sometimes seem to turn inside out depending on how you interpret them.

        By the time we did the wireframe animation for Disney’s ‘The Living Seas’ in 1985 (for example), our wireframes were opaque and we just called them ‘pencil tests,’ after the cel animator’s term.


    2.  There’s a lot of fakery that goes on in modern film as well. A lot of what people assume is CG is actually real, or what people assume is real is actually CG.

      A modern example of the former would be the new film Oblivion where the clouds in the sky apartment are all actually rear projected on screens around the whole place. The clouds themselves were filmed in Hawaii.

      Another less obvious example would be a section of Game of Thrones series one where apparently there was snow on the ground one morning, but for continuity they needed grass. Instead of replacing the ground in post (a proverbial nightmare) the VFX supes and producers organised a team with flame throwers to come and melt the snow. Practical *and* awesome.

      1. That is cool, although I’ve seen the on-set fix misused sometimes. I was on a location shoot once where the DP was spending a lot of time having a certain area cleared. I told him we could fix it easier and cheaper in post, but he waved me off for a bit. Finally he told me that if the footage was seen at studio dailies with problems in it he wouldn’t be there to defend the decision to leave the clutter on set.

      2. A couple years ago, a bunch of movies were getting filmed in Michigan.  A few scenes from Five Year Engagement were filmed around where I live.  The “snow” they used for a winter scene, which was filmed in July, was nothing more than cotton felt/batting and glitter.  

  3. I’m just not interested in copying anyone else’s style or substance. The field is still so new. I haven’t seen that many artists working in digital who have a deep appreciation for contemporary and modern art history. Very few digital artists even try to incorporate the advanced ways of thinking about art or about making art. I am going to keep trying to incorporate everything I have learned about making art and exploring the new possibilities of computation in art. 8-bit is fun, sure, but it’s ultimately a cop out. It’s treating the computer as a cultural artifact rather than a tool with astounding never before seen capabilities. Call me a Classicist.

    1. “I haven’t seen that many artists working in digital who have a deep appreciation for contemporary and modern art history. ”

      We get driven out of the art department, and walk away from the attempt in disgust. Seriously. This is exactly how I ended up doing what I do instead of finishing my MFA. Maybe it’s different now, but to be honest people won’t *accept* your work on the above terms. Everything has to be ephemeral… an apparent reflection of contemporary society that upon closer examination is simply a confirmation of the medium’s limitation. As you say, a cop out. It’s an insecure time, people want secure art. When I look at Art Forum right now, for instance… I see stable art and stable media, with a few blips that ultimately almost work to confirm the need to see computers as a shallow artform.

      I see fear. 

      God I hate the world of Art; not artists or the work, but the culture of Art. I loathe the thing.

      And perhaps that’s also part of the problem you bring up. Artists working in this medium are perhaps more inclined to reject that whole system, and view the cultural artifact as the more significant aspect. 

      Nothing will ever change what Art is, but if computers become as accepted as painting they will have to become as static.

      Art confirms, even in the guise of challenging, it always confirms. It can’t do anything else. Art is like the pool Narcissus looked into if you like what you see. And like Echo’s call to Narcissus if you don’t like it. But either way it’s a reflection that confirms something. 

      I don’t know if it is possible for art to negate anything. You have black and white canvases, erased drawings, machines that destroy themselves, installations that are basically gallery frames installed around 4Chan, but it’s still all confirmation.

    1. Absolutely. In “2001”, most of the “computer” displays were rear-projected onto perfectly square film screens — the weird aspect ratio doesn’t date the way a picture of an old 4:3 CRT does, such as the big chunky 1980s CRTs in “2010”. (When modern children see one of those CRTs, they always ask, “Why does it go back so far?”)

      And, of course, the NewsPad (on which Dave Bowman watches the BBC12 broadcast in “2001”) was an amazing magic trick. The set had to be constructed so that a film projector could be hidden under the desk to make it look as if the NewsPad was thin, and hanging the corner of the NewsPad off the edge of the desk really sells the illusion. (If you look closely, when the two NewsPads are showing the same program, their films are a frame out of sync.) Those NewsPads were pretty stunning visions of 21st-century computing…

      …whereas “2010” has Heywood Floyd using an Apple IIc (sitting atop an issue of “Omni” magazine, if I recall correctly.) That scene is now so dated that he should be wearing Zubaz and drinking New Coke.

  4. Some more of my favorite fake computer graphics in movies: “Star Trek: The Motion Picture”. While some of the background film loops (on the little elliptical monitors on the Bridge) are stock footage of CGI (from various people’s scientific visualizations), all the “tactical” displays on the big screen are cel animation.

    I particularly like the one showing Vejur’s plasma weapons splitting up as they encircle the Earth — it’s all in shades of blue, except there are animated red laser beams highlighting the activity. The animators were careful to draw the red lines in a scratchy way to make them scintillate like laser beams.

    “Star Trek II: The Wrath of Khan”, of course, has some famous real computer graphics, most notably the simulation of the Genesis device melting a moon into a new planet, complete with fractal terrain and a particle system. If you look closely, you can see that while the sequence was being rendered, the programmers saw that the camera was going to crash into one of the procedurally-generated mountains, so suddenly a narrow valley appears in the middle of that mountain. (The programmers manually edited a few points’ altitudes between frames.)
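[Editor’s note: for readers wondering what “fractal terrain” means in practice, the textbook form of the idea is midpoint displacement: subdivide an edge, jitter the midpoint by a random amount, shrink the jitter, and recurse. A minimal 1D Python toy follows; it is a hedged illustration of the general technique, not the actual Genesis-sequence code, which subdivided 2D surfaces:]

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, rng=None):
    """1D midpoint displacement: repeatedly split each segment and
    offset the new midpoint by a random amount that shrinks at each
    level, yielding a jagged, self-similar ridge line."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    heights = [left, right]
    spread = 1.0
    for _ in range(depth):
        new = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            new.extend([a, mid])
        new.append(heights[-1])
        heights = new
        spread *= roughness  # smaller jitter at finer scales
    return heights

# 2**6 segments -> 65 height samples along one mountain profile
ridge = midpoint_displacement(0.0, 0.0, depth=6)
```

The manual fix described above — editing a few points’ altitudes between frames to carve a valley — is easy to picture here: just overwrite a handful of entries in the generated height list.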

    The shot of the computer scanning Kirk’s eyeball right before that clip is also computer animation, though it’s so cartoony that it probably would have looked the same if it were drawn by hand.

    The starfield that the camera moves through during the closing narration is also computer-animated; that was probably the first time you saw the Evans & Sutherland “Digistar” system. Evans & Sutherland made flight simulators, and had entered the planetarium market — anyone my age remembers planetarium projectors as these giant globes covered with hundreds of little lenses, but eventually they were displaced by things that could project computer-animated video. (The first “Digistar” system, as seen in the movie, couldn’t do much more than make white dots and vectors, but its descendants can do full-color video.)

    Anyway, with regards to “Star Trek”, it was right between the first movie (1979) and the second movie (1982) that computer effects started to be feasible for movies (though it would be a few more years before people were mostly using them for things that were meant to exist in the real world, as opposed to things that were meant to look “computery”.) The starfield in “Star Trek II” wasn’t the first computer-rendered image in movies, but it was the first one I can think of that was used to represent something other than a computer display.

    If you want to see really early movie CGI, a good example is “Futureworld” — the dumb-ass sequel to “Westworld” — where a few monitors display simple grayscale polyhedral representations of a rotating hand and a rotating face. Those were Ed Catmull digitizing his body parts while he was still in college. He went on to develop many of the basic techniques and technologies involved in modern animation, such as the RenderMan software. I find the film of his hand to be striking in that it is a huge advance over everything that came before it, yet it also has a primitive quality that makes it seem like something from a century ago, as if it were filmed with a flickery hand-cranked camera… Here’s the background on his hand, and a copy of the film itself.


    1.  your comment is better than the linked-to article.  the vimeo link is awesome!

      1. Thank you. I pride myself on being that type of nerd who, when someone mentions episode #835* of “Doctor Who”, can tell you exactly what sort of wax paper was used to make the monster look like it was made out of wax paper. I try to enjoy everything on three levels: The “follow-the-story” level, the “how-was-this-made” level, and the “wow-this-is-terrible-yet-I-am-fascinated-by-trying-to-figure-out-why-I-can’t-stop-watching-it” level. In other words, there’s something wrong with me that cannot be exploited for commercial gain but lends itself to writing long posts about old sci-fi TV shows and movies.

        (I can ruin any old TV show for small children, especially when I freeze the frame and use a grease pencil to draw a line between Shatner’s toupee and hair.)

        * Of course there was no 835th episode of “Doctor Who”… because they ran out of wax paper.

  5. And sometimes very badly faked fake CGI is even still very cool. Just like this example: Ooooooh….My Files. Warning, what has been seen cannot be unseen. But we’ve all seen this, haven’t we?

    1. That’s way cool, for 1989! It obviously took a ton of work!

      Is this one from the same series?

      (I love the moonwalking dog…)

      It makes me wonder just how uncomfortable 8-bit footwear would be, with all those sharp-cornered pixels…

      1. Hey, look at that! That’s one of the 3 we did. I’m amazed to see it online. The animator was Russell Calabrese who has gone on to direct all kinds of notable things since then.

  6. Another childhood memory of fake computer graphics: the hand-drawn screenshots for Atari video games in the Sears catalog, where the artists would cheat and put in things like diagonal lines, or curvy speed trails. These days, game publishers still often use Photoshop to tweak the screenshots they put on the back of the box, but usually they don’t resort to such ludicrously impressionistic fakery.

    http://imbriumbeach.com/wp-content/uploads/2012/03/Image00953.jpg

    (Note the curving ball trail and relatively smooth diagonals in “Tennis”, and “Pitfall” has more items on the same screen than you’d see in the real game. Both pictures look somewhat like the games in question but are still obvious fabrications.)

    The earliest NES cartridges also had box art with hand-drawn fake 8-bit graphics. For instance, there was one where a soccer ball was rendered as a polka-dotted dodecagon.

    http://www.racketboy.com/images/soccer.jpg

    …the slight mis-registration of the color printing plates gives it an interesting (but accidental) cut-paper look. It’s like “South Park” but without all the funny.

  7. One classic example is the television adaptation of The Hitchhiker’s Guide To The Galaxy: fabulous “computer graphics” for The Book, which inspired much admiration and inquiries from the CGI people of the day – until they discovered they’d all been done by hand.

  8. Another great, more modern example would be The Matrix.

    Bullet time was awesome and new(ish) and amazingly analogue.

    When the Wachowski siblings got monies for parts 2 and 3 and decided to do the multi-Smith fight all CG – well, it killed the magic.  It would have been better to have a smaller scene, done the old way.

    Watching this scene I can’t help but see the CG shots – they just pop: real, real, FAKE, real, FAKE, FAKE, FAKE, real, real, FAKE, real, FAKE etc.
