Former FX tech person points out the racist trajectory of skin and hair CGI

Theodore Kim is a Yale computer science professor who's worked for years on techniques for rendering computer-generated humans; he used to be a senior research scientist at Pixar.

In Scientific American, he wrote a fascinating piece detailing the racial dynamics of CGI. While computer graphics has become enormously better in recent decades at reproducing the complex optical physics of skin, the field has focused most of its effort on mastering the look of white skin, in particular its "translucency".

This has knock-on effects for stardom, as Kim notes:

However, translucency is only the dominating visual feature in young, white skin. Entwining the two phenomena is akin to calling pink Band-Aids "flesh-colored." Surveying the technical literature on digital humans is a stomach-churning tour of whiteness. Journal articles on "skin rendering" often feature a single image of a computer-generated white person as empirical proof that the algorithm can depict "humans."

This technical obsession with youthful whiteness is a direct reflection of Hollywood appetites. As in real life, the roles for computer-generated humans all go to young, white thespians. Consider: a youthful Arnold Schwarzenegger in Terminator Salvation (2009), a young Jeff Bridges in Tron: Legacy (2010), a de-aged Orlando Bloom in The Hobbit: The Desolation of Smaug (2013), Arnold again in Terminator Genisys (2015), a youthful Sean Young in Blade Runner 2049 (2017), and a 1980s-era Carrie Fisher in both Rogue One: A Star Wars Story (2016) and Star Wars: The Rise of Skywalker (2019).
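For readers curious what "translucency" means in rendering terms: it's subsurface scattering, light entering the skin, diffusing beneath the surface, and re-emerging a short distance away. One widely used analytic approximation of that re-emergence is the Christensen-Burley normalized diffusion profile; here's a minimal sketch (my illustration, not code from Kim's work; the shape parameter `d` is an assumed constant):

```python
import math

def burley_profile(r, d):
    """Christensen-Burley normalized diffusion profile R(r): an analytic
    approximation of how light entering skin at one point re-emerges at
    radius r. A larger d spreads the profile out, giving the softer, more
    translucent look the quote describes. (Illustrative sketch only.)"""
    return (math.exp(-r / d) + math.exp(-r / (3.0 * d))) / (8.0 * math.pi * d * r)

def total_diffuse_response(d, r_max=50.0, n=200_000):
    """Midpoint-rule integral of R(r) over the plane, i.e. R(r) * 2*pi*r dr.
    The profile is energy-normalized, so this should come out close to 1."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = (i - 0.5) * dr
        total += burley_profile(r, d) * 2.0 * math.pi * r * dr
    return total
```

The point of the normalization check is that the profile conserves energy; what it does not do, on its own, is capture the very different scattering behavior of darker skin tones, which is where the research attention Kim describes has been missing.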

The same thing goes for hair:

The technological white supremacy extends to human hair, where the term "hair" has become shorthand for the visual features that dominate white people's hair. The standard model for rendering hair, the "Marschner" model, was custom-designed to capture the subtle glints that appear when light interacts with the micro-structures in flat, straight hair. No equivalent micro-structural model has ever been developed for kinky, Afro-textured hair. In practice, the straight-hair model just gets applied as a good-enough hand-me-down.

Similarly, algorithms for simulating the motion of hair assume that it is composed of straight or wavy fibers locally sliding over each other. This assumption does not hold for kinky hair, where each follicle is in persistent collision with a global range of follicles all over the scalp. Over the last two decades, I have never seen an algorithm developed to handle this case.
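To make Kim's straight-hair point concrete: the Marschner model treats each fiber as a smooth cylinder with slightly tilted cuticle scales, and its primary specular ("R") lobe is a narrow Gaussian in the longitudinal angle. Here's a toy sketch of just that longitudinal term, with the azimuthal term and absorption omitted (the `alpha` and `beta` defaults are illustrative, not production values):

```python
import math

def gaussian(x, mean, stddev):
    # Normalized 1-D Gaussian density.
    return math.exp(-0.5 * ((x - mean) / stddev) ** 2) / (stddev * math.sqrt(2.0 * math.pi))

def longitudinal_R(theta_i, theta_r, alpha=math.radians(-3.0), beta=math.radians(7.0)):
    """Longitudinal scattering M_R for the primary specular lobe of a
    Marschner-style hair model. theta_i / theta_r are the incident and
    reflected inclinations relative to the fiber's normal plane.
    alpha: lobe shift caused by the tilted cuticle scales of straight hair;
    beta:  lobe width. Both defaults are illustrative sketch values."""
    theta_h = 0.5 * (theta_i + theta_r)  # longitudinal half angle
    return gaussian(theta_h, alpha, beta)
```

The "subtle glints" Kim mentions come from this narrow, shifted lobe. Both the single-smooth-cylinder geometry and the fixed cuticle tilt are assumptions fit to straight fibers, which is exactly the limitation he's describing: applying this to kinky, Afro-textured hair hands it a model built for someone else's hair.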

This is, alas, an old story in visual media. In the early years of photographic film and TV cameras, the technological focus was, as with today's CGI, on capturing white faces, so the tools were terrible at capturing Black and brown ones. Back in the 1960s, Kodak got complaints from African-American mothers who pointed out that class photos of their children looked noticeably less detailed than those of their white classmates, because "the film wasn't calibrated to deal with that kind of range of exposure," as a photographer later discovered.

Kodak was aware of the problem but didn't do anything about it until, as the Concordia University scholar Lorna Roth has written, it heard from manufacturers of chocolate and wooden furniture, who complained that Kodak film wasn't showing off the details of their products in advertising. So, chocolate bars and coffee tables: that, finally, got Kodak to produce film that was slightly better at capturing darker tones.

You can't make this stuff up. On top of camera and film technology, there's lighting: Lighting standards for film and TV have been fine-tuned for white skin, such that today's techs can be utterly hapless at lighting everyone else, as Harvard professor Sarah Lewis wrote about here.

Anyway, Kim's point in Scientific American here is a really good one. The question of who gets to be in front of the camera has a lot of layers — but one of them is the subterranean choices, made years or decades earlier, by the makers of imaging tools.