Interview with developer of 2MP cameras taking those amazing Mars photos on the Curiosity rover

As regular readers of this blog will recall, I asked the Mars Curiosity team a question about imaging technologies during the post-landing press conference at NASA JPL a few days ago.

Related: Digital Photography Review now has an interview with the Mars rover camera project manager, Mike Ravine of Malin Space Science Systems, a NASA contractor. Above, the 34mm (115mm equiv.) Mastcam from the Curiosity rover, developed by Ravine and his team. In the interview, Ravine explains how they developed the 2MP main imaging cameras used to transmit those breathtaking images back from Mars.

The slow data rates available for transmitting images back to Earth and the team's familiarity with that family of sensors played a part in the choice, says [Ravine], but the biggest factor was that the specifications were fixed as far back as 2004. Multi-shot panoramas will let the cameras deliver high-res images, he explains, but not the 3D movies Hollywood director James Cameron had wanted.

'There's a popular belief that projects like this are going to be very advanced but there are things that mitigate against that. These designs were proposed in 2004, and you don't get to propose one specification and then go off and develop something else. 2MP with 8GB of flash [memory] didn't sound too bad in 2004. But it doesn't compare well to what you get in an iPhone today.'

(thanks, Michael Kammes)


  1. Not using the latest tech also has the advantage of giving you a few years of worldwide troubleshooting on your device and its software before you, you know, launch the thing into space; no emergency firmware upgrades required a week after launch.

    Still, you’d think there’d be precedent for putting in some kind of designer’s escape clause: “if the project has not launched in four years, then the optical and memory systems may be reassessed to include more powerful technologies not considered feasible at the time of the original proposal.” But I guess then you’d never get off the ground, because each part of your rover’s tech build would keep leap-frogging itself into total stagnation.

  2. The rover camera uses the KAI-2020 sensor, which has 7.4 µm × 7.4 µm pixels. The iPhone 4S uses a sensor with 1.4 µm × 1.4 µm pixels. When you shrink the pixels, you increase noise and lower dynamic range, and the noise in turn eats into fine detail. You might think your phone’s 8MP camera is four times better than the ones on the rover, but judged by pixel pitch it’s more like five times worse (7.4/1.4 ≈ 5.3, or roughly 28× the light-gathering area per pixel); see the back-of-the-envelope calculation after the comments.

    1. The camera has RGB (and other) filters that rotate in front of the sensor at some point in the optical chain. Three B&W exposures taken through those filters can be combined into the color images we see, with higher color fidelity than a native color (Bayer) sensor delivers; a minimal compositing sketch follows the comments.

  3. >> “you don’t get to propose one specification and then go off and develop something else.”

    Um, this kind of thinking always bothers me. It sets up a false either/or premise, as if components of a long-term project that are *known to be under accelerated development* cannot be anticipated and planned for, so that, you know, you don’t end up with a $2 billion mission flying an eight-year-old (ancient) camera spec.
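To put comment #2's pixel-size argument in concrete terms, here is a quick back-of-the-envelope calculation in Python. The pixel pitches are the figures quoted in that comment; the "five times" claim is the linear ratio of the pitches, and squaring it gives the per-pixel light-gathering advantage.

```python
# Rough per-pixel comparison: Mastcam's KAI-2020 vs. the iPhone 4S sensor.
# Pixel pitches are the figures quoted in comment #2 above.

MASTCAM_PITCH_UM = 7.4  # KAI-2020 pixel pitch, micrometers
IPHONE_PITCH_UM = 1.4   # iPhone 4S pixel pitch, micrometers

linear_ratio = MASTCAM_PITCH_UM / IPHONE_PITCH_UM  # ~5.3x wider pixels
area_ratio = linear_ratio ** 2                     # ~27.9x the area

print(f"Linear pitch ratio: {linear_ratio:.1f}x")
print(f"Light-gathering area per pixel: {area_ratio:.1f}x")
```

Bigger photosites collect more photons per exposure, which is where the noise and dynamic-range advantages come from.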
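And here is a minimal sketch of the filter-wheel technique described in the reply to comment #2: three aligned grayscale exposures, shot through red, green, and blue filters, stacked into one color image. The file names and the crude global contrast stretch are illustrative assumptions, not the actual MSSS calibration pipeline.

```python
import numpy as np
from PIL import Image

def composite_rgb(red_path, green_path, blue_path):
    """Stack three aligned grayscale exposures (shot through R, G, and B
    filters) into a single color image. Paths are hypothetical examples."""
    channels = [
        np.asarray(Image.open(path).convert("L"), dtype=np.float32)
        for path in (red_path, green_path, blue_path)
    ]
    rgb = np.stack(channels, axis=-1)  # shape: (height, width, 3)
    # Crude global stretch to the 8-bit range; a real pipeline would apply
    # calibrated white balance and flat-field corrections instead.
    rgb = 255.0 * (rgb - rgb.min()) / (rgb.max() - rgb.min())
    return Image.fromarray(rgb.astype(np.uint8))

# Example usage (hypothetical file names):
# composite_rgb("filter_red.png", "filter_green.png", "filter_blue.png").save("color.png")
```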
