HOWTO write more secure free/open source software

Having recently conducted a security audit of several free/open source software programs for the Electronic Frontier Foundation, Chris Palmer and Dan Auerbach have published some guidelines for improving security in free/open software:

Avoid giving the user options that could compromise security, in the form of modes, dialogs, preferences, or tweaks of any sort. As security expert Ian Grigg puts it, there is “only one Mode, and it is Secure.” Ask yourself if that checkbox to toggle secure connections is really necessary? When would a user really want to weaken security? To the extent you must allow such user preferences, make sure that the default is always secure.
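As a hedged sketch of what "only one Mode, and it is Secure" can look like in practice (the function name and structure here are illustrative, not from the EFF guidelines): expose an API that simply has no knob for weakening security, so neither the UI nor a careless caller can downgrade it.

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Return a TLS context with verification always on.

    Deliberately takes no 'verify=False' or 'insecure=True' parameter:
    the API offers no knob that weakens security, so no checkbox to
    toggle secure connections can ever be wired up to it.
    """
    context = ssl.create_default_context()
    # create_default_context() already enables certificate verification
    # and hostname checking; assert it so a future refactor cannot
    # silently downgrade the defaults.
    assert context.verify_mode == ssl.CERT_REQUIRED
    assert context.check_hostname
    return context
```

A caller who genuinely needs a weakened context (say, a test harness) has to build one by hand, outside this API, which keeps the insecure path out of anything end users see.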


    1. “Avoid giving the user options…”
      immediate disagree. trading freedom for security…

      I disagree. Part of good engineering is anticipating things your users will do wrong and actively preventing them. Example: the other day an acquaintance of mine tried to fill their car with diesel instead of regular gasoline (petrol). They were prevented from doing so because the dispenser nozzle was the wrong size and shape to fit into the car. When they told me the story, I thought, “a ha, some engineer anticipated that stupidity.” Another example: washing machines, saws, environmental test chambers, etc. that turn themselves off when you open the door or safety cage.

      It shouldn’t even be possible to make insecure connections with consumer technology.

      1. don’t get me wrong, i agree with it as a default. my problem is with not giving the option at all. they could leave it in an ‘advanced’ section so as not to clutter the ui, and to dissuade users from changing it. they could put a warning on it, etc. but an engineer often cannot predict every situation, and sometimes you might want to turn off security.

        and i know it is open source and “everyone” can change it. but “everyone” isn’t a programmer.

        plus the article isn’t even about “options to turn off security”, it’s “options that could compromise security”. sometimes you are willing to make that trade-off for a specific (often temporary) purpose or to achieve a certain goal.

        all i’m saying is i don’t want to be limited for my own safety. warnings would be good, but if i have clicked through a warning to enable it, it means i know what i’m doing.

        it’s like if cars couldn’t go faster than 100kph unless you are a mechanic because non-mechanics presumably can’t conceive of the danger. but sometimes my child has severed its arm or whatever. bit melodramatic but there you go. haha

    2. Well, since it’s open source, anybody can turn off the security stuff anyway if they really want or need to. That does not mean you should confront every user with a potential security hole. There is a freedom in knowing you will not be exposing private data to the world.

    3. Their meaning was to avoid giving the user *security-related* options, like the ability to turn important security features off. This makes sense. If an attacker gets through a slightly weak area of the system and manages to turn off the user’s security options, he has massively weakened the protection measures, leading to wider system access and user monitoring.

  1. ‘there is “only one Mode, and it is Secure.”’
    Doesn’t that sound a bit like a fruit-flavoured tech company which you despise? 

  2. re: ‘avoid giving the user options’.   Why am I reminded of LA Story?

    “Your usual table, Mr. Christopher?”
    “No, I’d like a good one this time.”
    “I’m sorry, that is impossible.”
    “Part of the new cruelty?”
    “I’m afraid so.”

  3. Sometimes users know what they want to do, and want to do something the software maker did not anticipate.

    And by sometimes, I mean often.

    1. Programmers frequently make the mistake of assuming that, just because they wrote the software, they know what it is for. I don’t mean that sarcastically, but as a literal truth.

      Imagine what Twitter would be like if the authors insisted that you post only what YOU were doing, RIGHT NOW. (That was the original idea, after all.)

    1. I see here a perfect set of rules for *any* kind of software; why confine it to FOSS?

      Because security has costs associated with it: would you object to all bicycles being sold with a 50lb chain welded to the frame? After all, they would be more secure that way.

      A security layer adds a processing and data overhead that may be unacceptable. I certainly wouldn’t be happy if my phone’s mail client forced me to use SSL over a GPRS connection, just as Cory might not be happy to pay for a server upgrade so we can all benefit from the added security of accessing this site via https.

      In some contexts, Grigg’s hypothesis is valid, but not always. Even in the case of Skype – which he cites as something that is easy to set up securely because secure is the only mode, in contrast with, say, an email client – the cost of encryption is that it requires more powerful systems than would be needed if there were no encryption.

  4. I want to mention that 95% of users do not change any settings (link) and leave them at their defaults. I certainly agree that security features should be enabled by default.

    Remember that freedom comes in two flavours: “freedom to” and “freedom from”. As a developer, I kinda want the freedom to disable security features (mostly for debugging issues), but I know it’ll eventually come back to bite me in the ass if that option exists. I’m glad that microwaves always stop shooting microwaves when the door is open, and that mom didn’t accidentally disable that when she was trying to set the display clock.
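One way to get that debugging freedom without shipping the foot-gun (a hypothetical sketch; the flag and environment-variable names are mine, not from the comment) is to honor an "insecure" switch only in debug builds, and refuse it outright everywhere else:

```python
import os

# __debug__ is False when Python runs with -O; we treat an optimized run,
# or APP_ENV=production, as the shipped (release) configuration.
DEBUG_BUILD = __debug__ and os.environ.get("APP_ENV") != "production"

def tls_required(disable_tls: bool = False) -> bool:
    """Return whether TLS will be used for this session.

    The disable_tls flag exists purely for debugging. Outside a debug
    build it raises instead of downgrading, so end users never see an
    insecure mode, and the developer's escape hatch cannot leak into
    a release by accident.
    """
    if disable_tls and not DEBUG_BUILD:
        raise RuntimeError("TLS cannot be disabled in a release build")
    return not disable_tls
```

The point of the pattern is that the insecure path is a build-time property, not a user preference: there is nothing for mom to accidentally toggle while setting the display clock.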

  5. There’s something to be said for “make the default secure”, but there is a very real reason why checkboxes to toggle secure connections and the like are really necessary — namely that not all installations will be running the latest versions of the software needed for those connections to work. It’s easy to say “well, update your software then” — but not everyone is using their own machine with root access, where that would be feasible. In a big organization, it’s not uncommon for systems to be five years out of date — not because IT is lazy, but because they want to roll out system updates that are guaranteed not to break needed programs.

  6. “To take advantage of volunteer effort to crowdsource security auditing, the barrier to entry for understanding the code has to be quite low.”

    Heh. The number of people who can actually do any sort of useful security auditing is almost vanishingly small, regardless of how simple the code seems. Most of these people are not working for free. Seriously — try hiring a security firm to audit your source code. We’ve done it. It’s expensive and even still they don’t find that much. Crowdsourcing this kind of work is like crowdsourcing a mars rover design.

    None of the other points are in any way specific to free/open-source software.

Comments are closed.