Falsehoods programmers believe about time - riff on the malleability of computer time


40 Responses to “Falsehoods programmers believe about time - riff on the malleability of computer time”

  1. yri says:

    So you’re saying programmers assume that time is a strict progression of cause to effect, but *actually* from a computer’s viewpoint – it’s more like a big ball of wibbly wobbly… time-y wimey… stuff.

  2. corydodt says:

    Old, old riff. It teaches you, as a programmer, that no matter how simple you think the system is that you’re modeling, it can still surprise you. Programmers need to learn this lesson early.

    It’s not entirely a comment about how time works inside of computers. It’s a comment on how time works in the real world. Modeling real-world time inside of a computer is arcane beyond your wildest imaginings, and every time you think you’ve got it, there’s another layer of fucked-up exceptions to handle. There are massive data files just devoted to figuring out what fucking time zone you’re in.

  3. Spinkter says:

    He left out the fact that one minute doesn’t always contain 60 seconds. Sometimes a minute contains 61 seconds. More rarely, it contains 59 seconds.

    This happens when a leap second is added to or subtracted from UTC.

    Given that leap seconds are announced only six months before they happen, they’re a real PITA to account for.
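
    For a concrete illustration, here’s a minimal Python sketch (using the real leap second from 30 June 2012) of how a library that assumes 60-second minutes simply refuses such a timestamp:

        # datetime has no notion of a 61st second, so the genuine instant
        # 2012-06-30 23:59:60 UTC cannot even be represented, let alone parsed.
        from datetime import datetime

        try:
            datetime(2012, 6, 30, 23, 59, 60)
        except ValueError as err:
            print("leap second rejected:", err)   # "second must be in 0..59"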

  4. ChickieD says:

    Many years ago I was a technical writer for a cell phone company’s R&D Lab. One of our programs involved 5 or more computers all running different software from different vendors. To sync the time between these programs, the engineers used the clock off of the GPS satellite to feed all the program clocks. Now that GPS is such a common part of our lives, I assumed that most computer systems would use it as a kind of universal time, which seems like it would solve a lot of these issues.

    • kmoser says:

      Only when all computers have constant access to GPS…which means never.

      • Todd Knarr says:

        NTP. Network Time Protocol. Win7 has it built in, and for versions like XP and earlier you can get software like Tardis2000. Find a couple of Stratum 2 hosts near you and sync a couple of your machines to them, then sync the rest of your network to those. I’ve been doing it for 15 years; it can’t be _that_ hard.
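
        To show how little is involved on the client side, here is a bare-bones SNTP query sketch in Python; the pool hostname is just a placeholder for whichever nearby Stratum 2 hosts you pick, and a real deployment would still run a proper NTP daemon that disciplines the clock instead of merely reading it:

            # Send one SNTP request and report how far the local clock is from
            # the server's transmit timestamp (ignoring network delay).
            import socket, struct, time

            NTP_EPOCH_OFFSET = 2208988800   # seconds between 1900-01-01 and 1970-01-01

            def sntp_time(server="pool.ntp.org", timeout=2.0):
                packet = b"\x1b" + 47 * b"\0"          # LI=0, version 3, client mode
                with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                    sock.settimeout(timeout)
                    sock.sendto(packet, (server, 123))
                    data, _ = sock.recvfrom(48)
                seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp
                return seconds - NTP_EPOCH_OFFSET

            print("server minus local clock: %.3f s" % (sntp_time() - time.time()))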

    • Dick Selwood says:

       That assumes that GPS is present. A recent study suggested that there are both natural events, like solar activity, and man-made events, like GPS jamming, that could kill the satellite signal.

    • Well, okay, but which GPS time do you use? UTC has occasional leap seconds, which play hell with some types of real-time software.
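
      Roughly speaking, GPS system time carries no leap seconds at all, so it runs ahead of UTC by however many leap seconds have been inserted since the GPS epoch in January 1980. A back-of-the-envelope sketch of the conversion might look like this, with the caveat that the offset constant is only valid for a particular era and has to be maintained by hand (or taken from the broadcast almanac):

          from datetime import datetime, timedelta

          GPS_EPOCH = datetime(1980, 1, 6)   # GPS time started in step with UTC here
          GPS_MINUS_UTC = 16                 # leap seconds accumulated since then; changes

          def gps_to_utc(gps_week, seconds_of_week):
              """Convert GPS week + seconds-of-week to an approximate UTC datetime."""
              gps = GPS_EPOCH + timedelta(weeks=gps_week, seconds=seconds_of_week)
              return gps - timedelta(seconds=GPS_MINUS_UTC)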

  5. Dan Hibiki says:

    2+2=5 for extremely large cases of 2

  6. jlargentaye says:

    As the author himself points out, his list is riffed off another classic, Falsehoods programmers believe about names, which is worth just as much of a read.

    As a programmer myself, I find that the hidden complexity of things can sometimes be paralyzing, as I worry about what corner case I missed that would break the whole design.

  7. Manny says:

    This reminds me of a game I played back in the stone age that timed action off processor cycles. It went insane when the system was run on a slightly newer PC with a faster CPU.

  8. cleek says:

    oh those stupid programmers.

  9. Paul says:

    Leap years occur every 4 years.
    From the state/province you can determine the time zone.
    From the city/town you can determine the time zone.
    Time passes at the same speed on top of a mountain and at the bottom of a valley.
    One hour is as long as the next in all time systems.
    You can calculate when leap seconds will be added.

    • flosofl says:

      I think you’re more or less missing the point.

      One hour is as long as the next in all time systems. 

      Yep. Missed the point.

      • knappa says:

        I’m fairly sure he was simply adding to the list.

      • invictus says:

        One of you two certainly is missing the point, yes.

        • flosofl says:

          But now my brain is all asplody!

          Darn invictus for making me revisit my initial assumption and realizing I was WRONG!

          DARN YOU TO HECK!

          So Paul, you can stop sitting there silently wondering if I’m an idiot.

          I am.

          • invictus says:

            Now I have to silently wonder if your reply to my snarky comment on your snarky comment on the original comment was, in fact, snarky.

            HelpI’mtrappedinarecursivecommentthread!

    • “Time passes at the same speed on top of a mountain and at the bottom of a valley.”
      Depends whether you can have more fun on top of a mountain or at the bottom of a valley.

    • Not true. In the Gregorian calendar, leap years occur every 4 years, unless the year is divisible by 100. Then it will not be a leap year. Unless the year is also divisible by 400. Then it will still be a leap year. The year 2000 was one of those.
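
      In code, the whole rule fits on one line; a quick sketch (helper name invented for illustration):

          def is_leap_year(year):
              """Gregorian rule: every 4 years, except centuries, except every 400 years."""
              return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

          assert is_leap_year(2000)        # divisible by 400, so a leap year after all
          assert not is_leap_year(1900)    # divisible by 100 but not by 400
          assert is_leap_year(2012) and not is_leap_year(2011)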

  10. “A time stamp of sufficient precision can safely be considered unique.” Heard about some code today that used a timestamp as a unique ID. When the coder realised that there might be several calls made within a single tick of the timer, he decided to solve the problem by making the code go to sleep for a while until the time was unique again…
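
    A quick Python sketch of why that is a losing game: on a coarse clock the two reads below can easily come back identical, while a random UUID (or a plain counter) sidesteps the whole problem without any sleeping.

        import time, uuid

        a, b = time.time(), time.time()
        print("timestamps collide?", a == b)   # can be True within one clock tick

        print(uuid.uuid4(), uuid.uuid4())      # vanishingly unlikely ever to collide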

  11. You can avoid a lot of the inconsistency and insanity if your admins practice good, safe timekeeping (use NTP, don’t set up too many NTP servers, etc.). But even that can drift; atomic clocks and GPS time do strange things at times, code has bugs, and so on.

    But that does not deal with the insanity in timezone offset shifts, virtual server stuff, all the corner cases.  This is a good list to print out and read a lot and hang on your wall, to remind you.  As someone upthread said, can’t let it paralyze you, but at least you have a fighting chance if you keep a reminder.

  12. beemoh says:

    If only that photo was taken one minute earlier.

  13. dolo54 says:

    This highlights what has been bugging me about test driven development. Instead of spending time from the middle to the end of a development cycle debugging, you spend time up front writing tests, which should eliminate most of your debugging time towards the end. But then your tests may have bugs, so now you have spent all this time up front writing tests, then still spend time at the end debugging those tests. I’m not entirely convinced this is an improvement. I would like to see some raw data from software companies that have implemented TDD. How much time have they really saved, I wonder. Possibly they are spending even more time writing tests than they would have done just debugging untested code.

    • Keith Tyler says:

      Properly done, a QA group writes tests in advance anyway. The question is whether those tests are written ahead of development or concurrently/subsequently.

      But also, properly done, business requirements and functional specifications precede either of those things. The tests then follow (or envelop) the letter of the functional specification.

      More often than not, such flaws as you mention are in the functional spec rather than the test cases, which faithfully descend from the functional spec. Which is why QA involvement in functional spec review is imperative.

      Of course, all this requires that you actually develop functional specs in advance of development. Which sadly seems to be becoming less and less common.

      Now if you’re talking about *automated* tests, then those could of course have bugs. But you would find that out by investigating the automated failure and reproducing it manually. If people are using automated tests to verify new functionality, that’s a mistake, for the exact reason you point out. New functionality should always be verified by manual test; then the automation test is developed, and verified by running against the environment in which the manual test passed.

      Besides, new functionality is commonly a moving target, especially in the rapid development paradigm. Trying to develop automated tests on new functionality is a prime cause of disillusionment in automated testing. The work required to continually redevelop the automated test to fit the ever-changing requirements is a losing battle; the ROI is almost always negative. Automation is best used for regression (including previously fixed bugs as well as pre-existing functionality). An additional benefit of this is that your automation development is not affected by release-date panic — meaning less likelihood of errors.

      (I could probably write a book on this. I probably should.)

      • dolo54 says:

        Perhaps you should write a book; I’d read it. Yes, I’m referring to the writing of automated unit tests for new functionality. That is, writing a test for every method before you write the method, which is encouraged by many proponents of Agile development, and is referred to as TDD or Test Driven Development.

        Our department head recently asked me if I thought we should implement unit testing for all our applications, many of which are small web apps that take 80 man hours or less of dev time. Currently we factor about 8 developer hours to 40 for QA and bug fixing. So if a project takes 40 hours to develop, we assume it may take another 8 hours of developer time to resolve any issues discovered in QA. We usually come in under that estimate. My response was: well, if we have to write a test for every new method in a class, test the test to make sure it fails reliably, and then write the code to pass it, we will seemingly double our development time. So an app that would take 40 + 8 hours to get out the door would now take 40 + 40 + probably another 4 hours, since a lot of ‘bugs’ we get from QA are feature requests missed in the functionality specs. So no, it makes no sense at all.

        I could see TDD being more useful in a huge sprawling application with many developers and a constantly changing set of requirements, just to make sure people don’t break each other’s work, but I’m not sold that it makes sense for situations outside of that scenario.

        Of course manual testing is quite important and I encourage all the devs I work with to test their own work as they are coding. This takes a bit of time as well, but nothing like it would take to write tests for everything.
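
        For what it’s worth, the mechanical loop itself is tiny; here’s a minimal sketch of “write the test first, watch it fail, then make it pass,” using an invented example function:

            import unittest

            def seconds_to_hms(total):
                # Step 2: the implementation, written only after the test below
                # existed and had been watched to fail for the right reason.
                hours, rest = divmod(total, 3600)
                minutes, seconds = divmod(rest, 60)
                return hours, minutes, seconds

            class SecondsToHmsTest(unittest.TestCase):
                # Step 1: written first and run against a stub (or nothing at all).
                def test_splits_an_awkward_duration(self):
                    self.assertEqual(seconds_to_hms(3661), (1, 1, 1))

            if __name__ == "__main__":
                unittest.main()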

    • corydodt says:

      Most of the people I talk to about TDD (myself included) do not even attempt to thoroughly cover code before writing an implementation.

      I write a few tests to call APIs, just to remind myself what I’m shooting for and what shape an API client will be when I’m done.

      Then implement, then go back and thoroughly cover.

    • Noah Sussman says:

      It’s extremely difficult to measure time saved by defensive practices like writing readable, tested code; this is a bit like trying to measure the number of attacks mitigated by good security.  In any case, only large organizations are willing to spend the time and money to attempt to measure the effectiveness of code quality.  One of the few decent studies I’ve seen is The Power of Ten by Gerard Holzmann from JPL/NASA.  Also there’s a new book called “How Google Tests Software.”  I haven’t finished reading it yet but it does seem to contain primary data about how writing tests has benefited Google products.

      I would agree with you that the cost of automated testing must be taken into account, not just in terms of development time but complexity and debugging time as well.  I find it’s still not widely understood that automation adds complexity to a task rather than reducing it.  “Ironies of Automation” is a good paper on that phenomenon.

  14. Keith Tyler says:

    As a 10-year QA veteran, I have always been floored by just how naive programmers are on the issue of time, and especially time zones. It usually takes a painful amount of head-hammering with a litany of non-conforming examples for them to realize just how much of a mess it all really is, and that their feeble and self-satisfying attempts at compartmentalizing it into something algorithmically simple are utter folly.

    They should consider themselves lucky they don’t have to deal with the Gregorian Reform.

    • Ultan says:

       Some people enjoy getting into all the pathological corner cases, going well beyond the Julian/Gregorian switch into the subtle differences between the various sorts of astronomical time, dynamical time, TAI vs. UTC vs. GMT, etc. One of those people is Alan Eliasen. His physically-typed JVM language Frink does all that and works with several dozen different units of time, as well as several hundred other sorts of physical units. It also has other features, like rational arithmetic and interval arithmetic, which might be of interest to QA and other people interested in strict correctness.

  15. hugh crawford says:

    I remember setting up a test case where users in Mecca, and somewhere in Australia, were getting long-term tech support from a call center somewhere in Indiana. The goal was to figure out the time and duration of the incident as a whole and the time and duration of the phone call.

    About that time I got called for jury duty and I got asked what my job was. I said that I was leading a group of engineers trying to figure out what time it was, so the lawyer asked me to explain why that was such a big deal. I was out of that jury pool in a very short but indeterminate time after that.

  16. #20 has been fixed since before the Y2K problem even came up.
