Comfort with meaninglessness the key to good programmers

Ed. Note: Boing Boing's current guestblogger Clay Shirky is the author of Here Comes Everybody: The Power of Organizing Without Organizations. He teaches at the Interactive Telecommunications Program at NYU, where he works on the overlap of social and technological networks.


It is famously difficult to teach people to program, and CS lore says that there are simply people who get it and people who don't. Saeed Dehnadi and Richard Bornat, two computer science instructors at Middlesex University in the UK, put that idea to the test, and ended up not with two kinds of people, but three.

They devised a basic aptitude test for first-year students of computer programming, and administered it on the first day of class, before the students had learned anything. (One of them maintains this was a mistake; the other claims it was planned.) The result was an almost perfect correlation between the results of the test and the students' subsequent performance.

The test asked simple questions about assignments (example shown in the image above). The group tested broke down into three camps: people who answered the questions using different mental models for different questions, people who answered using a single consistent model, and people who didn't answer the questions at all:
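For readers who can't see the image, the first question on the test went roughly like this (a paraphrase of the question as published in Dehnadi and Bornat's paper, sketched here in Python rather than the Java-style syntax the original used):

```python
# Test question (paraphrased): after the following three statements
# execute, what are the new values of a and b?
a = 10
b = 20
a = b

# Under standard assignment semantics, the value of b is copied into
# a, and b itself is unchanged.
print(a, b)  # prints: 20 20
```

The point of the test was not whether students got the standard answer, but whether they applied the same model of assignment, whatever it was, across every question.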

Told that there were three groups and how they were distinguished, but not told their relative sizes, we have found that computer scientists and other programmers have almost all predicted that the blank group would be more successful in the course exam than the others: “they had the sense to refuse to answer questions which they couldn’t understand” is a typical explanation. Non-programming social scientists, mathematicians and historians, given the same information, almost all pick the inconsistent group: “they show intelligence by picking methods to suit the problem” is the sort of thing they say. Very few, so far, have predicted that the consistent group would be the most successful. Remarkably, it is the consistent group, and almost exclusively the consistent group, that is successful.

Interestingly, this correlation is unrelated to correctness -- being consistently wrong in your mental model of how a computer works is better than being inconsistently right, because if you are consistently wrong, you only have to learn one thing to start being consistently right.
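To make that concrete, here is a small illustration (mine, not from the paper) contrasting the standard model of assignment with a hypothetical "reversed" model that a consistently wrong student might hold:

```python
def standard_model(a, b):
    # Standard semantics: the value on the right is copied into the
    # name on the left; "a = b" leaves b untouched.
    a = b
    return a, b

def reversed_model(a, b):
    # A consistently wrong model: the student believes "a = b" copies
    # a's value into b. Every answer is wrong, but wrong the same way.
    b = a
    return a, b

print(standard_model(10, 20))  # (20, 20)
print(reversed_model(10, 20))  # (10, 10)
```

A student holding the reversed model needs exactly one correction (flip the direction of the copy) to become consistently right, whereas an inconsistent student has no single systematic error to fix.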

Dehnadi and Bornat's thesis is that the single biggest predictor of likely aptitude for programming is a deep comfort with meaninglessness:

To write a computer program you have to come to terms with this, to accept that whatever you might want the program to mean, the machine will blindly follow its meaningless rules and come to some meaningless conclusion. In the test the consistent group showed a pre-acceptance of this fact: they are capable of seeing mathematical calculation problems in terms of rules, and can follow those rules wheresoever they may lead. The inconsistent group, on the other hand, looks for meaning where it is not. The blank group knows that it is looking at meaninglessness, and refuses to deal with it.

(It will be interesting to see how long it will be in the comments before someone chimes in with the snake oil of the industry: "But method X/language Y is so intuitive that it solves this problem!" Dehnadi and Bornat's literature review should be required reading for this group.)

Dehnadi and Bornat's programming aptitude research

UPDATE: In the comments, Greebo points to research that tried and failed to replicate the salience of consistency as a predictor, in a paper suggesting that "...the consistent group may actually contain two distinct subgroups, one that does much better than the inconsistent group, and one that does much worse." That paper is also interesting for its engagement with the larger issue of replicating experiments involving human subjects: the authors were not able to fully reproduce the original conditions (a self-selecting group, a test not given on the first day of class, etc.) and use those limitations as a platform for illustrating the difficulties with this kind of research generally.

On the Difficulty of Replicating Human Subjects Studies in Software Engineering