Bloggers thrash Apple analysts at predicting stuff

Quoting analysts is a common way to make tech news look less driven by opinion or corporate PR: the analyst gets identified as an expert and the reporter gets a second source. But the deal is a rotten one, because analysts often seem no more aware of industry goings-on than the reporters who quote them. Now, if that were true, you'd expect an objective survey of professional analysts' predictions to come out no better than the bloggers'. Perhaps a couple of brilliant ones would kick ass and justify the herd's status as dispensers of wisdom for media and investors. Turns out that the pros are, according to Fortune, almost all worse than the amateurs. The difference is so stark it can hardly be accounted for by chance: either Fortune's methodology was contrived to make them look bad, or professional analysts have a systematic bias toward saying things that turn out to be inaccurate. [Fortune via Daring Fireball]


  1. Educated people, especially people with MAs and PhDs, tend to rely on what they learned in school. Their diplomas buy them so much in life that they get lazy and don’t bother looking for other information, especially information that might reveal they aren’t as savvy as they think they are.

    An uneducated person trying to make this kind of prediction is more likely to look at every possible angle and make more of an effort to educate themselves before spouting off.

    1. PhDs almost never rely on what they learned in school, because they usually generate novel research themselves, challenging assumptions and generating new hypotheses.

      I guess uneducated people are just uneducated.

    2. Seems many (most?) of the bloggers are academics as well. I would think that the arrogance comes with title/position, rather than education/diploma.

    3. That’s pretty much my experience with PhDs too. Even worse, they get fixated on some fad – which they often originated – and refuse to entertain any contrary evidence. Either that or they’re ultra-specialists. Academic computing faculties seem totally disconnected from industry realities too. I remember the “Eiffel” programming language (a sweet language too, BTW) failing because the PhDs involved had never heard of proper version control and configuration management. I always have to teach the basics of that to anyone fresh out of school, along with proper testing, documentation, basic English grammar, and any programming paradigm other than the flavor of OO that took over about 13 years ago. That last one because either none of the code they’ll be maintaining is written that way (OO Cobol, anyone?) or because OO is death on constrained platforms like iPods and encryption chips.

      As for the Apple analysts, I totally stopped listening to them after it became clear that all they were ever doing was mindlessly parroting the line that Apple didn’t have the bulk of the market share and so should jump into the WinDoze arena with MicroSoft. Market share was all they seemed to understand: not profitability or innovation.

    4. Huh, yeah. That would be very interesting, if it weren’t completely and utterly false.

      Of the five top bloggers in this list that I could get education info on, one has a PhD in mathematics and has been a professor in Paris and a postdoc in Stockholm, one has an MBA from Harvard, one got a Harkness research fellowship after their degree, and one has a post-graduate degree from UCLA School of Law. Only one of the five had just a bachelor’s.

      Maybe you should wonder why, in fact, it seems to be all the highly educated folk that are scoring all the home runs?

    5. I agree, to a point. It’s like that recent study that found that atheists know more about Christianity than hardcore Christians do. The idea being the same: the Christians have stopped looking for answers because they believe they already have them, while atheists look into, analyze, and research all of it, because nothing is taken for granted, assumed, or preconceived.

      Not sure that entirely applies, but I think there is some of that going on here if these analysts are “going with their gut” more than they are doing actual research.

  2. “either Fortune’s methodology was contrived to make them look bad, or professional analysts have a systematic bias toward saying things that turn out to be inaccurate.”

    If by ‘systematic bias’ you mean endemic corruption and criminal fraudulence, then yes.

  3. If highly paid athletes come with stats that can be used to justify their shocking salaries (compared to a teacher’s, say), why don’t overpaid analysts (who actually contribute to bad policy decisions) have to objectively prove their value? I’ve never understood the lack of demand for this most basic of information. Humans suck at dealing with unpredictability…

  4. Is this really an analysis of just one quarter?

    If so, given that it was an unexpectedly good quarter, it could be that bloggers are generally more enthusiastic and thus predict better results, and so did well in a good quarter. I’d hesitate to draw strong conclusions without more quarters of data.

    Not that I think the professional analysts are great.

  5. My theory: the analysts have to make their predictions whenever reporters approach them for comment, whereas the bloggers post their predictions when they get actual information to base them on.

  6. One theory: I was talking with a friend the other day about the value of the blogosphere – people with specialised knowledge commenting on areas they have a special interest in or passion for. Compare this with day-job journalists and financial analysts: they have a wide remit, and today’s forecast/article is one of many they have to do, and may bear no relation to their own specialisation or research habits.

  7. I think the title to this article should have been: “Apple bloggers thrash most professional financial analysts at predicting Apple stuff” (or something similar).

    It also seems to me that the mentioned bloggers (all amateur analysts listed) are only analyzing ONE specific company, i.e. Apple.

    On the professional side, you have (for example) Yair Reiner (one of the professionals who did well in the quarter mentioned, and in the quarters linked to by Niklas), who is the only guy covering Applied Technology at Oppenheimer. So he also covers Garmin, Trimble, Synaptics, Cree, and Omnivision, and if asked by his boss could probably give a pretty good guesstimate on most companies in the field of applied technology.

    Another analyst is Vijay Rakesh at Sterne Agee. Sterne Agee has 29 analysts covering 373 companies in 19 industries (vs. 40 analysts at Oppenheimer). That means an analyst at Sterne Agee covers, on average, more than 10 companies. A search on Vijay Rakesh of Sterne Agee shows that, besides Apple, he also covers at least SanDisk and Intel. That’s some pretty heavy coverage.

    Looking at the numbers, and keeping that range in mind, the professional financial analysts are doing a pretty good job. Are they doing better than the amateur Apple analysts? In most cases, no. Now, if Yair Reiner or Vijay Rakesh were allowed to cover ONLY Apple, would they do better than the amateur analysts? We don’t know. But I would guess, from looking at the numbers and keeping the workload and range in mind, that they would “thrash most (if not ALL) amateur analysts”.

    So is this whole comparison fair? It would have been if the professionals had also covered only Apple.

    1. “So is this whole comparison fair? It would have been if the professionals had also covered only Apple.”

      You have to analyze the whole market, not only Apple, since Apple does not exist in a vacuum, unaffected by everything happening around it. Do the “amateur” analysts spend as much time analyzing the market? Well, this is what I could find about the amount of time and energy spent on analysis by the top “amateurs”:

      Patrick Smellie – works full time as a journalist and does tech analysis in his spare time
      Turley Muller – averages about one blog post per month
      Alexis Cabot – seems to cover the entire web/tech industry
      Nicolae Mihalache – does financial analysis in his free time; holds a doctorate in mathematics
      Andy Zaky – averages 2.5 blog posts per month
      Daniel Tello – averages two blog posts per month

  8. “One theory: I was talking with a friend the other day about the value of the blogosphere – people with specialised knowledge commenting on areas they have a special interest in or passion for. Compare this with day-job journalists and financial analysts: they have a wide remit, and today’s forecast/article is one of many they have to do, and may bear no relation to their own specialisation or research habits.”

    A good teacher once explained the difference between an amateur musician and a professional musician like so: an amateur musician, on a good day, can probably play their favorite song better than a professional musician. But a professional musician must also be able to competently play music they dislike or even hate, on bad days as well as good days, whether in the mood or not, and on someone else’s schedule.

    1. I’m pretty sure that “Apple Finance Board” refers to a web forum (Yahoo Finance, fool.com, etc), not any actual organization affiliated with Apple.

      1. Actually, reading the article, they’re specifically referring to the “Apple Finance Board” forum on macobserver.com.

  9. Good thing they didn’t include John Dvorak on there — he might have singlehandedly swung the results the other way.

  10. This is news???

    In David Dreman’s Contrarian Investment Strategies, he explains that over 80% of professional stock analysts get it wrong when it comes to predicting quarterly earnings. This is one of the most spectacular failure rates for any discipline (except maybe education, which constantly rejigs its results :-)

    This means that all you have to do to be a successful investor is the opposite of what a stock analyst predicts and you’ll do fine.

    1. Umm… that depends on what the odds of “getting it right” actually are. If the random odds are 1 in 100, the analysts are doing amazingly well with a 20% (one in five) hit rate.

      In other words, getting it “right” vs. getting it “wrong” may not be the 50/50 proposition you imply. If I ask you to guess what a stock price will be in a week, you can make thousands of wrong guesses but only one right one. An 80% “wrong” average looks pretty good in that context.

      I don’t really like analysts either, and I’m not saying they beat the market – I’m just saying that more information than that single stat is required to make the case. (A toy simulation of the base-rate point follows below.)
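      A minimal sketch of that base-rate point, with entirely made-up numbers: define “right” as landing within 2% of the actual figure, then compare a dart-thrower who knows only the plausible range against a noisy-but-informed forecaster. (The 2% threshold, the range, and the noise level are all assumptions chosen for illustration.)

      ```python
      import random

      # Toy base-rate illustration: "right" = within 2% of the actual number.
      # All figures here are invented for the sake of the example.
      random.seed(1)
      TRIALS = 100_000
      ACTUAL = 100.0  # pretend quarterly figure

      def hit_rate(guess):
          hits = sum(abs(guess() - ACTUAL) / ACTUAL <= 0.02 for _ in range(TRIALS))
          return hits / TRIALS

      blind = lambda: random.uniform(50, 150)  # knows only the plausible range
      noisy = lambda: random.gauss(ACTUAL, 5)  # unbiased but noisy forecaster

      print(f"blind guessing hit rate: {hit_rate(blind):.1%}")  # ~4%
      print(f"noisy forecast hit rate: {hit_rate(noisy):.1%}")  # ~31%
      ```

      Against a ~4% chance baseline, a 20% hit rate would be genuine skill, which is exactly the point above.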

      1. From a statistical standpoint I agree that 20-25% successful prediction likely represents significance.

        However, an 80-85% failure rate in prediction does not make for a sane investment strategy, and this is what the analyst is selling (their ability to pick stocks for portfolios based on earnings predictions). This is a pretty high-stakes game… if you can get it right, you can become indescribably rich. Bloggers are more likely doing this for fun or the attention, and I would imagine are mostly not professional analysts or fund managers.

        There are many more factors, as you say, social behaviour trumping balance-sheet fundamentals being the critical ones (the arguments of Dreman and others like Ben Graham).

        My rule of thumb is that when overvalued stocks like Apple get strong buy ratings (higher target prices), it’s a good sign to sell but rarely a good time to buy.

  11. “This means that all you have to do to be a successful investor is the opposite of what a stock analyst predicts and you’ll do fine.”

    What I came here to say. I’ve seen this backed by the numbers. (And since brokers have to make their recommendation history public now, you can find the same numbers yourself.)

    Of course, that ignores the larger picture of whether it might be better at a particular time to hold bonds, precious metals or cash over stocks. Sometimes* “outperform the market” means “lose your shirt a bit less slowly than all the others”.

    *Pretty much most of the aughties, for example.

  12. Now let’s see what happens if you add in professionals that don’t work for banks or brokerage houses (in most cases the same as banks).

    Yes, they do exist.

  13. Outside of linear systems – like predicting eclipses – a good rule of thumb is that nobody can predict anything about anything. If you have any doubt about that, read Dan Gardner’s excellent book Future Babble.

    In any group of “experts” making predictions, some small percentage are going to be proved correct by pure chance. I wonder what the ratio of bloggers to Apple analysts is. 10:1? 100:1? 1000:1? My guess would be closer to the latter. No wonder more bloggers are proved correct. There are way, way more of them. (A quick simulation below illustrates the effect.)
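    For what it’s worth, that “more monkeys” effect is easy to demonstrate with a crude simulation (the pool sizes and error spreads below are invented): even if each analyst is individually more accurate than each blogger, the best of a big blogger pool usually beats the best of a small analyst pool by luck alone.

    ```python
    import random

    # Crude sketch of the "more forecasters" argument. Each forecaster
    # misses the true number by gauss(0, spread); we report the smallest
    # absolute miss in each pool. All parameters are made up.
    random.seed(7)

    def best_miss(n, spread):
        return min(abs(random.gauss(0, spread)) for _ in range(n))

    RUNS = 10_000
    wins = sum(
        best_miss(1000, spread=10) < best_miss(10, spread=5)  # bloggers vs. analysts
        for _ in range(RUNS)
    )
    print(f"best blogger beats best analyst in {wins / RUNS:.0%} of runs")
    ```

    So a handful of spot-on bloggers is exactly what you’d expect from a big enough pool, whatever their individual skill.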

    1. “Outside of linear systems – like predicting eclipses – a good rule of thumb is that nobody can predict anything about anything.”

      That’s not entirely true; though it’s certainly tempting to think that. Complex systems pose a huge challenge for prediction; but it would be inaccurate to say that prediction is impossible. Using sophisticated computer models it’s possible to predict the outcome of complex processes with some degree of accuracy (though with a large error component), provided that you understand how the system works and have access to the necessary data.

      Take a weather forecast, for example. The atmosphere is a complex system; but meteorologists understand it reasonably well now, and have developed fairly sophisticated forecast models that can do a pretty good job of predicting the weather up to two or three days in advance with a reasonable degree of accuracy (though not with perfect accuracy, of course). Long-term forecasts are still very “iffy” because the atmosphere is such a complex system; but weather forecasts are getting more accurate all the time as the computer models are improved.

      Perhaps a better example is the work of political scientist Bruce Bueno de Mesquita, who has developed a very sophisticated computer model based on game theory and expected utility that is able to make very specific predictions about the outcome of political and other decision-making processes, with about 90% accuracy. As long as you know who the key players are, what their preferences are, and how much influence they wield, you can feed the data into Bueno de Mesquita’s computer model and it will tell you what the most likely outcome will be. Given the right data, the model can even make long-term predictions, years or even decades in advance.

      However, Bueno de Mesquita has specifically said that his model can’t predict market-driven events, like stock prices, because there are too many players in the game for any model to be able to keep track of them all; and, even if the model could keep track of all of those players, we don’t have accurate data about their preferences and influence. The error component of the model would be so large that it would have no predictive value. So, personally, I wouldn’t trust anyone who claimed to be able to predict the outcome of market-driven events.
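      To make the preceding description concrete, here is a drastically simplified caricature of that kind of expected-utility model: give each player a preferred outcome (position), an influence weight, and a salience (how much they care), and take the influence-and-salience-weighted mean of positions as a first-cut forecast. The real model iterates over rounds of proposals and coalition shifts; the players and every number below are invented.

      ```python
      # Caricature of a BDM-style first-cut forecast: positions sit on a
      # 0..1 policy scale; influence and salience weight each player's pull.
      # Players and all numbers are invented for illustration.
      players = [
          # (name,       position, influence, salience)
          ("hardliners", 0.9,      0.6,       0.9),
          ("moderates",  0.4,      0.8,       0.7),
          ("outsiders",  0.1,      0.3,       0.4),
      ]

      weighted = sum(pos * inf * sal for _, pos, inf, sal in players)
      total = sum(inf * sal for _, _, inf, sal in players)
      print(f"first-cut forecast position: {weighted / total:.2f}")  # ~0.59
      ```

      The stock-market caveat falls straight out of this sketch: with thousands of players whose influence and salience nobody can measure, the inputs – and therefore the forecast – are garbage.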

      1. The literature on predictive ability in various professions shows some skill in the short term. Weather prediction is a good example. On the other hand, some professions show no evidence of predictive skill. Finance and stock professionals fit into this class. And over the long term, nobody has any skill at all.

        Your provisos – understanding the system and having access to the data – are pretty gigantic. In any system involving some degree of randomness, they make effective prediction impossible over the long term.

        The book I referenced earlier (Future Babble) examines and debunks de Mesquita’s claims pretty thoroughly.

        1. Well, I’m a big fan of Dan Gardner’s work (in fact, I would recommend that everyone read The Science of Fear – one of the most important books written in recent years); but I haven’t read Future Babble yet; so I can’t address Gardner’s critique of Bueno de Mesquita’s work per se.

          But I have read many other critiques of Bruce Bueno de Mesquita’s methods; and most of them are based on misunderstandings (or outright misrepresentations) of how his predictive model works, and what he claims that it can do. In fact, some of these critiques are so absurd as to be laughable. For example, I recently read a critique that, in all seriousness, dismissed BDM’s computer model as worthless because the accuracy of the predictions it makes depends on the accuracy of the data you feed into the model: If you input unreliable data, the model makes unreliable predictions. Well, duh! That’s like saying that meteorological models are useless because they won’t give you a reliable weather forecast if you input the latest football scores into the computer instead of accurate data about current atmospheric conditions. Even Newton’s formulas won’t give you the right answer if you input the wrong data (as NASA discovered a few years ago when it lost a Mars probe because someone failed to convert units from English into metric).

          Others have critiqued BDM’s model because it relies on data gleaned from subject area experts. They argue that, if the experts have reliable data, they ought to be able to make accurate predictions without the use of BDM’s computer model; and if the experts can’t make accurate predictions, then we shouldn’t trust the data they provide to Bueno de Mesquita. That critique is based on a complete misunderstanding of how BDM’s model works. The only data that has to be provided by subject area experts are: (1) Who are the key players who will exert influence on the outcome of a given decision process? (2) How much influence does each of those key players wield? (3) What are the preferences of each of those players (i.e. what outcome does each player want)? and (4) How committed is each of those players to getting his or her preferred outcome (i.e. how much time and effort is each player willing to put into influencing the decision process; how likely is each player to stand his or her ground or to accept a compromise)? That’s the only data that BDM solicits from subject area experts. He doesn’t ask those experts to make predictions, or to speculate about what might happen. He doesn’t care about the zeitgeist or the “conventional wisdom”. He doesn’t ask for insider information or state secrets. He doesn’t even need to know anything about the history, culture, or ideology that lies behind the decision process. He simply asks about the key players, their influence, their preferences, and the strength of those preferences.

          The truly interesting thing is that, using only data provided by the subject area experts, BDM’s model frequently predicts outcomes that the subject area experts themselves would not predict. In fact, the experts who provide the data often express strong disagreement with the predictions that BDM’s model makes using those data. Yet the predictions made by the model tend to be more accurate than the predictions made by the experts. (In fact, at least one study suggests that, when the predictions made by BDM’s model differ from the predictions made by subject area experts, the predictions made by the computer model are about twice as likely to be accurate as those made by the experts.) Even though BDM’s model uses data that are available to the subject area experts – and often uses data provided by those experts – it analyzes that data differently than the experts do. That’s why it is silly to suggest that, if the experts already know all of the relevant information, BDM’s computer model doesn’t add anything of value; or that if the predictions of the experts are wrong, then the data they provided to BDM must also be unreliable.

          These criticisms are misguided enough; but perhaps the most egregious criticism I’ve ever come across focused not on how BDM’s model works or what it has predicted, but on how his work has been reported in the media – as if this has any bearing at all on the value of the work itself. The fact that a reporter may have been a bit careless (and perhaps even a bit clueless) in his characterization of BDM’s work in a magazine article that was aimed at a non-scholarly audience does not undermine the credibility of Bueno de Mesquita himself. The controversy arose over BDM’s predictions about who would succeed Ayatollah Khomeini as Iran’s supreme leader. BDM, who is not an Iran expert, correctly predicted that Khomeini would be succeeded by Ayatollahs Khamenei and Rafsanjani (who were little known outside of Iran at that time), whereas many of the leading Iran experts in the West were predicting a different successor. The reporter writing about this in a magazine article about Bueno de Mesquita may have been a bit too hyperbolic when describing how BDM’s computer model was able to predict an outcome that no one else could possibly have predicted, and how all of the experts were wrong when BDM was right. Perhaps Khamenei and Rafsanjani were not quite as obscure as the reporter made them sound in the article (though it is true that their names were not well known in the West at that time); and perhaps BDM’s prediction was not quite as radical as the reporter suggested. Perhaps there were some Iran experts who were not surprised by Khamenei’s and Rafsanjani’s rise to power, and thus would not have been too critical of BDM’s prediction. And perhaps the clash between BDM and the Iran experts was not quite as dramatic as the reporter characterized it. So, perhaps the reporter does deserve a bit of a slap on the wrist for hyping the story to make it sound more exciting than it really was. But this doesn’t change the basic facts: BDM’s prediction turned out to be correct; whereas the consensus prediction of the leading Iran experts in the West turned out to be wrong. So, it’s a bit silly to criticize BDM’s methods simply because a reporter used slightly hyperbolic language when telling an anecdote about how effective those methods are.

          There have been some perfectly reasonable, scholarly critiques of BDM’s methods. Most of those critiques focus on the assumptions built into BDM’s predictive model, and the usefulness of that model as a tool for helping us understand political phenomena. But these scholarly critiques don’t attack the model as fraudulent – it has far too much empirical support for that. The folks who attack BDM’s model as fraudulent are not engaged in honest scholarship. They’re engaged in the same sort of dishonest enterprise as the climate change denialists who want to “prove” that global warming is a hoax by picking apart every model that climate scientists use to show that the planet is warming due to increasing levels of greenhouse gasses that humans are pumping into the atmosphere. I’m not saying that Dan Gardner falls into this category – as I said, I haven’t yet read his latest book; and, based on his previous work, he doesn’t strike me as the sort of thinker who would engage in dishonest scholarship (though, admittedly, he’s a journalist, not an academic). But I’ve yet to read a single critique of BDM’s methods that has managed to demonstrate, using empirical evidence, that his predictive models don’t actually work, or that they are no better than the ad hoc predictions made by subject area experts. But, I’ll gladly take a look at what Dan Gardner has to say. (As I said, I’m a fan of his work.)

  14. Without knowing when the bloggers chose their figures, and whether they knew of others’ choices, it’s impossible to judge.

    All I would do as a potential blogger, knowing that most analysts come in under, is add 10% to the figure they chose (sketched below). Since analysts are mostly conservative with Apple, that would make me look good.

    Next quarter, let all the bloggers select a figure first, then let the analysts loose, and I’m sure the analysts will be more accurate.
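    That free-riding strategy is trivial to state precisely; here is a one-liner with an invented consensus figure:

    ```python
    # The "+10%" free-rider strategy from above: take the (hypothetical)
    # analyst consensus and pad it, betting on their usual conservatism.
    analyst_consensus = 5.40                 # invented consensus EPS ($)
    blogger_guess = analyst_consensus * 1.10
    print(f"padded blogger guess: ${blogger_guess:.2f}")  # $5.94
    ```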

  15. Keep in mind that market prices are part of a self-modifying system. If these amateur analysts are doing so much better, it may be because people are listening to them.

  16. It’s not about being right or wrong.

    It’s about being read/heard/seen.

    The only metric that matters is: “How many people can you bring to our advertisers?”

    How else can you explain the advertising rates of a Rush Limbaugh?

    Obviously facts or common sense don’t enter into it.

  17. These results should be expected. I’m an economist, trader, blogger. I’ve seen all angles. I’ve worked for the options exchange, a major investment firm, and I’ve been blogging about the Great Apple Turnaround for 12 years.

    Look, you have to understand what it is you’re comparing here. Stock analysts aren’t working for investment research firms in order to give ACCURATE ADVICE to reporters for free. Bloggers are. An immature… err, amateur blogger has figured out that a certain company (Apple, the maker of their blogging device) is turning heads and changing minds. They read some books and look at some stats. Every month and quarter the company posts easy-to-follow metrics (revenue up 25% over and over and over, profit up 25% over and over and over). The blogger says “I SEE THE REALITY OF THE OBVIOUS” and they post about it.

    Meanwhile, the professional stock market analyst is working for a company that advises all kinds of different players in the financial market. If an analyst thought that a company was going to outperform the market for months to come, why would they YELL FIRE? All that would do is get the attention of everyone else and cause the stock to get overvalued. They wouldn’t be able to do what they advise their overlords to do: buy low, sell high, buy low, sell high… month after month after month. Makes for smooth trading profits. Bloggers should be more accurate than analysts. Analysts have different products: FREE ADVICE, FREE REPORTS, CLIENT REPORTS, INTERNAL REPORTS. They also have these things called ‘conversations with management’.

    What kind of capitalistic, investment-centered world do we live in? Why would Shaw Wu or Charlie Needham answer the phone for the NYTimes and tell the clear, unvarnished truth? This ain’t the weekend forecast for rain; this is multibillion-dollar international finance.

    Mock not, lest ye be mocked upon. The whole point is obfuscation. It’s like poker. “Ohh gosh, the dealer said that I looked like I was going to have a LOT OF LUCK today! but then I lost!”

    -John

    1. You are assuming that anyone – the bloggers or the experts – can actually predict anything in the world of finance. The literature says otherwise. Those who do make accurate predictions are just like the lucky monkey who happens to type out Romeo and Juliet. Impressive, unless there are an awful lot of typing monkeys.

      1. “You are assuming that anyone – the bloggers or the experts – can actually predict anything in the world of finance. The literature says otherwise.”

        that’s right.

        you are assuming one cannot. it’s a popular assumption substantiated by a pedestrian understanding of statistical analysis. it’s also written about extensively in ‘the literature’. i’m fairly certain that if you asked a dozen of your friends and a handful of people you feel are ‘knowledgeable about these matters’, they would agree with you. upon completion of your survey you would reinforce your belief and could, in the future, refer to your completed research on the matter to dismiss claims to the contrary.

        some monkeys turn out more interesting documents than randomness would suggest.

  18. If Fortune had been reading ABERDEEN analysts, they would have seen signs of Apple’s unstoppable momentum (in the enterprise at least) appear a couple of years ago.
