A computer scientist and a psychology professor analyze Entropica, the artificial intelligence system that's been getting major buzz in the blogosphere. Quick version: It's a good idea, but it underestimates the complexity of the real world. Sure, you could create an AI that can play chess, but that same bot won't necessarily have what it takes to understand grammar and sentence structure.

42 Responses to “Why a grand, unified theory of artificial intelligence may be a pipe dream”

  1. Most PEOPLE don’t understand grammar and sentence structure.

    • Marja Erwin says:

      Most people do, in their own language(s). Most grammar-books don’t in any language(s), because there are always exceptions, and they particularly don’t if they are trying to force one language to adhere to the norms of another.

    • Alex Rudnick says:

      Absolutely not true. That’s how we, y’know, communicate with language. People don’t have conscious access to that knowledge, but they can understand sentences in their native language and judge whether sentences are OK or not. This is a hella hard AI problem, and people do it all day every day, without thinking about it consciously.

    • morcheeba says:

      your wrong. ;-)

  2. dsoodak says:

    I once took an “intro to data structures” course in which we developed a general recursive game-playing program. The professor mentioned that a computer can play a game quite well by simply maximizing how many options the player has (or will have) available to them (as opposed to counting the number of remaining pieces and the strategic value of their locations on the board).

    This seems to be basically the same idea (maybe even the same algorithm) except using Newtonian mechanics in the game rules.

    Dustin Soodak
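
    P.S. For anyone curious, here is a rough sketch of that mobility idea (my own toy Python, not the course code; legal_moves() and apply() are placeholders for whatever game you plug in): score a position by how many legal moves it leaves you a couple of your own turns ahead, and pick the move that maximizes that.

    ```python
    # Toy "mobility" player: instead of counting pieces or weighing positions,
    # score a state by how many options it leaves you, looking a couple of your
    # own moves ahead (the opponent is ignored to keep the sketch tiny).
    # legal_moves(state, player) and apply(state, move) are placeholders.

    def mobility(state, player, depth=2):
        moves = legal_moves(state, player)
        if depth == 0 or not moves:
            return len(moves)
        # Options now, plus the best number of options we can keep after our next move.
        return len(moves) + max(mobility(apply(state, m), player, depth - 1)
                                for m in moves)

    def choose_move(state, player):
        # Pick the move that preserves the most future freedom of action.
        return max(legal_moves(state, player),
                   key=lambda m: mobility(apply(state, m), player))
    ```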

    • allenmcbride says:

      But in a lot of games, having zero options means you’ve lost. I wonder if this strategy would work in games where that’s less true, like Go?

      • wysinwyg says:

        In fact, Go is one of the few games for which it is incredibly difficult to create an AI engine. For the longest time, Go-playing algorithms couldn’t consistently beat inexperienced 12-year-old players. I think recently there have been some new developments and modern Go algorithms are on par with intermediate players (IIRC). But that’s still a great observation.

  3. technogeekagain says:

    Every time a problem that we used to consider “strong AI” gets solved — natural language parsing, chess playing, and so on — the cry goes up “But that’s just a new flavor of pattern-matching, and not one that can be generalized.” IBM’s _Watson_ may be a rare counterexample, simply because it was so heavily self-trained that I’m not sure anyone could extract a coherent algorithm from it.

    Humans, and animals generally, are marvelous pattern-matching engines. Maybe intelligence really does boil down to running multiple algorithms in parallel plus being able to compare them to each other and the “real world”, plus a drive to say “that’s interesting; why didn’t these agree, and what happens if I do something slightly different?” If so, curiosity may be a prerequisite for intelligence.

    Of course the major use of human-level intelligence is making up excuses for whatever it is that the trained ape we’re riding around in has just done.

    • Boundegar says:

      It’s not surprising a scientist would think the true measure of intelligence is finding patterns in data and asking probing questions. A poet might disagree. And a politician will tell you that politics is the acme of human achievement.

      And that’s kind of the point of your last sentence.

    • Graceless says:

      Exactly. We may be stunningly sophisticated, but we’re still running on wetware.

      To claim that human-level artificial intelligence is impossible implies some mystical, non-corporeal foundation of the mind. Incredibly difficult? Yes, but I’m willing to bet that engineering and directed study will produce working minds more quickly than natural selection took to craft our noggins.

    • wysinwyg says:


      it was so heavily self-trained that I’m not sure anyone could extract a coherent algorithm from it.

      I think the evidence on human intelligence and neurophysiology points pretty much the same way.  We learn to do stuff mostly by practicing and I think it’s unlikely anyone could pull a coherent algorithm from a human neural net.

      What would the algorithm for distinguishing different vintages of wine look like?

  4. Eric0142 says:

    SHUT UP AND TAKE MY MONEY

  5. Brainspore says:

    Whether it works or not, there’s little reason to believe artificial intelligence would bear a strong resemblance to human intelligence. I don’t think it’s possible to completely divorce the human mind from the human experience.

    • incipientmadness says:

       Which means that artificial intelligence may already be here and may have been here for a while. We just haven’t recognized each other yet.

      • Romeo Vitelli says:

         Just as long as the artificial intelligence isn’t named Skynet…

        • Brainspore says:

          Funny thing about science fiction: we always assume the A.I. gives a hoot about whether it continues to live or not. Just because something is intelligent doesn’t necessarily mean that it can’t remain indifferent.

          • Graceless says:

            Yeah! My favorite fictional AIs are The Minds from Iain M. Banks’s Culture novels. They regard humans the way we regard our cats: care, affection, bemusement. While humans have adventures, The Minds are doing their own big important things in hyperspace.

            What I’m trying to say is, some day we’ll be free to nap all day every day.

      • Romeo Vitelli says:

        Or Colossus

      • Brainspore says:

        We may have the same issue if and when we encounter an extraterrestrial intelligence. Maybe they’ll operate on a dramatically different time scale, like gas clouds or crystalline structures that take centuries to form a thought. Or maybe they’ll have a non-hierarchical hive-based intelligence in which any individual organism is no more significant than an individual human neuron.

    • allenmcbride says:

      But to some extent I expect we define intelligence in terms of resemblance to the human mind. Computers have been faster than us at arithmetic since we’ve called them computers. And that’s a valid, if narrow, way to measure intelligence, right? So when we say, “But fast arithmetic isn’t what we mean by intelligence,” to what extent do we mean, “But fast arithmetic isn’t what humans are good at”?

      • Humbabella says:

        We also are very biased towards black boxes when we look for intelligence. We want to be able to draw a border around a thing and say: “There, that is the intelligence.” I think of the singularity enthusiasts who talk about an intelligence that can make itself more intelligent – somehow missing that it is already the case that no one has designed a new computer without the aid of a computer in decades.

      • wysinwyg says:

         Well said.

  6. timquinn says:

    This discussion always leaves me pissed off. I did a project a long time ago called “Reinventing Chess” which centered on the idea that we can always beat the computer at chess because we can change the rules. Fuck you, pathetic computers. Buncha wannabes.

    • allenmcbride says:

      No reason a computer couldn’t change the rules too. I’ve never heard of a computer-designed game (including rules), but it seems like someone must have tried it.

      • timquinn says:

        Once you have instantiated it in an algorithm, it is the rules, and only we can see that and change them. It will always be so.

        Until Artificial Consciousness exists, anyway.

      • timquinn says:

        I am remembering now that I redefined chess to be about what human intelligence has over machine intelligence, which I presumed to be extracting meaning out of thin air, e.g. metaphor, analogy, context. And so stepping outside the frame became emblematic. I still think this may be the crux.

  7. GlenBlank says:

    The video on the Entropica website ( http://www.entropica.com/ ) is, honestly, one of the funniest things I’ve seen in quite some time.  I’m recovering from surgery, and I laughed so much it hurt.

    And it doesn’t even mention the imaginary physics part. :-)

    People are taking this seriously?  

    Seriously?

    Ouch.  :-|

    • cmpalmer says:

      It looked a lot like “Beginning Processing programmer discovers random numbers.” Random walks with fuzzy goals may be a good human simulator, but not a promising AI.

    • seanmchugh says:

      I’m taking this seriously.  And, after reading the original PRL, I’d say the authors of this New Yorker article grossly misunderstand the result.  Wissner-Gross and Freer are not claiming there is an actual physical force guiding intelligent agents.  Instead, they are claiming that by introducing a fictitious “causal entropic” force to a statistical mechanical (dumb) system, one can see intelligent-type behavior.  This is “physics” in the same sense that econophysics is physics.  And I think it’s an important and fascinating result that one can produce such complex and intelligent behavior from such a generic and simple recipe.  

  8. SamSam says:

    The whole premise is simply absurd. It’s a lie to say that an atom bouncing in a box will tend to move towards the center. Let the atom bounce, and take a snapshot at any point in time. It will have an exactly equal probability of being anywhere in the rectangle. (A toy sketch of this plain bouncing case is at the end of this comment.)

    If Dr. Wissner-Gross can write me a simulation of an atom that contradicts what I said in any way, without specially-coding the movement towards the center, I will give him $100.

    (Full disclosure: I write atomic simulation software.)

    Maggie: I agree with the commenter above who said that nothing in the New Yorker article said that this was a “good idea.” The man sounds intelligent, but this is nothing but hype.
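
    Here is the kind of thing I mean, by the way (a minimal toy of the plain bouncing case, my own code, nothing to do with Entropica): let a particle bounce elastically in a box with no special force, tally where it spends its time, and the counts come out flat.

    ```python
    import random

    # Minimal toy: one particle bouncing elastically in a 2-D unit box.
    # No entropic anything, just straight-line motion with reflections.
    # Tally which horizontal third of the box the particle occupies at each step;
    # the three counts come out roughly equal, i.e. no drift toward the center.

    def bounce(p, v):
        p += v
        if p < 0.0:
            p, v = -p, -v          # reflect off the lower wall
        elif p > 1.0:
            p, v = 2.0 - p, -v     # reflect off the upper wall
        return p, v

    x, y = random.random(), random.random()
    vx, vy = 0.013, 0.007          # arbitrary, incommensurate step sizes
    thirds = [0, 0, 0]

    for _ in range(500_000):
        x, vx = bounce(x, vx)
        y, vy = bounce(y, vy)
        thirds[min(2, int(x * 3))] += 1

    print(thirds)                  # roughly equal counts
    ```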

    • seanmchugh says:

      Please read the original PRL SamSam.  It’s quite accessible.  They do not claim that is the actual behavior of a particle-in-a-box.  It’s the opposite.  They claim that by introducing a fictitious “causal entropic” force they get this unusual behavior.

      • SamSam says:

        But it’s not at all clear that this fictitious “force” can act in any way without a very specifically hard-wired notion of “maximum entropy,” which is what I was referring to by saying that you have to specially-code the atom’s movement towards the center.

        If you write an algorithm that says

        1) There is maximum entropy (or opportunities or degrees of freedom) when a pole is upright
        2) Look ahead and move such that you increase your entropy

        then you can’t act all amazed when the algorithm “balances” the pole. You hard-coded it! Ok, you did it in a vaguely oblique way, but anyone who’s ever written a genetic algorithm or similar heuristic will immediately recognize this as a regular variation on a theme.

        The problem is that by calling it a “force” you’re making it sound like there is an unguided natural tendency of the system to follow this behavior. There isn’t. You have to essentially hard-code it for every simulation.

        The authors didn’t create the pulley example out of nowhere and then just let the system go at it. They knew what they wanted the solution to be, hard-coded the fitness algorithm to reward approaching that known solution (phrasing it as “increasing entropy”), and then just let a standard fitness-increasing algorithm go to work. Making it sound like some natural “force” is just to trick you into thinking it’s something more than it is.
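
        To make that concrete, here is roughly what the two-step recipe looks like for the simplest case, a particle on a line segment (my own toy sketch, not the authors’ code or Entropica): the only reason it drifts toward the middle is that we scored states by how spread out their reachable futures are.

        ```python
        import random

        # Toy "look ahead and increase your entropy" controller for a particle
        # on the segment [0, 1]. For each candidate nudge we roll out a handful
        # of random futures and score the nudge by how spread out the endpoints
        # are (variance as a crude stand-in for path entropy). My own sketch of
        # the general recipe under discussion, not the published algorithm.

        def clamp(x):
            return min(1.0, max(0.0, x))

        def rollout(x, steps=20, noise=0.05):
            for _ in range(steps):
                x = clamp(x + random.gauss(0, noise))   # random future, stuck at walls
            return x

        def future_spread(x, samples=200):
            finals = [rollout(x) for _ in range(samples)]
            mean = sum(finals) / samples
            return sum((f - mean) ** 2 for f in finals) / samples

        def step(x, nudges=(-0.05, 0.0, 0.05)):
            # The hard-coded objective: move wherever the reachable futures are
            # most spread out. Next to a wall, that is always inward.
            best = max(nudges, key=lambda n: future_spread(clamp(x + n)))
            return clamp(x + best)

        x = 0.05
        for _ in range(150):
            x = step(x)
        print(x)   # tends to drift away from the walls and wander around the middle
        ```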

        • seanmchugh says:

          You’re right that one must be vigilant for the built-in assumptions, but amazement is a subjective matter.  I’m also not amazed by the balancing pole toy model; however, the tool-using disc is a level of complexity that’s hard to foresee from the two-step algorithm you described.  Also, it seems to me your complaint about the results being hard-coded could apply to any simulation, e.g., “Of course you get a phase transition, you hard-coded it when you wrote down the partition function for a finite-temperature Ising model!”  That’s not fair.  

          By using the word “force” in this context they’re using it in the abstract, non-physical sense familiar to physicists plying their trade outside of the physical world. It’s unfortunate that such simple language has confused this result.

          • SamSam says:

            Also, it seems to me your complaint about the results being hard-coded could apply to any simulation, e.g., “Of course you get a phase transition, you hard-coded it when you wrote down the partition function for a finite-temperature Ising model!”  That’s not fair.  

            The difference comes when you’re describing an Artificial Intelligence system that’s supposed to create its own general solutions for hard problems.

            If all of your examples involve deciding ahead of time what the fitness function will be that will provide a pre-determined path towards the solution, then you haven’t shown anything close to what you’re saying you’ve shown.

          • seanmchugh says:

            “…AI system that’s supposed to create its own general solutions for hard problems.” No statement like that appears in their paper. You’ve introduced a straw man. The authors have a narrower scope and say in the first paragraph that they are introducing “… a causal generalization of entropic forces that we show can spontaneously induce remarkably sophisticated behaviors associated with the human ‘cognitive niche,’ including tool use and social cooperation, in simple physical systems.”

            Also, just to clarify, they are studying open, non-equilibrium systems. In this case, the use of a fictitious entropic force ~ (temperature)*grad(entropy) is common. They make a natural (but unphysical) extension to this by incorporating a finite look-ahead time tau. If you remove that feature, then what the authors are doing is normal stochastic (Langevin) dynamics. As such, the solutions are no more predetermined or built-in than what one finds there.
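
            For reference, this is roughly how the paper defines it, as I recall (please check the PRL itself, I am reproducing from memory): the force is the gradient of a Shannon entropy taken over bundles of possible future trajectories of duration tau, not over instantaneous microstates.

            ```latex
            % Causal entropic force over a look-ahead horizon \tau (my transcription):
            F(\mathbf{X}_0, \tau) = T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \Big|_{\mathbf{X}_0},
            \qquad
            S_c(\mathbf{X}, \tau) = -k_B \int \Pr\!\big(\mathbf{x}(t) \mid \mathbf{x}(0)\big)\,
            \ln \Pr\!\big(\mathbf{x}(t) \mid \mathbf{x}(0)\big)\, \mathcal{D}\mathbf{x}(t)
            ```

            Here the path integral runs over all trajectories x(t) of duration tau starting from the macrostate X, and T_c is a “causal path temperature” that sets the strength of the fictitious force. Remove the look-ahead horizon and you are back to ordinary entropic forces and Langevin dynamics.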

          • SamSam says:

            “…AI system that’s supposed to create its own general solutions for hard problems.” No statement like that appears in their paper. You’ve introduced a straw man.

            This is obviously the source of our disagreement. Maggie’s original post (check the title of this thread), the New Yorker article, and I all think that the company is implying it is creating a general AI for hard problems.

            Did you see all the examples in the video? This thing is going to be organizing our naval deployments while picking all our stocks.

            I’m arguing against the hype.

    • seanmchugh says:

      “I’m arguing against the hype.”  I appreciate and sympathize with that urge.  But scientific claims must be addressed dispassionately.  And I’m arguing against Marcus’s, Davis’s, and your misunderstanding and misrepresentation of the published work.  

  9. mickcollins says:

    That conclusion was old in the late 20th century.
