Blue Brain Project: Build a virtual brain in a supercomputer

60 Responses to “Blue Brain Project: Build a virtual brain in a supercomputer”

  1. daen says:

    Quite. Their publication history is … weird, to say the least. Only five papers in the last two and a half years, nothing at all in 2010, a whole slew in 2008. The publications seem interesting enough, mostly neurobiology-based (some neurobioinformatics), and reassuring to see PNAS in there. But such a poor publishing record doesn’t square with the whole ZOMG!-ness of the website, I’m afraid …

  2. 2k says:

    Perhaps I’m reading too much into this but the goal would seem to be a simulation of consciousness.
    If it turns out that quantum processes play an important role in the generation of consciousness, won’t these scientists have to embed their virtual brain in a virtual body in a damn good, quantum-capable virtual reality?
    Can we make one of those? Physics not being finished yet, an’ all.

    • SamSam says:

      The hypothesis that quantum processes play a role in the generation of consciousness is just a hypothesis by Penrose, and has almost no mainstream backing among the neuroscience community (unless things have changed greatly in the few years since I got my degree). Really, I read Emperor’s New Mind and thought it the biggest pancake stack of unsupported assumptions I had ever read…

      • daen says:

        I read Emperor’s New Mind and thought it the biggest pancake stack of unsupported assumptions I had ever read

        The most interesting take-home message I got from it was that consciousness is not algorithmic. Penrose makes a good argument for that in the section on Turing machines.

        • SamSam says:

          Frankly, he doesn’t. At least not the argument I remember. If you’re referring to the argument that a Turing machine will always be “inferior” to a brain because it can’t prove Gödel’s incompleteness theorem, then that’s been proven completely wrong, since it’s been shown many times that Turing machines can prove it using higher-order logic. Gödel’s theorem is, after all, essentially algorithmic. I wrote a paper on this in college, and could probably dig it up if you wanted more sources, but Martin Davis and Feferman (spelling?) both give clear rebuttals of Penrose’s arguments.

          • daen says:

            Actually, it was Turing’s proof of the non-existence of a general solution to the halting problem that I was thinking of, not Gödel. The question that Penrose raises is, if the human mind were algorithmic, how could it find a proof of the non-existence of an algorithmic solution algorithmically? It is, I think, the crux of the book.

          • Anonymous says:

            Why can’t you algorithmically find a proof of the non-existence of an algorithmic solution? Heck, here’s a trivial method:

            10 Write out a random string of symbols.
            20 Check if it’s a proof of the non-existence of a solution.
            30 If not, goto 10

            It will be unbelievably slow, far too slow to be useful in practice, but it still shows there’s nothing inherently impossible about this being done algorithmically. Why does Penrose think there is?
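
            In Python, the same generate-and-check idea looks like this (the proof checker is a stand-in predicate here, since checking a real formal proof, while decidable, won’t fit in a comment; the enumeration is systematic rather than random, so the search terminates whenever a witness exists):

```python
from itertools import product

def search(accepts, alphabet, max_len):
    """Enumerate every string over `alphabet` up to `max_len` symbols,
    returning the first one the checker accepts (None if there is none).
    This mirrors the write-out / check / goto loop, minus the randomness."""
    for n in range(1, max_len + 1):
        for cand in product(alphabet, repeat=n):
            s = "".join(cand)
            if accepts(s):
                return s
    return None

# Toy stand-in checker: "accept" any string ending in the marker "qed".
found = search(lambda s: s.endswith("qed"), "dqe", 4)
```

            Replace the toy predicate with a real proof checker and the argument goes through unchanged, just astronomically slower.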

  3. daen says:

    I’m more than a bit dubious of this top-down approach. One of their stated goals is to develop understanding of brain diseases. But even if they are successful in emulating some of the behaviour of healthy neurons, that gives you zero information about anything other than electrical signalling and synaptic structure in a healthy brain. It tells you nothing about why wet, organic, genetically wonky, chemistry and protein-based, diseased brains behave the way they do. It’s a useful first step, to be sure, but they are setting themselves up to fail, because, ultimately, the biology and chemistry must be taken into account, dynamically, with a considerable amount of precision.

    • pauldavis says:

      Their approach is specifically bottom up. The reason they need the “equivalent of a modern laptop” to simulate a single neuron is precisely because they are attempting to simulate as much of the biochemistry and biophysics of a neuron as possible. If this was just a “signalling model”, you wouldn’t need these kinds of resources.

      • daen says:

        No. It’s top down – they’re modelling a gross anatomical brain feature to emulate the cortical column and are hoping that a bunch of interesting stuff pops out of that.

        Bottom up really *would* be emulating the time-oriented biochemistry, mainly controlled by differential gene expression, over massively varying timescales: a lifetime of aging; months and hours of seasonal and circadian rhythms; milliseconds for nerve impulses; and down to nano- and picoseconds for intracellular chemical gradient changes.

        • SamSam says:

          No. It’s top down – they’re modelling a gross anatomical brain feature to emulate the cortical column

          I really don’t see where you’re getting this. As pauldavis says, they’re using the computing power of a laptop for every single neuron they’re simulating. If they were just abstracting the neurons and instead modeling the gross anatomical features, you wouldn’t need anywhere near that kind of processing power to model each neuron. The only reason they could need that kind of horsepower for each neuron is if they are actively trying to simulate each neuron as accurately as possible, with all the time-varying biochemical processes that implies.

          I’ve been on the other end of this, with AI neural networks. You can simulate a really basic action potential with about 10 lines of code. You can get a much more complex representation of a neuron, with the 3D placement of all its dendrites, in a couple hundred lines of code, and still simulate many thousands of them on your laptop. Even if you simulate the time course of synaptic chemicals, you can still model several thousand at once.
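
          For a sense of what “about 10 lines” means, here is a minimal leaky integrate-and-fire sketch in Python (all parameters are illustrative, not fitted to any real neuron):

```python
def lif_spikes(current, steps=500, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential v decays toward
    rest, is driven by the input current, and fires/resets at threshold."""
    v, spike_times = 0.0, []
    for step in range(steps):
        v += dt * (current - v) / tau   # leak toward rest, drive by input
        if v >= v_thresh:               # threshold crossed: spike...
            spike_times.append(step * dt)
            v = v_reset                 # ...and reset
    return spike_times
```

          A drive above threshold (say 1.5) produces a regular spike train; a drive below it (0.5) produces none. Everything the Blue Brain people are adding, ion channel kinetics, dendritic geometry, chemistry, sits on top of something this simple.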

          The fact that they are modeling one per laptop, if true, means that they are delving many orders of magnitude deeper into making an incredibly accurate simulation of each neuron, with all the time and 3D location biochemistry involved.

          • daen says:

            @SamSam: With respect, looking at the external behaviour of a cell is not going to give anywhere near enough information about its internal behaviour, which is precisely what you are going to need to even begin in silico drug testing.

            I can’t even begin to see where, in their current setup, they’ve paid any attention to gene expression, as they claim on the website. As far as I can tell, they have very cleverly done automated patch-clamp measurements and mapped out ion channel distributions across populations of rat neurons. This is very cool stuff, and Markram indeed made his reputation perfecting this, but *still*, from the perspective of modelling the brain, it is top down. The brain is not made of electrical wiring and connections; it is made of cells which generate electrical signals. There’s a fine distinction here, I know, and as has been said before, they had to start somewhere. But if you’re not modeling that internal cellular behaviour (and Markram himself acknowledges it’s far from straightforward to do so) you run the very real risk that the behaviour you model at the extracellular level won’t scale to a true macro level or have anything to say about reality.

            @pauldavis: Digestion is an enzymatic process, and the levels of those enzymes vary with hunger and with satiation. Neuronal firing is governed by the levels of neurotransmitters, the levels of which are controlled by cAMP, serotonin and dopamine, which are produced from enzymes which are differentially expressed according to mood, time of day etc. Muscle contraction depends on ATP which depends on ATP-ase, which is differentially expressed according to time of day, and so on. These processes are exquisitely controlled by (gene regulated) regulatory pathways and are incredibly difficult, and possibly impossible, to abstract without losing all semblance of reality.

        • pauldavis says:

          they’re modelling a gross anatomical brain feature … they’re modelling an admittedly incomplete model of neuronal behaviour, but scaled up to include realistic interconnection patterns and numbers of neurons. they have to start somewhere, and this is already way more complex than most of what’s been done before.

          time-oriented biochemistry … it’s far from clear what you mean here (all biochemistry is time-oriented, and none of it is) but it almost certainly isn’t related to “differential gene expression” within neurons.

          over massively varying timescales … if you’re interested in how a cortical column might function in realtime, that is, in making decisions/interpreting sensory information/controlling other bodily systems, you’re not interested in “massively varying timescales”, whether you could simulate them or not.

          • daen says:

            All biochemistry varies over time, because gene expression varies over time, and biochemistry is, almost by definition, a function of gene expression. Any neuronal model, regardless of how highly interconnected it is, will be impoverished by ignoring this basic fact. Neurons are, after all, metazoan cells much like any other, with circadian rhythm-dependent cellular machinery (for example …). Ignoring those timescales, just for starters, renders claims of meaningful neuronal simulation somewhat bunk.

            Yes, they have to start somewhere. But the overall tone of the project website is so gung-ho and cheerleader-y, with claims that are far beyond anything feasible, that I am immediately suspicious. The way they portray it is that once they’ve developed this LEGO-like toolkit of cortical columns, cures for brain diseases will almost drop into their laps. I would have been much more receptive if there was rather less of the purple prose.

            I am working for a company which is developing a thyroid cancer diagnostic system using gene expression profiling, and I can tell you that figuring out what normal thyroid cells are doing over time is far from straightforward, and the thyroid is hardly the most complex of organs in the body. Talking about creating a full-on brain simulator by ignoring the genetics and epigenetics just doesn’t make sense to me.

          • pauldavis says:

            All biochemistry varies over time, because gene expression varies over time, and biochemistry is, almost by definition, a function of gene expression … that’s a strange definition of biochemistry. Even if it was accurate to say that some biochemistry is a function of gene expression, which is debatable but not relevant, there is an awful lot of biochemistry that takes place both in intra- and extra-cellular contexts that features variations in the concentration of various molecules/atoms/ions but is totally unrelated to any change in gene expression. Digestion, neuronal firing, muscle contraction are just three examples. Now of course, these can all be affected over longer timescales by changes in gene expression, and that’s an interesting thing to study and understand. But if you were trying to build a model of any of these things, you would not be paying much attention to protein synthesis for quite a while.

            I also think that you’re cheapening the notion of differential gene expression by using it in a context (and timescale) where what you really mean is “normal cellular pathways that lead to increased/decreased quantities of protein X”. When you first used this phrase, I assumed that you meant changes in patterns of gene expression that occur over more extended periods of time and reflect cellular specialization, degradation or substantive environmental influence.

          • daen says:

            Any neuronal MODEL I meant …

      • daen says:

        And just because the model is complicated does *not* mean it’s complete! The Ptolemaic model had a fun bunch of epicycles, but ultimately wasn’t accurate.

      • darth_schmoo says:

        “If this was just a “signalling model”, you wouldn’t need these kinds of resources.”

        Well, maybe they wrote it in Java?

        Thanks for the clarification.

  4. PMcGorrill says:

    I feel like there might be something unethical about creating a disembodied person to be your captive specimen for psychological research.

    • Internet Poster Of Awesomness says:

      What could possibly be unethical about creating a helpless human mind, trapped inside some kind of virtual limbo, which has no mouth but must scream?

  5. FutureNerd says:

    If they would set out to build a mouse-sized V1 connected to a mouse-sized V2 cortex area, I’d be a lot closer to convinced they want to do actual science.

    Or if I had heard any news of interesting results coming out of the project so far.

  6. Anonymous says:

    worked out pretty great for that one company…what were they called…sky-something?

    :)

  7. double_tilly says:

    “You know that brains start with neurons…”

    I agree with 2k. Do we know the brain starts with neurons? Or is it just a convenient if arbitrary starting point?

  8. SamSam says:

    Sorry. The halting problem and Gödel’s incompleteness theorem are very similar, and Gödel’s first theorem can be proved as a consequence of the halting problem, if I remember correctly.

    The rebuttal by Martin Davis (who, after all, invented the term “halting problem”) is still applicable. The fact that we have a formal proof of the non-existence of a general solution to the halting problem means that a Turing machine can prove it too.

    I think in Shadows of the Mind Penrose admits that a computer could prove the halting problem (or Gödel’s incompleteness theorem, I forget which), but essentially says that the computer would just be “shuffling symbols around,” and would never really “believe” the proof like we do. At that point, it’s just back to the classic, unprovable arguments about AIs not having consciousness that people have argued over for decades.

    • daen says:

      I think you’ve misunderstood me.

      I’m not denying that the proof of the halting problem can be arrived at algorithmically; I remain unconvinced that the proof itself could have been generated algorithmically.

      We have arrived (inevitably) at P=NP?.

      • SamSam says:

        It really comes to the same thing. Given rules of logic (which we assume an intelligent computer can come up with by itself, or this is a whole other conversation), a computer can chain them together to form proofs. If the formal proof exists, it can be reached formally. By definition.

        There really are programs that do this. I worked with proof-creating computers while getting a master’s in AI. I never understood the damn things, because math isn’t my strong point and this was very much “human-in-the-loop” if you didn’t want it to take years, but they can keep chugging away at forming logical proofs.

        Certainly, it takes a spark of genius to form interesting proofs, and not infinite variations on simple tautologies. (If A=B and B=C, then A=C; If A=B and B=C and C=D, then A=D. If….). But there’s nothing logically preventing a computer from arriving at a formal proof by itself.
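
        That chaining of trivial equalities is exactly what a naive forward-chaining prover does. A toy Python version, restricted to equality facts alone (real provers add heuristics precisely so they find *interesting* theorems rather than endless tautologies):

```python
def provable(facts, goal):
    """Close a set of equality facts under symmetry and transitivity,
    then check whether the goal equality was derived."""
    known = set(facts) | {(b, a) for a, b in facts}  # symmetry
    grew = True
    while grew:  # keep applying transitivity until nothing new appears
        grew = False
        for a, b in list(known):
            for c, d in list(known):
                if b == c and (a, d) not in known:
                    known.add((a, d))
                    grew = True
    return goal in known

# Given A=B, B=C and C=D, the prover chains them to reach A=D.
```

        If a formal proof exists in the closure of the rules, a loop like this will reach it eventually; that is the whole “by definition” point.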

        Anyway, I think we may still be talking in circles, and maybe I don’t understand the distinction you’re drawing between generating the proof and arriving at it (the latter of which you say can be done by computers). But in any case, I am pretty confident that Penrose’s argument that the human mind can prove things that a Turing machine can’t has been satisfactorily discredited, at least with regard to the halting problem or the Gödel proof, and that today the book isn’t held in wide regard by either mathematicians or philosophers.

        • daen says:

          Well, the get-out clauses for the first GIT are that a theory cannot be both *complete and consistent*, and that there are truths which cannot be proven *in that theory*. For human beings, our ability to make miskates exonerates us from the consistency requirement; as to whether any putative algorithmic operations operating in our minds are complete – who knows? All bets are off for an inconsistent system (theory).

          As for emulating human consciousness, you’d probably need probabilistic Turing machines, which do exist, of course. But how could you ever write and debug code for such a beasty? Is that result you just saw the TM being (correctly) inconsistent, or you as the programmer (incorrectly) introducing a bug? There’d be no point debugging it on a deterministic TM – the inconsistency is a prerequisite to extract you from GIT.

          Madness would surely ensue.

          • Anonymous says:

            …as to whether any putative algorithmic operations operating in our minds are complete – who knows?

            They’re not. “Daen cannot say this sentence is true without contradicting himself” is perfectly true, but you will have to take my word for it.

        • daen says:

          Apropos Feferman – he has written about Penrose’s treatment of Gödel in Penrose’s second book on AI (“Shadows of the Mind”), quoting George Boolos:

          Gödel comes to the conclusion that “either…the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable Diophantine problems.” Boolos’ discussion of this is tonic: “There is a gap between the proposition that no finite machine meeting certain weak conditions can print a certain formal sentence (which will depend on the machine) and the statement that if the human mind is a finite machine, there exist truths that cannot be established by any proof the human mind can conceive. …it is certainly not obvious what it means to say that the human mind, or even the mind of some one human being is a finite machine, e.g. a Turing machine. And to say that the mind (at least in its theorem-proving aspect), or a mind, may be represented by a Turing machine is to leave entirely open just how it is so represented.”

          • SamSam says:

            Hmmm, I’m trying to work out if you’ve come around to my side of the argument without saying so. That quote, as you probably understood fully (but I feel the need to clarify), is criticizing Penrose’s argument by using Boolos’s criticism of Gödel’s similar argument.

            The full text of Boolos’s introduction can be found here (YMMV on how much Google shows).

            Boolos is arguing against a weaker form of Penrose’s argument, one that Gödel himself made: the disjunctive claim that either the mind surpasses a Turing machine, or there must be some mathematical proofs that the human mind cannot conceive. He explicitly states several paragraphs down, by the way, that this argument is different from the arguments that Penrose et al. have tried to make in the years since Gödel, and he dismisses those later arguments.

            Anyway, the “gap” in the quote above is that there is an unsupported logical jump between saying that “no finite machine meeting certain weak conditions can print a certain formal sentence” (GIT) and saying that the human mind would not be able to conceive certain proofs if it were a finite machine. So he’s rejecting Gödel’s argument.

          • daen says:

            I just thought I’d post it with no commentary – I haven’t actually changed my mind. I think Boolos is being a bit sneaky here – he professes confusion as to what it means if the human mind (or a mind) is a finite machine. But it’s obvious what that then means, in terms of bounding limits on computational complexity. Boolos simply wants to dodge the implications of the mind NOT being a finite state machine – that it is not necessarily bounded by those limits, and therefore cannot be represented by a finite machine – by defining his own middle way through Gödel’s dichotomy. It’s an old rhetorical device, and it doesn’t wash here, because there isn’t a path between Scylla and Charybdis in this case.

  9. Zoman says:

    “European Commission research grant that would bring in 100 million euros a year for 10 years.”

    Wow, the AI smoke-and-mirrors act has come a long way. 100 million is a lot of gravy for a server cluster… oh I’m sorry, I mean of course “Neural Network!” a.k.a. “Artificial Brain!”. Strange that nobody is willing to throw 100 million at the coneheads espousing zero-point energy or anti-gravity, but neural networks, oh we’ll have those beauties because they are SOOooo useful. Science fiction says AI is practically a given.

  10. daen says:

    And by “arrived at” I mean “verified”. English is a foreign tongue to me today.

  11. dttri says:

    Wow. I have said it before, I think, but I am really an EPFL/ETH fanboi… so many cool projects: remember that pen-balancing table? Flying zeppelin robot penguins? Remotely controlled flies? 90% energy self-sufficient huts?

    Wish I had studied there…

  12. Brainspore says:

    As one approach for learning more about how brains work this sounds like an interesting project. I don’t have much confidence this approach is likely to lead to “emergent consciousness” though. Even a perfectly modeled human brain can’t develop a human-like mind if it’s divorced from the lifetime of sensory input and dynamic interaction that makes up the human experience. Or even the rat experience, for that matter.

  13. evanus says:

    It doesn’t say anything about creation of consciousness being a goal, but under any physicalist ontology which allows for personal consciousness, this very consciousness would be somehow emergent (or supervenient) on a physical system, and the brain seems to be a good guess for the system. Meaning, consciousness or mind is not something that is added in addition to the physical system, but it exists because of the system. Which again means that if a computer was created which recreated every function of a human brain, it might also create a mind, a consciousness. However, it would probably have no means of communicating any of its experiences (which raises other, interesting questions).

    This is of course very speculative, but if you hold that consciousness is something that is primarily recognized by its being experienced, not observed, and that this consciousness is based on a physical brain, then you can’t rule out that this sort of computer would be the physical foundation of an emergent mind.

    You wouldn’t need to upload a mind, because the mind would be there already.

    • 2k says:

      …as they say “Information is physical”

      I thought I should chime back in and add that I’m not suggesting that, because they might not emulate consciousness, there’s no point in pursuing the research; or even that it is impossible to create some kind of active-symbol system that approaches the functions of a primordial consciousness.

      What I was rather lazily driving at was the imbalance in the capabilities of virtual models of organic hardware. The brain has the rather plum position of being situated in reality with all the gooey, evolutionarily perfected structures.

      Perhaps a roaring trade in 3D printed copies of our healthy brains will perpetuate simulacra of our last-known firing pattern some years from now?

  14. daen says:

    I get that this is cool, fun research and everything, but for a top-flight scientific endeavour, there’s an awful lot of breathless rambling on their website.

    You might see some interesting new computing architectures emerge from this (for 100 million Euros a year, I’d certainly hope so!).

    But as a platform for testing new CNS drugs, it’s a bust. The reason is not that medicinal chemists don’t understand how to make good drugs, even ones that act on the brain. They do! But the *main* issue is getting the drug across the blood-brain barrier – most of the problems with neurological drugs crop up there. How can they test that if all they have is a big networked supercomputer modelling naked cortical columns?

    And given that every successful drug developed to date has been against a biological target, very often a protein in a particular conformation, how do they expect to emulate that with a neuron-level abstracted model? How can you check for specificity? Pharmacokinetics? Toxicity?

  15. alexithematic says:

    Neural modeling work can go into arbitrary detail, for example: Are single neurons important or can we average over the behavior of a network? What about signaling cascades within individual neurons? Diffusion of calcium ions within synapses? All of Blue Gene’s resources could easily have been consumed in modeling a single neuron if sufficient detail was included. How much detail is enough depends on the specifics of the questions being asked.

    I think we’d learn more about the brain by spending that 100 million euros a year on studying complex systems in general–looking for generic principles that hold when lots of things get together and interact.

  16. Enormo says:

    the team, known as the Blue Brain Project, spent two years tearing down rat brains to the molecular level

    YEEEEEEEEEEEHAAAAAAAAAAAAAAA!!!!!!!!!!

  17. awjtawjt says:

    I don’t understand why showing the back of a server rack with a bunch of hanging network cables is in any way a metaphor for a column of neurons. It’s not. It’s just a bunch of copper and fiber that communicates with packet-switch technology, which neurons don’t do. Completely different. There is no comparison.

    • keighvin says:

      awjtawjt: correct, they have nothing in common. However, what that rack of servers is simulating in the *software* they’re running is the point – they’re not trying to emulate neuronal hardware.

  18. Anonymous says:

    Run this C++ code:

    aint acker(aint x, aint y)
    {
        return x ? acker(x - 1, y ? acker(x, y - 1) : 1) : y + 1;
    }

    (aint is a class of arbitrary precision integers; this tries to compute the Ackermann function)
    and see what it does to ANY supercomputer…
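
    For the curious, a direct Python transcription (Python’s built-in ints are arbitrary precision, playing the role of the hypothetical aint class); small values are tame, but the growth is explosive:

```python
import sys
sys.setrecursionlimit(100_000)  # the recursion gets deep fast

def acker(x, y):
    """Ackermann function: total and computable, but not primitive recursive."""
    if x == 0:
        return y + 1
    if y == 0:
        return acker(x - 1, 1)
    return acker(x - 1, acker(x, y - 1))

# acker(2, y) = 2*y + 3 and acker(3, y) = 2**(y + 3) - 3 stay manageable;
# acker(4, 2) already has 19,729 digits -- don't evaluate it recursively.
```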

    • daen says:

      I’ve seen a fairly crucial bit of kit brought to its knees by someone trying to evaluate acker(4,4) … the guy was running it on a microVAX (VMS 4.5) which was a message logger at a financial data company we worked for, thought he’d dialled the priority down low, but hadn’t … Goodbye EL0. Fortunately, there was a redundant logger (EL1) which kicked in … it was possibly only that which saved his skin :-)

  19. daen says:

    Clarification: “Neuronal firing is governed by the levels of neurotransmitters such as serotonin and dopamine, which are produced from enzymes which are differentially expressed according to mood, time of day …”

    Not sure how cAMP got in there … need more neurotransmitter modulator (=caffeine) …

  20. daen says:

    Also: “possibly impossible”? *shakes head*

  21. Okapi says:

    It is nice to see some pics of what I do (Electrophysiology), on boingboing.

    To me, this project always seemed to have a lack of focus – no clear hypothesis.

    awjtawjt – the server pic is just used to give an idea of the computing power being used for their modeling.

    alexithematic’s point about making reasonable assumptions to simplify the model is important.

  22. ill lich says:

    That’s nothing, I built a virtual brain inside my own skull.

    I think.

  23. AnthonyC says:

    For Gödel’s theorem and the halting problem…
    There is also a theorem which states that there exists some constant c such that, for any integer n, there is an (n+c)-state Turing machine that solves the halting problem for all Turing machines with at most n states. This formalizes, somewhat, what we might mean by “higher-order logic.”

    As to the original post: it seems very clear that these researchers don’t exactly have a model of intelligence in mind. They’re just trying to duplicate the structure of a known thinking system- but without, of course, knowing why that system is capable of thought, or the firing pattern or strength of each connection at any particular moment. It seems to me like gluing on a beak and feathers to make your airplane fly, instead of discovering fluid dynamics. And yes, a hundred years ago people did just that.

    I doubt it will work, but what worries me even more is *what if it does work?* You will have created a human mind. Morally, if perhaps not legally, it would be due all the rights owed to a biological child. It may not be sane, since its firing patterns (given the initial neuronal wiring) may be more like seizures than thought. It will almost certainly be deprived of sensory data, and almost equally certainly incapable of adequately interacting with the outside world. Imagine the suffering involved if you were suddenly struck blind, deaf, and paralyzed, with no sense of touch, taste, smell, or proprioception. What would your brain do if it stopped receiving the appropriate signals from the spinal cord concerning your heart, lungs, and muscles? It would be cruel. Are they building some sort of virtual environment to interact with the virtual mind and provide sensory data and interactivity? And what happens when the research funding runs out, and the researchers move on to another project? Do you leave the simulation running? Is there any ethical way you could turn it off?

    • daen says:

      The thing is, even if this glue-a-beak-on-it-and-push-it-over-a-cliff approach did work, any emergent intelligence wouldn’t be human. How could it be? Where is the richness of human experience? The liking (or disliking) of strawberries and cream, to use Turing’s own example? Memories of past birthdays, and so on.

      I agree that the nightmare situation would be the spontaneous creation of a disembodied, senseless sentient being, doomed to spending a subjective eternity screaming silently in anger and incomprehension in a box.

      I feel the need for kittens, all of a sudden :/

      • Anonymous says:

        Where would a baby born without senses get the idea there is some place to escape to?

        • daen says:

          Dunno. Ask Helen Keller, maybe?

          • Brainspore says:

            The brain of a deaf and blind person still gets more sensory input every minute of every day than most computer interfaces could relay in a lifetime. Temperature, texture, taste, hunger, thirst, balance, hormones… the list goes on.

      • AnthonyC says:

        It would start human, assuming they got the wiring right and matched a configuration of neurons known to represent a human brain. In fact, if they did that, it could (assuming they got the initial firing pattern right) start out as a particular human, with a fully developed sense of identity, morality, and so on.

        OTOH, I doubt it would remain recognizably human for long, without a human body.

  24. Anonymous says:

    Nowhere does it say, nor has it been said, that the goal (or even a goal) is to create consciousness. It is also not their plan to start uploading people’s minds. We tend to see a project like this as if we were watching the start of a science fiction movie, which is problematic.

  25. Major Variola (ret) says:

    We grow tired of these manic hybrid scientists who think that simulating neurons (transistors) is going to reveal much about the software running on them – especially with a very limited understanding of the circuits they’re in. This guy is not the first, and neither the most nor the least flaky, but equally clueless.

    The columnar structure of cortex has been known for decades.

  26. Keneke says:

    Can someone explain to me how an individual neuron is analogous to a laptop computer? Are we talking processing power?

    If accelerating technology still works, perhaps they’ll be done with this project in half the time, a la the Human Genome Project? Or have they already calculated that in?

  27. keighvin says:

    The model will need considerable updating as knowledge of its functions improves – the distribution of metabolic changes or temperature affects the function of neurons, which is in turn driven by changes in diet, activity, endocrinology, etc. And that’s just neurons – glial cells have recently been discovered to contribute to coordinating slower-moving potentials across larger (or disparate) regions. What else will be contributing to function and health, and what synergies influence performance across systems (or even entities – who knows how much of it is microbially intertwined)?

    I applaud the effort, and certainly hope they get the green light, because this is the kind of progress that needs to be made to understand how we tick. I just hope they’re being flexible with their architecture so new findings can be incorporated without throwing away the growing body of work.
