Cognitive Maps & Brain Territories

Apropos of my last post on the status of cognitive science, or the state of an emerging art, two recent articles address the issue from different disciplinary perspectives. The first, by psychologist Gary Marcus, biophysicist Adam Marblestone, and neuroscientist Jeremy Freeman, discusses the problems surrounding the big-money, big-data brain mapping projects being touted as the next big thing in science. While the authors laud these projects, they are cautious about what the results will mean:

But once we have all the data we can envision, there is still a major problem: How do we interpret it? A mere catalog of data is not the same as an understanding of how and why a system works.

When we do know that some set of neurons is typically involved in some task, we can’t safely conclude that those neurons are either necessary or sufficient; the brain often has many routes to solving any one problem. The fairy tales about brain localization (in which individual chunks of brain tissue correspond directly to abstract functions like language and vision) that are taught in freshman psychology fail to capture how dynamic the actual brain is in action.

One lesson is that neural data can’t be analyzed in a vacuum. Experimentalists need to work closely with data analysts and theorists to understand what can and should be asked, and how to ask it. A second lesson is that delineating the biological basis of behavior will require a rich understanding of behavior itself. A third is that understanding the nervous system cannot be achieved by a mere catalog of correlations. Big data alone aren’t enough.

Across all of these challenges, the important missing ingredient is theory. Science is about formulating and testing hypotheses, but nobody yet has a plausible, fully articulated hypothesis about how most brain functions occur, or how the interplay of those functions yields our minds and personalities.

Theory can, of course, take many forms. To a theoretical physicist, theory might look like elegant mathematical equations that quantitatively predict the behavior of a system. To a computer scientist, theory might mean the construction of classes of algorithms that behave in ways similar to how the brain processes information. Cognitive scientists have theories of the brain that are formulated in other ways, such as the ACT-R framework invented by the cognitive scientist John Anderson, in which cognition is modeled as a series of “production rules” that use our memories to generate our physical and mental actions.

The challenge for neuroscience is to try to square high-level theories of behavior and cognition with the detailed biology and biophysics of the brain.
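
To give a flavor of what a production-rule theory looks like when made computational, here is a minimal sketch in Python. It is purely illustrative and is not ACT-R itself: the rule names and the toy "phone" facts are invented for this example, and a real architecture like ACT-R adds subsymbolic activation, timing, and learning on top of this basic match-and-fire loop.

# A tiny condition-action production system (hypothetical, for illustration).
from dataclasses import dataclass
from typing import Callable

Memory = set[str]  # working memory: a set of simple facts ("chunks")

@dataclass
class Production:
    """A rule that fires its action when its condition matches memory."""
    name: str
    condition: Callable[[Memory], bool]
    action: Callable[[Memory], None]

def run(memory: Memory, rules: list[Production], max_cycles: int = 10) -> None:
    """Repeatedly fire the first matching rule until no rule matches."""
    for _ in range(max_cycles):
        rule = next((r for r in rules if r.condition(memory)), None)
        if rule is None:
            break
        print(f"firing: {rule.name}")
        rule.action(memory)

rules = [
    Production("answer-phone",
               lambda m: "phone-ringing" in m and "phone-answered" not in m,
               lambda m: m.add("phone-answered")),
    Production("say-hello",
               lambda m: "phone-answered" in m and "greeted" not in m,
               lambda m: m.add("greeted")),
]

run({"phone-ringing"}, rules)  # fires answer-phone, then say-hello

The basic cycle, matching conditions against the contents of memory and firing the corresponding actions, is the idea Anderson's framework formalizes; the challenge above is to explain how anything like it is implemented in actual neural tissue.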

The challenge Marcus and his coauthors identify is so significant, and so difficult, that many cognitive scientists have bracketed it, setting it aside as too complex. For tractability reasons, they construct cognitive models — and test them — without any reference to the actual brain. While this may be acceptable for a science in its relative infancy, it constitutes a bridging problem that cannot forever be ignored or simplistically set aside as insoluble. Because neuroscientists are making impressive advances and approaching cognitive science from a biological direction, the two disciplines will eventually meet. On whose terms, and on what theories, remains to be decided.

In the second, computer scientist Jaron Lanier discusses the myth of artificial intelligence and the “religion” built around the speculative hypothesis, or fear, of the singularity. Ironically, the tech futurists who get all mystical about these issues are, in other aspects of their lives, devoted to applied technology that works and, of course, makes money. Lanier, mindful that AI and cognitive science are cognate disciplines which, for all their impressive achievements, are not close to creating sentient machines or explaining human minds, is skeptical:

There’s a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists. Just to be absolutely clear, we don’t know how most kinds of thoughts are represented in the brain. We’re starting to understand a little bit about some narrow things. That doesn’t mean we never will, but we have to be honest about what we understand in the present.

This is something I’ve called, in the past, “premature mystery reduction,” and it’s a reflection of poor scientific mental discipline. You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you’re a lesser scientist. I don’t see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things. A great example is the Human Brain Project in Europe, which is a lot of public money going into science that’s very influenced by this point of view, and it has upset some in the neuroscience community for precisely the reason I described.

There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.

This is, or should be, good news. It’s good not because Elon Musk is probably wrong about the existential threats posed by AI, but because acknowledging our ignorance allows us to ask the right kinds of questions. Answers will come in due course, but we should measure that course in decades, if not centuries. In the meantime, we should continually remind ourselves that maps are not territories.



5 thoughts on “Cognitive Maps & Brain Territories”

  1. Dominik Lukes

    You say: “For tractability reasons, [cognitive scientists] construct cognitive models — and test them — without any reference to the actual brain. While this may be acceptable for a science in its infancy, this fundamental bridging problem cannot long be ignored, or simplistically set aside as insoluble.”

    But I don’t see why it should be at all necessary for cognitive science to refer to the brain in most, if not all, of its research. Behavioral biologists almost never refer to the chemistry that makes the organisms they study tick. Chemists do not as a rule take into account quantum phenomena. Etc. They acknowledge the substrate (inasmuch as a hierarchy can be established) but do not reduce their subject of study to those phenomena.

    It is not at all a given that most or any cognitive-level phenomena have direct neural correlates, in the same way that genes do not directly determine most of the behavioral features of humans. That’s not to deny that the brain is the organ that makes cognition happen, only to deny that it is the right place to look for answers, or that ignoring it is in any way a sign of immaturity in the field of cognitive science. If anything, it is neuroscience that is ‘in infancy’, with huge interpretation and even replication problems. Here’s my take on it at some length:

  2. Joe Miller

    What do you think of Stanislas Dehaene’s Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, Cris? I’d also like to know how you’ve received it, Dominik.

  3. Dominik Lukes

    Frankly, this is not the sort of thing I have a lot of time for. The very idea of some sort of homological relationship between thought and the brain is incredibly dubious, so anything starting from that premise is not something for which I’ll spare more than a cursory thought.

  4. Joe Miller

    I wouldn’t write it off just yet – Dehaene is one of the best in the field. It’s well worth reading, if only for the overview of contemporary research on how subconscious processes factor into decision making.

  5. Chris Tolworthy

    I don’t think the existential threat comes from AI being like us in any way. The existential threat is the one hinted at in the book Sapiens: the arrival of agriculture was great for absolute human numbers, but terrible for individual humans. AI has the same effect as agriculture: it allows societies as a whole to grow stronger by turning their members into cogs in the machine.
