Genealogizing Cognitive Science

While preparing to write a chapter on the cognitive science of religion, I thought it would be a good idea to investigate the foundations of cognitive science before getting to the “religion” offshoot of it. My main concern was that the words “cognitive” and “science” cast a talismanic spell: when they ritually appear together, it is easy to assume that what follows is authoritative and firmly grounded in theory, method, and data. One of the best ways to conduct such an investigation, and test assumptions about authority, is to read histories of the field. Intellectual histories, which might also be called genealogies, examine the origins of an idea, or discipline, and trace its development over time. The best genealogies expose assumptions, examine conflicts, and raise doubts. They can be corrosive, undermine faith, and disrupt myths. Though its name may suggest otherwise, cognitive science is not without its fair share of faith and myth.

My purpose here is not to examine these in any detail, but to point interested readers to sources which may prompt healthy skepticism. A good place to start is with Howard Gardner’s The Mind’s New Science: A History of the Cognitive Revolution. Though it is a bit dated, having been published in 1985, it more than adequately covers the deep origins of cognitivism in Cartesian-Kantian philosophy and its more recent origins in the 1950s with Chomsky’s revolt against behaviorism. It also covers the early debates and subsequent development of artificial intelligence, or “AI,” which was originally wedded to cognitivism but has since mostly gone its separate algorithmic and engineering ways.

For the truly intrepid, I recommend Margaret Boden’s two-volume magnum opus, Mind as Machine: A History of Cognitive Science (2006). Though it is oddly organized and at times idiosyncratic, it covers just about everything. Because the chapters are weirdly named and the index rather sparse, finding precious bits within its 1,708 pages can be daunting. Fortunately, an internet search will lead you to a virtual copy of both volumes, which you can then search with Adobe’s tool for key words, names, or phrases.

Because Gardner and Boden are committed and practicing cognitivists, it may seem strange that their histories engender skepticism. Yet ironically they do. While the cognitivist enterprise identifies as science, situates itself within science, and uses scientific methods, these alone do not secure its status, or authority, as science in the manner of physics, chemistry, or even “messy” biology. The mind, in many discouraging ways, remains a mysterious black box.

Reading conflicting cognitivist accounts of the way the mind supposedly works (“mechanically” and “computationally”) raises nagging concerns about whether these literate-symbolic representations of inner-mental representations are scientific metaphors or descriptive analogues. Metaphors do not become scientific simply, or complicatedly, because we can model, mathematize, and chart them. There are also nagging concerns about whether tests of these models are investigating anything other than the symbols, or terms, which the models presuppose. It is hard to find satisfying or foundational empirical proof in this complex conceptual pudding. Of course many cognitivists eschew such proof because it muddles the models.

So just how does the mind work? Steven Pinker, a true cognitivist believer, thinks he knows, so I re-read his popular classic, How the Mind Works (1997). While skimming over the just-so evolutionary stories he is so fond of telling, I focused on his modularity theses and computational arguments. I could not help but think that minds might work the way he claims, or they might not. We cannot get inside heads to observe the logically elegant unfolding and symbolically impressive inferencing he describes. There is no direct data. We can see all sorts of behavioral outputs, but describing these with plausible models is not the same as explaining them with definitive proofs.

Like most cognitivists, Pinker has been greatly influenced by Noam Chomsky’s work in linguistics and Jerry Fodor’s early work on modularity. These were plausible models, in their day, but Chomsky’s has undergone so many major revisions that no one is really quite sure where he stands, and Fodor has rejected the massive modularity extension of his original proposals. This leaves Pinker, and his version of cognitivism, on rather shaky ground. It also led to Fodor’s rebuke in The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology (2001). Others, such as Kim Sterelny, have critiqued the massively modular-evolutionary model and offered alternative accounts. In Thought in a Hostile World: The Evolution of Human Cognition (2003), Sterelny states his particular case. Like most models, it is plausible though not compelling and certainly not definitive. None of the cognitive models command our acquiescence or obedience by virtue of scientific authority.

Where does this small sampling of sources leave us? Regardless of who is more right or less wrong, the fact that these and many other arguments exist – among the most accomplished scholars in cognitive science – tells us something important about the status of the field. The foundations are far from being settled. This also tells us something important, cautionary to be sure, about the cognitive science of religion.

[Image: a boy entering a circuit-board head]


7 thoughts on “Genealogizing Cognitive Science”

  1. Steve Lawrence

    This is completely off-point. I write here only because I am ignorant of the right place for it. Beg indulgence: A recent article in The Atlantic sheds light, indirectly and by implication, on the function of religion. From big data sets it has been deduced that happiness (felt well-being) craters in mid-life. No doubt this is why the “mid-life crisis” is famous. After cratering, happiness begins to rise again, perhaps around age 50. This phenomenon is noted across cultures, and even among non-human primates (happiness as judged by their keepers). So religion, what we know as organized religion, would logically arise, if not to prevent the mid-life dissatisfaction that is a natural part of the primate condition (especially among those not starving, among the well-off), then at least to guide the mid-life person through it in a way less likely to result in precipitous action. Our genes (and they don’t care a hoot for our happiness) prefer that we become dissatisfied and, in response, go out and have an affair. It diversifies the genetic portfolio. But human society dislikes this. It is disruptive. Happiness is said to be determined by four horsemen: faith, family, community, and work. Religion encourages the first three of them. Those approaching middle age typically have children, perhaps at that rebellious teen age, who not only stress the adults’ lives but may also benefit from the strictures of faith and religion. The routine and ritual of religion thus ferries the family over troubled waters.

  2. Cris Post author

    I read that article and it never occurred to me that it shed any light on contemporary religions. While modern religions, or those which have developed over the past 3,000 years, may address the U-curve discussed in the article, I don’t see that those findings bear on evolutionary religious studies, or cognitive science. But let me think about it and get back to you.

  3. franscouwenbergh

    Dear, dear Cris,
    I’m so glad to see you desperate.
    You, one of those gifted students.
    Which I never was, at school or university. My gifted friends scored high marks, easily.
    I never did, but I drew their portraits during the breaks. During the lessons I was thinking about how humans became humans, knowing that they were not conjured onto the earth.
    In the ’90s, being a successful portraitist year after year and a grandfather, I started my project of constructing a science-based alternative to the vanishing belief in the monotheist Adam-and-Eve story in the free world.
    I started by looking at what humanism was developing in this project. Nothing. Their thinkers were academic philosophers! Aarch, everybody knows that academic philosophers know nothing about humanity beyond what ancient philosophers have written about it. But those old philosophers did not have our modern science at their disposal.
    I read what modern linguists thought about this. Aaarch! All gifted-students stuff! Chomsky. Fodor. Steven Mithen’s The Prehistory of the Mind (1996), Dan Dennett’s Consciousness Explained (1991), and so on and on. Everything was hard to read and nothing really explained anything.
    We, humans, started as apes. How come? In my experience nothing in our human story is so complicated and incomprehensible that you cannot explain it to your 11-year-old grandson.
    In my own reconstruction it was all clear: it started with names for the things. Because if something is possible, it will happen, sooner or later.
    Having names for the things does something to an animal.
    It creates a feeling of distance between the ‘namer’ and ‘the named thing’ (between ‘subject’ and ‘object’). They came to be emotionally detached from the world around them. Let me think of a ‘name’ for them. This apemen population became our earliest ancestors. They looked like bipedal bonobos. So ANBOS.
    – It enabled the ANBOS to pass knowledge from one generation to the next. Knowledge could be piled up.
    – It enabled the ANBOS to pool everybody’s inventiveness. Two know more than one, and consulting as a whole group the ANBOS could brainstorm and find solutions to problems. The ANBOS could make plans.
    – Having ‘names’ for the ‘things’ creates a feeling of power over the ‘named things’.

    Let us assume that this developing of ‘names’ for the ‘things’ started 4 mya. It led the ANBOS to start using fire. This meant a big jump in their (cultural) evolution into linguistic beings, because from then on they could spend their nights on the ground instead of making individual nests high in the canopy. Hours and hours were added to their communication: nightly hours, which lent themselves only to communication. What did the ANBOS communicate during these hours of rest? The processing of emotional events. In performances (Steven Mithen’s idea, in his After the Ice (2003) and his The Singing Neanderthals (2005)). Not that Mithen has this simple theory of ‘names for the things’, oh no, Steven is a gifted student. Too clever to think about simple things, I think.

    The ANBOS communicated with gestured ‘names’. Like our deaf people. But the ANBOS weren’t deaf, and neither were the Neandertals; they simply had no control over their voices, like all animals. Their voices only accompanied their gestural language. But they had neurological control over consonants such as kh’s and ph’s and ff’s. So the singing Neandertal language was not at all silent. Early AMHs (Anatomically Modern Humans) such as the San people still have clicks in their language.

    This simple theory also easily explains the origin of religion.
    Ever more ‘names’ for ever more ‘things’: it creates chaos in one’s mind if one has no coherence in it. This coherence is provided by the story: of how things started and developed, including ourselves, into what they are now. The creation story. What the Neandertals sang/danced was the creation story of their tribal world. Every night. Around the camp fire that kept the predators away.

    I told you about my simple theory two years ago. As a gifted student, you had no patience for a simple theory. But now you are desperate! Maybe you will think about such an uncomplicated ‘names for the things’ theory as the ‘invention’ that made us a symbolic species (the 1997 book by Terence Deacon, but again a gifted student, so hard to read and not satisfying: not simple enough).

  4. Cris Post author

    Frans, your simplistic “see spot run” claim about the origins of religion (and humanity in general), distilled from your reading of popular books, reminds me of HL Mencken’s observation: “For every complex problem there is an answer that is clear, simple, and wrong.” I don’t mean to be rude (or come across as “desperate”) but that’s what I think about your simple theory. Your flogging of the same old popular and simple horse is getting old. If you don’t have anything else or original to say, I’ll just start deleting these comments.

  5. Dominik Lukes

    I like that you’re venturing into my neck of the academic woods, Cris. But I have to say that your readings are not giving you the full picture of the cognitive revolution.

    That’s not to say that Boden is not immensely valuable or accurate. But her account is really focused on one very distinct strand of the cognitive revolution, the one beholden to the computational metaphor of the mind. That is not the only way in which cognition has been studied, only the one that has gotten the most attention. And unfortunately it is the strand that has the least to offer to any student of religion.

    I would definitely start with Jerome Bruner’s ‘Acts of Meaning’, which offers an alternative early history of the cognitive revolution and lays out a completely different agenda for it. Boden talks a lot about Bruner in the early days, but I’m not sure she mentions him much later on.

    Both Gardner and Boden give relatively short shrift to what I would consider the fundamental innovation of cognitive science, namely the study of how humans organize the world with their minds through things like frames, models, scripts, schemas, etc. (see here for a quick overview of why I think this is so important). There’s some mention of prototype theory but not enough.

    On this, I cannot but recommend Lakoff’s ‘Women, Fire, and Dangerous Things’ (a book I liked enough to translate into Czech). The first part is called ‘The Mind Beyond the Machine’ and gives a good overview of the field as well as setting out an agenda for a non-computational yet rigorous study of the mind. While Lakoff’s critique is not always bullet-proof, it is an important corrective. He also addresses the modularity question. A more recent overview of that is provided by Vyv Evans’ ‘The Language Myth: Why Language Is Not an Instinct’. A more programmatic book is ‘The Way We Think’ by Turner and Fauconnier, and a more wide-reaching one is Tomasello’s ‘The Cultural Origins of Human Cognition’.

    There really isn’t a single up-to-date overview of this side of the field in the style of Boden or Gardner, but I think the primary sources repay attention. I only listed the ones that are more broadly cognitivist, not those that focus only on language, although that’s where much of the thinking in this vein was forged.

  6. Chris Kavanagh

    Interesting post Cris, I always enjoy an appropriately skeptical response, and I think you have a point about over-essentialising cognitive metaphors. However, when I think about the ‘cognitive’ part of CSR, I have to say that I don’t immediately see it as being wedded to a specific cognitive theory. Rather, I take it as indicating that the relevance of cognition and the associated constraints will be given due consideration. Obviously different theorists endorse different cognitive models to varying degrees, but again, I don’t see this as a problem; it just reflects that our knowledge about the brain and cognition is still developing and open to revision. I don’t think we are in the same situation as we were, say, 20 years ago though. Even just considering our models of how memories work, there are fundamental gaps and still areas of debate, but we also know a lot more and there are areas of consensus. A discipline that in its title seeks to recognize the importance of cognition in the construction of aspects of human cultures seems to be on the right track to me.

    Oh and Dominik thanks for the nice recommendations/summaries of overviews! Very helpful!
