Category Archives: Philosophy

Plains, Rocks & Cosmos

In anticipation of a summer touring the Great Plains, I took some time off from the blog to immerse myself in a surprisingly rich literature on the subject, which of course has nothing to do with religion. I will say, however, that anyone who has yet to discover this richness, or who is thinking about exploring the Plains, should consider reading some of the books listed at the end of this post. Having just read them in succession, I can report that the immersive effect is pronounced; I’m ready to go, but the weather is not yet cooperating. While waiting, and in anticipation of the anthropology of religion course I will be teaching in the middle of the summer, it’s time to circle back toward religion.

The good news is that in doing so, I won’t run the risk of being brutally murdered. For the third time this year, a “secular” Bangladeshi blogger has been hacked to death by irate religionists. These three blasphemous bloggers were writing on subjects similar to those that appear here, but they did so knowing they would be targeted. Talk about courage.

Here in the United States, we fortunately do not have to confront this sort of thing, though we do have young-earth Creationists, who are relatively harmless. While I have never paid them much mind because arguing with them is futile, a geology professor thinks that the rocks disprove creationism. He apparently does not know that young-earth Creationists have considered his argument and flatly rejected it. They are not interested in science and accept it only when it suits their psychological needs or religious purposes. But having said this, I was a bit shocked to encounter the following sentence in the professor’s piece:

“Embracing young Earth creationism means you have to abandon faith in the story told by the rocks themselves.”

This is an unfortunate choice of words. Why should we have faith in a story told by rocks? Rocks don’t tell stories. Geologists provide us with theory- and data-based narratives about rocks. These “stories” are subject to challenge, revision, and reversal. This method has nothing to do with faith.

From rocks to the cosmos, which is timely for anyone who has recently seen “Interstellar,” a movie with some brilliant science marred by metaphysical speculations about trans-dimensional love tunnels. It was marred even further by Matthew McConaughey’s overwrought acting, but that is another story. The main story here is the science based on Kip Thorne’s work and book, Black Holes and Time Warps: Einstein’s Outrageous Legacy. Though I am only about halfway through and not sure I understand everything, it is great for bending the mind. The cosmos is stranger than fiction and perhaps even myth.

Finally, the cosmos — and cosmological theories — are the subject of this dense piece by Ross Andersen over at Aeon. Cosmology, it appears, is in crisis and may stay that way for quite some time, perhaps forever. While this may unsettle some, I find it invigorating. When it comes to large and perhaps intractable subjects like this, I always find it helpful to read a good history of the field, so thanks to Andersen for recommending Helge Kragh’s Conceptions of Cosmos: From Myths to the Accelerating Universe: A History of Cosmology. It’s next on my list.

And speaking of lists, here is the one I promised at the beginning of this post, for all lovers of the Great Plains:

Great Plains by Ian Frazier
The Great Plains by Walter Prescott Webb
Love Song to the Plains by Mari Sandoz
Prehistoric Man on the Great Plains by Waldo Wedel
The Last Prairie: A Sandhills Journal by Stephen R. Jones
Ogallala Blue: Water and Life on the Great Plains by William Ashworth
Imagining Head-Smashed-In: Aboriginal Buffalo Hunting on the Northern Plains by Jack Brink


Slavish Conscience

Over at the London Review of Books, Adam Phillips criticizes self-criticism in an essay that includes this brilliant bit:

We are never as good as we should be; and neither, it seems, are other people. A life without a so-called critical faculty would seem an idiocy: what are we, after all, but our powers of discrimination, our taste, the violence of our preferences? Self-criticism, and the self as critical, are essential to our sense, our picture, of our so-called selves. Nothing makes us more critical – more suspicious or appalled or even mildly amused – than the suggestion that we should drop all this relentless criticism, that we should be less impressed by it and start really loving ourselves. But the self-critical part of ourselves, the part that Freud calls the super-ego, has some striking deficiencies: it is remarkably narrow-minded; it has an unusually impoverished vocabulary; and it is, like all propagandists, relentlessly repetitive. It is cruelly intimidating…and it never brings us any news about ourselves. There are only ever two or three things we endlessly accuse ourselves of, and they are all too familiar; a stuck record, as we say, but in both senses – the super-ego is reiterative. It is the stuck record of the past…and it insists on diminishing us. It is, in short, unimaginative; both about morality, and about ourselves. Were we to meet this figure socially, this accusatory character, this internal critic, this unrelenting fault-finder, we would think there was something wrong with him. He would just be boring and cruel. We might think that something terrible had happened to him, that he was living in the aftermath, in the fallout, of some catastrophe. And we would be right.

Phillips is right: there is something seriously wrong with the homunculi in our heads. With Freud as his theory-master and Hamlet as ego-actor, Phillips engages with conscience, that most intractable and culturally inflected aspect of ourselves. Though Michel Foucault merits no mention in his essay, Phillips is also talking about discipline: that resolve, sometimes steely but always nagging, which seemingly arises from within but which is implanted from without. In near modernity, or in Abrahamic times and places, this conscience or discipline is the voice of God, whose state-serving accoutrements present as morals. In modernity, or in consumer-capitalist times and places, this conscience or discipline is the voice of the Market, whose state-serving accoutrements present as desires. These are the shaming and punishing voices of masters, in which case we are slaves.

— Cris

[Image: self-criticism]


Fractured Evolutionary Narratives

Yesterday Nature published a study of a ~55,000-year-old cranium which shows that anatomically modern humans (“AMH”), presumably migrating out of Africa, were in the Levantine corridor at a time when genetic data indicate humans were interbreeding with Neanderthals. Because Neanderthals were also present in the Levant near this time, the cranium may represent the African group that first encountered Neanderthals, mixed with them, and gave rise to the hybrid ancestral or “modern” populations that later colonized Europe and large parts of Asia. This is a classic example of the right kind of fossil being in the right place at the right time. The presence of this AMH population had been hypothesized, based on genetic and other data, but direct evidence had been lacking until now.

So here we have yet another link, just one of many, in the history of hominin evolution. It’s a great story, but not a simple one. In an editorial accompanying this study, Nature cautions against linear-progressive conclusions:

Where does this fossil fit in? Beware simple answers, and, indeed, simple questions. There is a temptation when discussing human evolution to reconstruct it as a narrative, in which successive species evolved to be more like us, and the more like us they became, the more likely they were to migrate to other parts of the world and replace pre-existing forms.

There are at least four things wrong with this. The first is its rather imperialist framing, in which evolution and replacement can be justified after the fact as a kind of manifest destiny.

The second is that it dismisses any extinct species as inferior and therefore of secondary importance.

The third is that it assumes the existence of an arrow of progress, in which species always evolve towards ourselves, a mistaken view that is too welcoming of spurious conceits such as ‘missing links’, and unwilling to countenance odd side branches such as Homo floresiensis, the peculiar, dwarf hominin (member of the human family) that lived in Indonesia until relatively recent times (see nature.com/hobbit10).

The fourth, and arguably the most important, is that it misrepresents the extreme fragmentation of the fossil record, something that Charles Darwin recognized, with his usual percipience, as a ‘difficulty’ with his theory of evolution by natural selection. Darwin was (as usual) selling himself short. That evolution has happened is no longer in doubt: the shared chemistry and structure of all life, from the meanest microbe to the furriest feline, would be testament to that, even had no fossils ever been found.

While I do not agree that the fourth factor is most important (because the hominin fossil record is quite good, with no major gaps or “missing links”), the third factor is worth emphasizing: the story of hominin evolution is not a linear or ineluctable unfolding toward us. There is no arrow of progress in evolution, whether we are talking about hominins or “religion.”

— Cris

[Image: stylized tree of human ancestry]


Renaissance Magic & Science

Few things could seem as far apart as magic and science, though if we consider the history of science, we find that the two were intimately entwined. This was particularly true during the Renaissance run-up to the classical founding of science in the persons of Francis Bacon (1561-1626), Rene Descartes (1596-1650), and Isaac Newton (1642-1727). While we might add Copernicus (1473-1543) and Kepler (1571-1630) to this list of founders, I will set them aside for the moment because their status as astronomer-mathematicians is especially pertinent to my later discussion.

It is of course well known that Newton was anything but a pure scientist, at least in the modernist sense of the word: he was steeped in Christian mysticism and believed he was discovering, or uncovering, God’s lawful work in nature. The Principia was, in Newton’s eyes, far more than a founding document of science: it was a tribute to the divine as manifest in matter and mathematics.

Considered in broader historical context, Newton’s mysticism was hardly novel. The Italian Renaissance was inspired in large part by the idea that the universe was a harmonious whole and that the heavens emanated continuous influences over all things on earth. These harmonious influences could, moreover, be divined through number and manipulated by math. Those who concerned themselves with such matters were astronomers, astrologers, mystics, and mathematicians, often bound up in the single person of a Magus. Prominent among such persons were Pico della Mirandola (1463-1494) and Giordano Bruno (1548-1600), both Renaissance humanists and magi without peer. Bruno is often remembered as a champion of the Copernican model who was burned at the stake after being tried for heresy by the Inquisition. As such, he has become a martyr of science.

While there may be some truth to this, the matter is more complex, just as Bruno was complex. If one takes a Catholic view of such matters, there can be no doubt that Bruno was a theological heretic. He did, after all, declare that Jesus was not God but merely an “unusually skillful magician.” Had Bruno made this pronouncement (and others like it) as a skeptic, we might justly consider him an early scientist. It appears, however, that Bruno is better placed as a late magician, a Neoplatonic mystic steeped in Hermeticism, Kabbalah, and Pantheism. Bruno’s deepest desire was to unlock the mysteries of the universe, and find the true religion, in these traditions. The key, he thought, was number. In Bruno we find a near-perfect merger of magic, mysticism, and mathematics: the universe as seamless web and harmonious whole.

It is not hard, on one hand, to see how Bruno’s unorthodox views would have upset Catholic authorities and ultimately led to his fiery demise. It is not hard, on the other hand, to see how these views are consonant with modern cosmology and mathematics. So where to place, or how to figure, Bruno? This is the question asked and well answered by Frances Yates in Giordano Bruno and the Hermetic Tradition (1964), a book that has been on my reading list for years but which I only recently bagged. Aside from its inherent interest for Bruno aficionados, it is an important work for the history of science. As part of her inquiry into Bruno, Yates asks why scientific methods, particularly mathematical ones, appeared when they did. She is not satisfied with the standard, simplistic narrative in which science straightforwardly triumphs over superstition and religion. Here are some key excerpts which shed light on her answer:

The intense concentration on the complexities of universal harmony, which is one of the most characteristic aspects of Renaissance thought…so forcefully directed attention on number as the key to all nature that it may be said to have prepared the way for genuine mathematical thinking about the universe. As is well known, Kepler still saw his new astronomy in a context of harmonies, and he was well aware that the Pythagorean theory was also implicit in the Hermetic writings, of which he had made a careful study (151).

Copernicus introduces his [heliocentric] discovery to the reader as a kind of act of contemplation of the world as a revelation of God, or as what many philosophers have called the visible god. It is, in short, in the atmosphere of the religion of the world that the Copernican revolution is introduced (153).

Copernicus’ discovery came out with the blessing of Hermes Trismegistus upon its head, with a quotation from that famous work in which Hermes describes the sun-worship of the Egyptians in their magical religion (154-55). Bruno’s use of Copernicanism shows most strikingly how shifting and uncertain were the borders between genuine science and Hermeticism in the Renaissance. [This is] a theme which I believe may be of absolutely basic importance for the history of thought — namely, Renaissance magic as a factor in bringing about fundamental changes in the human outlook (155).

The mighty mathematician [Kepler] who discovered the elliptical orbits of the planets had, in his general outlook, by no means emerged from Renaissance influences. His heliocentricity had a mystical background; his great discovery about the planetary orbits was ecstatically welcomed by him as a confirmation of the music of the spheres; and there are survivals of animism in his theories (440).

Hence, it is now suggested, when “Hermes Trismegistus” and all that he stood for is rediscovered in the Renaissance, the return to the occult this time stimulates the genuine science. The emerging modern science is still clothed in what might be described as the Hermetic atmosphere (450).

Bruno was an out-and-out magician, an “Egyptian” and Hermetist of the deepest dye, for whom the Copernican heliocentricity heralded the return of magical religion…Through a Hermetic interpretation of Copernicus and Lucretius, Bruno arrives at his astonishing vision of an infinite extension of the divine as reflected in nature (451).

Drained of its animism, with the laws of inertia and gravity substituted for the psychic life of nature as the principle of movement, Bruno’s universe would turn into something like the mechanical universe of Isaac Newton, marvellously moving forever under its own laws placed in it by a God who is not a magician but a mechanic and a mathematician (451). It may be illuminating to view the scientific revolution as in two phases, the first phase consisting of an animistic universe operated by magic, the second phase of a mathematical universe operated by mechanics (452).

Yates concludes her book by astutely commenting on the ways in which all this affected Descartes, whose methodological dualism so fatefully separated mechanical or “inert” matter from animist or “spiritual” mind. This powerful legacy remains with us today, despite our alleged modernity and secularity.

I will conclude with two additional observations. First, Yates’ entire theme lends strong support to Robin Horton’s continuity thesis, which holds that the links between traditional religion and modern science are deeper (both historically and structurally) than we frequently suppose. Second, there is irony in the fact that some modern cosmologists, particularly mathematical physicists, occasionally arrive at mystical or “spiritual” positions not so far removed from Bruno’s Hermetic universe. It’s magic, or math, as the case may ultimately be.

— Cris

[Image: Galileo on mathematics as the alphabet of the universe]


Non-Instinctive Language

In terms of intellectual history, cognitive science is largely built on the Chomskyan idea that humans have an evolved language “instinct,” “organ,” or “module.” This facultative idea forms the premise of Steven Pinker’s bellwether book, The Language Instinct (1994), and establishes the foundation from which all manner of cognitive extensions have sprung. When Lawson and McCauley inaugurated the cognitive science of religion with Rethinking Religion: Connecting Cognition and Culture (1990), Chomsky was front and center. Given these origins, it would be a foundational problem if Chomsky were mostly wrong.

Chomsky’s claims have of course undergone considerable revision since he first presented them five decades ago. Some might even say that the revisions have been so considerable that almost nothing is left, and that his current claim bears little or no resemblance to the initial one. But the essential, or essentializing, residue of the initial claim remains in the form of an evolved language instinct and a Universal Grammar that is somehow encoded in our genes and manifest in minds. So if this were to go, or be shown to be wrong, what then?

Though he does not answer this particular question or slide down this slippery cognitive slope, linguistics professor Vyvyan Evans argues that Chomsky and Pinker are wrong: there is no language instinct. While I have not yet read Evans’ book, The Language Myth: Why Language is Not an Instinct (2014), I just read his Aeon article on the same topic. It’s a cogent statement of the criticisms that have been leveled against the notion of an evolved, if not encapsulated, language module. Here’s the bold lede:

Imagine you’re a traveller in a strange land. A local approaches you and starts jabbering away in an unfamiliar language. He seems earnest, and is pointing off somewhere. But you can’t decipher the words, no matter how hard you try.

That’s pretty much the position of a young child when she first encounters language. In fact, she would seem to be in an even more challenging position. Not only is her world full of ceaseless gobbledygook; unlike our hypothetical traveller, she isn’t even aware that these people are attempting to communicate. And yet, by the age of four, every cognitively normal child on the planet has been transformed into a linguistic genius: this before formal schooling, before they can ride bicycles, tie their own shoelaces or do rudimentary addition and subtraction. It seems like a miracle. The task of explaining this miracle has been, arguably, the central concern of the scientific study of language for more than 50 years.

In the 1960s, the US linguist and philosopher Noam Chomsky offered what looked like a solution. He argued that children don’t in fact learn their mother tongue – or at least, not right down to the grammatical building blocks (the whole process was far too quick and painless for that). He concluded that they must be born with a rudimentary body of grammatical knowledge – a ‘Universal Grammar’ – written into the human DNA. With this hard-wired predisposition for language, it should be a relatively trivial matter to pick up the superficial differences between, say, English and French. The process works because infants have an instinct for language: a grammatical toolkit that works on all languages the world over.

At a stroke, this device removes the pain of learning one’s mother tongue, and explains how a child can pick up a native language in such a short time. It’s brilliant. Chomsky’s idea dominated the science of language for four decades. And yet it turns out to be a myth. A welter of new evidence has emerged over the past few years, demonstrating that Chomsky is plain wrong.

While criticism of Chomsky is nothing new, this kind of full frontal assault is. Because it’s so contrarian and counter to received wisdom, I’m guessing many will be tempted to dismiss it without delving deeper or reading Evans’ book. This would be a mistake, as this article is only a sketch. I will say, however, that some of the strokes are pointed, if not compelling.

[Image: Banksy, Neo-Paleolithic]


Cognitive Maps & Brain Territories

Apropos of my last post on the status of cognitive science, or the state of an emerging art, two recent articles address the issue from different disciplinary perspectives. The first, by psychologist Gary Marcus, biophysicist Adam Marblestone, and neuroscientist Jeremy Freeman, discusses the problems surrounding big-money and big-data brain-mapping projects that are being touted as the next big thing in science. While the authors laud these projects, they are cautious about results:

But once we have all the data we can envision, there is still a major problem: How do we interpret it? A mere catalog of data is not the same as an understanding of how and why a system works.

When we do know that some set of neurons is typically involved in some task, we can’t safely conclude that those neurons are either necessary or sufficient; the brain often has many routes to solving any one problem. The fairy tales about brain localization (in which individual chunks of brain tissue correspond directly to abstract functions like language and vision) that are taught in freshman psychology fail to capture how dynamic the actual brain is in action.

One lesson is that neural data can’t be analyzed in a vacuum. Experimentalists need to work closely with data analysts and theorists to understand what can and should be asked, and how to ask it. A second lesson is that delineating the biological basis of behavior will require a rich understanding of behavior itself. A third is that understanding the nervous system cannot be achieved by a mere catalog of correlations. Big data alone aren’t enough.

Across all of these challenges, the important missing ingredient is theory. Science is about formulating and testing hypotheses, but nobody yet has a plausible, fully articulated hypothesis about how most brain functions occur, or how the interplay of those functions yields our minds and personalities.

Theory can, of course, take many forms. To a theoretical physicist, theory might look like elegant mathematical equations that quantitatively predict the behavior of a system. To a computer scientist, theory might mean the construction of classes of algorithms that behave in ways similar to how the brain processes information. Cognitive scientists have theories of the brain that are formulated in other ways, such as the ACT-R framework invented by the cognitive scientist John Anderson, in which cognition is modeled as a series of “production rules” that use our memories to generate our physical and mental actions.

The challenge for neuroscience is to try to square high-level theories of behavior and cognition with the detailed biology and biophysics of the brain.

This challenge is so significant, and so difficult, that many cognitive scientists have bracketed it, or set it aside, as too complex. For tractability reasons, they construct cognitive models — and test them — without any reference to the actual brain. While this may be acceptable for a science in its relative infancy, it constitutes a bridging problem that cannot forever be ignored or simplistically dismissed as insoluble. Because neuroscientists are making impressive advances and approaching cognitive science from a biological direction, the two disciplines will eventually meet. On whose terms, or on what theories, is yet to be decided.
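For readers unfamiliar with the ACT-R framework mentioned in the excerpt above, here is a toy sketch, in Python, of what a production-rule model of cognition looks like. The rule, the memory contents, and the function names are hypothetical inventions for illustration only; they are not drawn from ACT-R itself or from the article quoted above. The point is simply the general pattern: condition-action rules matched against declarative memory to generate behavior.

```python
# Toy sketch of a production-rule cycle, loosely in the spirit of ACT-R.
# All rules and memory contents below are hypothetical and greatly simplified;
# this is not the ACT-R framework, only an illustration of the general idea.

# Declarative memory: labeled "chunks" of goals and facts.
memory = {
    ("goal", "add"): {"a": 3, "b": 4},
    ("fact", "3+4"): {"sum": 7},
}

def rule_retrieve_sum(mem):
    """IF there is an addition goal and a matching arithmetic fact in memory,
    THEN produce the action of reporting the retrieved sum."""
    goal = mem.get(("goal", "add"))
    if goal is None:
        return None
    fact = mem.get(("fact", f"{goal['a']}+{goal['b']}"))
    if fact is None:
        return None
    return ("say", fact["sum"])

rules = [rule_retrieve_sum]

def cycle(mem):
    """One match-and-fire cycle: the first rule whose conditions match
    declarative memory produces an action."""
    for rule in rules:
        action = rule(mem)
        if action is not None:
            return action
    return ("no-op", None)

print(cycle(memory))  # -> ('say', 7)
```

Even a sketch this small makes the bridging problem visible: nothing in such a model says anything about neurons, which is precisely the gap the authors want theory to close.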

In the second, computer scientist Jaron Lanier discusses the myth of artificial intelligence and the “religion” built around the speculative hypothesis, or fear, of the singularity. Ironically, the tech futurists who get all mystical about these issues are, in other aspects of their lives, devoted to applied technology that works and, of course, makes money. Lanier, mindful of the fact that AI and cognitive science are cognate disciplines which, for all their impressive achievements, are not close to creating sentient machines or explaining human minds, is skeptical:

There’s a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists. Just to be absolutely clear, we don’t know how most kinds of thoughts are represented in the brain. We’re starting to understand a little bit about some narrow things. That doesn’t mean we never will, but we have to be honest about what we understand in the present.

This is something I’ve called, in the past, “premature mystery reduction,” and it’s a reflection of poor scientific mental discipline. You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you’re a lesser scientist. I don’t see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things. A great example is the Human Brain Project in Europe, which is a lot of public money going into science that’s very influenced by this point of view, and it has upset some in the neuroscience community for precisely the reason I described.

There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.

This is, or should be, good news. It’s good not because Elon Musk is probably wrong about the existential threats posed by AI, but because we acknowledge ignorance and ask the right kinds of questions. Answers will come, in due course, but our measures should be in decades, if not centuries. In the meantime, we should continually remind ourselves that maps are not territories.

[Image: map and territory]


Genealogizing Cognitive Science

While preparing to write a chapter on the cognitive science of religion, I thought it would be a good idea to investigate the foundations of cognitive science before getting to the “religion” offshoot of it. My main concern was that the words “cognitive” and “science” cast a talismanic spell: when they ritually appear together, it is easy to assume that what follows is authoritative and firmly grounded in theory, method, and data. One of the best ways to conduct such an investigation, and test assumptions about authority, is to read histories of the field. Intellectual histories, which might also be called genealogies, examine the origins of an idea, or discipline, and trace its development over time. The best genealogies expose assumptions, examine conflicts, and raise doubts. They can be corrosive, undermine faith, and disrupt myths. Though its name may suggest otherwise, cognitive science is not without its fair share of faith and myth.

My purpose here is not to examine these in any detail, but to point interested readers to sources which may prompt healthy skepticism. A good place to start is with Howard Gardner’s The Mind’s New Science: A History of the Cognitive Revolution. Though it is a bit dated, having been published in 1985, it more than adequately covers both the deep origins of cognitivism in Cartesian-Kantian philosophy and its more recent origins in the 1950s, with Chomsky’s revolt against behaviorism. It also covers the early debates and the subsequent development of artificial intelligence or “AI,” which was originally wedded to cognitivism but has since gone largely separate algorithmic and engineering ways.

For the truly intrepid, I recommend Margaret Boden’s two-volume magnum opus, Mind as Machine: A History of Cognitive Science (2006). Though it is oddly organized and at times idiosyncratic, it covers just about everything. Because the chapters are weirdly named and the index rather sparse, finding precious bits within its 1,708 pages can be daunting. Fortunately, an internet search will lead you to a virtual copy of both volumes, which you can then search with Adobe’s tool for key words, names, or phrases.

Because Gardner and Boden are committed and practicing cognitivists, it may seem strange that their histories engender skepticism. Yet ironically they do. While the cognitivist enterprise identifies as science, situates itself within science, and uses scientific methods, these alone do not secure its status, or authority, as science in the manner of physics, chemistry, or even “messy” biology. The mind, in many discouraging ways, remains a mysterious black box.

While reading conflicting cognitivist accounts of the way the mind supposedly works — “mechanically” and “computationally” — I kept having nagging concerns about whether these literate-symbolic representations of inner-mental representations are scientific metaphors or descriptive analogues. Metaphors do not become scientific simply, or complicatedly, because we can model, mathematize, and chart them. There are similar concerns about whether tests of these models are investigating anything other than the symbols, or terms, which the models presuppose. It is hard to find satisfying or foundational empirical proof in this complex conceptual pudding. Of course many cognitivists eschew such proof because it muddles the models.

So just how does the mind work? Steven Pinker, a true cognitivist believer, thinks he knows, so I re-read his popular classic, How the Mind Works (1997). While skimming over the just-so evolutionary stories he is so fond of telling, I focused on his modularity theses and computational arguments. I could not help but think that minds might work the way he claims, or they might not. We cannot get inside heads to observe the logically elegant unfolding and symbolically impressive inferencing he describes. There is no direct data. We can see all sorts of behavioral outputs, but describing these with plausible models is not the same as explaining them with definitive proofs.

Like most cognitivists, Pinker has been greatly influenced by Noam Chomsky’s work in linguistics and Jerry Fodor’s early work on modularity. These were plausible models, in their day, but Chomsky’s has undergone so many major revisions that no one is really quite sure where he stands, and Fodor has rejected the massive modularity extension of his original proposals. This leaves Pinker, and his version of cognitivism, on rather shaky ground. It also led to Fodor’s rebuke in The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology (2001). Others, such as Kim Sterelny, have critiqued the massively modular-evolutionary model and offered alternative accounts. In Thought in a Hostile World: The Evolution of Human Cognition (2003), Sterelny states his particular case. Like most models, it is plausible though not compelling and certainly not definitive. None of the cognitive models command our acquiescence or obedience by virtue of scientific authority.

Where does this small sampling of sources leave us? Regardless of who is more right or less wrong, the fact that these and many other arguments exist – among the most accomplished scholars in cognitive science – tells us something important about the status of the field. The foundations are far from being settled. This also tells us something important, cautionary to be sure, about the cognitive science of religion.

[Image: a boy entering a circuit-board head]
