Non-Instinctive Language

In terms of intellectual history, cognitive science is largely built on the Chomskyan idea that humans have an evolved language “instinct,” “organ,” or “module.” This faculty idea forms the premise of Steven Pinker’s bellwether book, The Language Instinct (1994), and establishes the foundation from which all manner of cognitive extensions have sprung. When Lawson and McCauley inaugurated the cognitive science of religion with Rethinking Religion: Connecting Cognition and Culture (1990), Chomsky was front and center. Given these origins, it would be a foundational problem if Chomsky were mostly wrong.

Chomsky’s claims have of course undergone considerable revision since he first presented them five decades ago. Some might even say that the revisions have been so considerable that almost nothing is left, and that his current claim bears little or no resemblance to the initial one. But the essential, or essentializing, residue of the initial claim remains in the form of an evolved language instinct and Universal Grammar that is somehow encoded in our genes and manifest in minds. So if this were to go, or be shown to be wrong, what then?

Though he does not answer this particular question or slide down this slippery cognitive slope, linguistics professor Vyvyan Evans argues that Chomsky and Pinker are wrong: there is no language instinct. While I have not yet read Evans’ book, The Language Myth: Why Language is Not an Instinct (2014), I just read his Aeon article on the same topic. It’s a cogent statement of the criticisms that have been leveled against the notion of an evolved, if not encapsulated, language module. Here’s the bold lede:

Imagine you’re a traveller in a strange land. A local approaches you and starts jabbering away in an unfamiliar language. He seems earnest, and is pointing off somewhere. But you can’t decipher the words, no matter how hard you try.

That’s pretty much the position of a young child when she first encounters language. In fact, she would seem to be in an even more challenging position. Not only is her world full of ceaseless gobbledygook; unlike our hypothetical traveller, she isn’t even aware that these people are attempting to communicate. And yet, by the age of four, every cognitively normal child on the planet has been transformed into a linguistic genius: this before formal schooling, before they can ride bicycles, tie their own shoelaces or do rudimentary addition and subtraction. It seems like a miracle. The task of explaining this miracle has been, arguably, the central concern of the scientific study of language for more than 50 years.

In the 1960s, the US linguist and philosopher Noam Chomsky offered what looked like a solution. He argued that children don’t in fact learn their mother tongue – or at least, not right down to the grammatical building blocks (the whole process was far too quick and painless for that). He concluded that they must be born with a rudimentary body of grammatical knowledge – a ‘Universal Grammar’ – written into the human DNA. With this hard-wired predisposition for language, it should be a relatively trivial matter to pick up the superficial differences between, say, English and French. The process works because infants have an instinct for language: a grammatical toolkit that works on all languages the world over.

At a stroke, this device removes the pain of learning one’s mother tongue, and explains how a child can pick up a native language in such a short time. It’s brilliant. Chomsky’s idea dominated the science of language for four decades. And yet it turns out to be a myth. A welter of new evidence has emerged over the past few years, demonstrating that Chomsky is plain wrong.

While criticism of Chomsky is nothing new, this kind of full-frontal assault is. Because it’s so contrarian and counter to received wisdom, I’m guessing many will be tempted to dismiss it without delving deeper or reading Evans’ book. This would be a mistake, as the article is only a sketch. I will say, however, that some of the strokes are pointed, if not compelling.



Cognitive Maps & Brain Territories

Apropos of my last post on the status of cognitive science, or the state of an emerging art, two recent articles address the issue from different disciplinary perspectives. The first, by psychologist Gary Marcus, biophysicist Adam Marblestone, and neuroscientist Jeremy Freeman, discusses the problems surrounding big-money, big-data brain mapping projects that are being touted as the next big thing in science. While the authors laud these projects, they are cautious about results:

But once we have all the data we can envision, there is still a major problem: How do we interpret it? A mere catalog of data is not the same as an understanding of how and why a system works.

When we do know that some set of neurons is typically involved in some task, we can’t safely conclude that those neurons are either necessary or sufficient; the brain often has many routes to solving any one problem. The fairy tales about brain localization (in which individual chunks of brain tissue correspond directly to abstract functions like language and vision) that are taught in freshman psychology fail to capture how dynamic the actual brain is in action.

One lesson is that neural data can’t be analyzed in a vacuum. Experimentalists need to work closely with data analysts and theorists to understand what can and should be asked, and how to ask it. A second lesson is that delineating the biological basis of behavior will require a rich understanding of behavior itself. A third is that understanding the nervous system cannot be achieved by a mere catalog of correlations. Big data alone aren’t enough.

Across all of these challenges, the important missing ingredient is theory. Science is about formulating and testing hypotheses, but nobody yet has a plausible, fully articulated hypothesis about how most brain functions occur, or how the interplay of those functions yields our minds and personalities.

Theory can, of course, take many forms. To a theoretical physicist, theory might look like elegant mathematical equations that quantitatively predict the behavior of a system. To a computer scientist, theory might mean the construction of classes of algorithms that behave in ways similar to how the brain processes information. Cognitive scientists have theories of the brain that are formulated in other ways, such as the ACT-R framework invented by the cognitive scientist John Anderson, in which cognition is modeled as a series of “production rules” that use our memories to generate our physical and mental actions.

The challenge for neuroscience is to try to square high-level theories of behavior and cognition with the detailed biology and biophysics of the brain.

This challenge is so significant, and so difficult, that many cognitive scientists have bracketed it, or set it aside, as too complex. For reasons of tractability, they construct cognitive models — and test them — without any reference to the actual brain. While this may be acceptable for a science in its relative infancy, it constitutes a bridging problem that cannot forever be ignored, or simplistically set aside as insoluble. Because neuroscientists are making impressive advances and approaching cognitive science from a biological direction, the two disciplines will eventually meet. On whose terms, or on what theories, is yet to be decided.
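The ACT-R “production rules” mentioned in the quoted passage can be made concrete with a toy sketch. To be clear, this is a generic production system written for illustration, not the actual ACT-R architecture; the rule format, memory structure, and counting example are all invented here:

```python
# Minimal production-system sketch in the spirit of ACT-R (hypothetical,
# not the real ACT-R implementation): rules are (condition, action) pairs
# that match against working memory and, when a condition holds, modify it.
def run_production_system(memory, rules, max_cycles=100):
    """Fire the first matching rule each cycle until none match (quiescence)."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(memory):
                action(memory)
                break
        else:
            break  # no rule matched this cycle: stop
    return memory

# Toy example: a single "decrement" rule counts down while tracking steps.
rules = [
    (lambda m: m["count"] > 0,
     lambda m: m.update(count=m["count"] - 1, steps=m["steps"] + 1)),
]
final = run_production_system({"count": 3, "steps": 0}, rules)
print(final)  # {'count': 0, 'steps': 3}
```

The point of the sketch is structural: cognition, on this kind of model, is a cycle of pattern-matching against memory and rule-firing, which is exactly the sort of high-level account that still needs squaring with the brain’s biophysics.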

In the second, computer scientist Jaron Lanier discusses the myth of artificial intelligence and the “religion” built around the speculative hypothesis, or fear, of the singularity. Ironically, the tech futurists who get mystical about these issues are, in other aspects of their lives, devoted to applied technology that works and, of course, makes money. Lanier, mindful that AI and cognitive science are cognate disciplines which, for all their impressive achievements, are not close to creating sentient machines or explaining human minds, is skeptical:

There’s a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists. Just to be absolutely clear, we don’t know how most kinds of thoughts are represented in the brain. We’re starting to understand a little bit about some narrow things. That doesn’t mean we never will, but we have to be honest about what we understand in the present.

This is something I’ve called, in the past, “premature mystery reduction,” and it’s a reflection of poor scientific mental discipline. You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you’re a lesser scientist. I don’t see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things. A great example is the Human Brain Project in Europe, which is a lot of public money going into science that’s very influenced by this point of view, and it has upset some in the neuroscience community for precisely the reason I described.

There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.

This is, or should be, good news. It’s good not because Elon Musk is probably wrong about the existential threats posed by AI, but because acknowledging ignorance lets us ask the right kinds of questions. Answers will come in due course, but our measures should be decades, if not centuries. In the meantime, we should continually remind ourselves that maps are not territories.



Genealogizing Cognitive Science

While preparing to write a chapter on the cognitive science of religion, I thought it would be a good idea to investigate the foundations of cognitive science before getting to the “religion” offshoot of it. My main concern was that the words “cognitive” and “science” cast a talismanic spell: when they ritually appear together, it is easy to assume that what follows is authoritative and firmly grounded in theory, method, and data. One of the best ways to conduct such an investigation, and test assumptions about authority, is to read histories of the field. Intellectual histories, which might also be called genealogies, examine the origins of an idea, or discipline, and trace its development over time. The best genealogies expose assumptions, examine conflicts, and raise doubts. They can be corrosive, undermine faith, and disrupt myths. Though its name may suggest otherwise, cognitive science is not without its fair share of faith and myth.

My purpose here is not to examine these in any detail, but to point interested readers to sources that may prompt healthy skepticism. A good place to start is Howard Gardner’s The Mind’s New Science: A History of the Cognitive Revolution. Though it is a bit dated, having been published in 1985, it more than adequately covers the deep origins of cognitivism in Cartesian-Kantian philosophy, and its more recent origins in the 1950s with Chomsky’s revolt against behaviorism. It also covers the early debates and subsequent development of artificial intelligence, or “AI,” which was originally wedded to cognitivism but has since gone mostly its own way, in algorithmic and engineering directions.

For the truly intrepid, I recommend Margaret Boden’s two-volume magnum opus, Mind as Machine: A History of Cognitive Science (2006). Though it is oddly organized and at times idiosyncratic, it covers just about everything. Because the chapters are weirdly named and the index rather sparse, finding precious bits within its 1,708 pages can be daunting. Fortunately, an internet search will lead you to a digital copy of both volumes, which you can then search electronically for key words, names, or phrases.

Because Gardner and Boden are committed and practicing cognitivists, it may seem strange that their histories engender skepticism. Yet ironically they do. While the cognitivist enterprise identifies as science, situates itself within science, and uses scientific methods, these alone do not secure its status, or authority, as science in the manner of physics, chemistry, or even “messy” biology. The mind, in many discouraging ways, remains a mysterious black box.

Reading conflicting cognitivist accounts of the way the mind supposedly works — “mechanically” and “computationally” — raises nagging concerns about whether these literate-symbolic representations of inner-mental representations are scientific metaphors or descriptive analogues. Metaphors do not become scientific simply, or complicatedly, because we can model, mathematize, and chart them. There are also nagging concerns about whether tests of these models are investigating anything other than the symbols, or terms, which the models presuppose. It is hard to find satisfying or foundational empirical proof in this complex conceptual pudding. Of course many cognitivists eschew such proof because it muddles the models.

So just how does the mind work? Steven Pinker, a true cognitivist believer, thinks he knows, so I re-read his popular classic, How the Mind Works (1997). While skimming over the just-so evolutionary stories he is so fond of telling, I focused on his modularity theses and computational arguments. I could not help but think that minds might work the way he claims, or they might not. We cannot get inside heads to observe the logically elegant unfolding and symbolically impressive inferencing he describes. There is no direct data. We can see all sorts of behavioral outputs, but describing these with plausible models is not the same as explaining them with definitive proofs.

Like most cognitivists, Pinker has been greatly influenced by Noam Chomsky’s work in linguistics and Jerry Fodor’s early work on modularity. These were plausible models, in their day, but Chomsky’s has undergone so many major revisions that no one is really quite sure where he stands, and Fodor has rejected the massive modularity extension of his original proposals. This leaves Pinker, and his version of cognitivism, on rather shaky ground. It also led to Fodor’s rebuke in The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology (2001). Others, such as Kim Sterelny, have critiqued the massively modular-evolutionary model and offered alternative accounts. In Thought in a Hostile World: The Evolution of Human Cognition (2003), Sterelny states his particular case. Like most models, it is plausible though not compelling and certainly not definitive. None of the cognitive models command our acquiescence or obedience by virtue of scientific authority.

Where does this small sampling of sources leave us? Regardless of who is more right or less wrong, the fact that these and many other arguments exist – among the most accomplished scholars in cognitive science – tells us something important about the status of the field. The foundations are far from being settled. This also tells us something important, cautionary to be sure, about the cognitive science of religion.



Ancestral Trees & Cultural Forests

Note: This is a guest post by John Balch, a graduate student in the Religion Department at the University of Florida and one of my former students. John is now studying under the supervision of Professor Bron Taylor, author of Dark Green Religion: Nature Spirituality and the Planetary Future.


One of the central hypotheses of the cognitive science of religion is that anthropomorphism is a “general, spontaneous, and unconscious interpretive tendency” (Guthrie 1993:37) of the human mind to project agency onto inanimate matter, and this process is one of the psychological roots for the evolution of religious belief (Boyer 2001; Barrett 2004). This theory has a long history in the field of religious studies, and its contemporary form essentially recasts an idea found in the work of writers like David Hume, Charles Darwin, and Edward Burnett Tylor into an explicitly cognitive framework. In spite of many discontinuities between these two groups of writers, they share one important assumption; namely, that animism is principally a misfire or mistake of the human brain, a spandrel or by-product of our cognitive architecture that can be corrected by rationality and empiricism.

If this is the case, a question arises: What are we to make of anthropomorphic acts that are specific and “planned and performed consciously,” like those David Haberman (2013:25) describes in his ethnography of tree worship in northern India? These strategic anthropomorphic practices, which include the molding of facemasks onto trees, are undertaken in order “to better relate to them” and establish continuity between human and non-human worlds.

Nested within the theoretical framework of the anthropomorphic theory is the assumption that the world clearly divides into separate realms of “Nature” and “Culture.” By this assumption, the alleged “mistake” of animism is its ascription of agency, motivation, and mentality (which are placed on the side of Culture) onto a world determined by biological and physical processes (which are placed on the side of Nature). This partition, which has become endemic within modern society, has a long pedigree in Western history. It also has far-reaching implications for the relationship between colonialism, indigenous peoples, and contested environments.

When European travelers and colonists first explored the continents that they dubbed “The New World,” they were stunned by what seemed to be a “hideous and desolate wilderness, full of wild beasts and men” (Nash 2001:9). As the colonization of the Americas continued, this antagonistic view of nature would be complemented by a growing appreciation for the wildernesses of the continents, exemplified in the writings of American authors like Ralph Waldo Emerson, Henry David Thoreau, and John Muir.

What these two perspectives held in common, however, was the perception that these places and landscapes were untamed and undomesticated, entropic blind spots of the Earth from the transformational spread of humanity. As a result, the inhabitants of these areas were either lauded for their inherent conservationism (the Noble Savage motif) or denigrated for their supposed inferiority at utilizing the resources of the landscape (the Primitive Savage motif). Regardless of whether Native Americans were considered to have revered it or were simply unable to tame it, Nature, in the European mind, remained “pristine” on the other side of the boundary demarcating it from the activities of European culture.

In spite of the distance we may now feel from these attitudes, this paradigm can still be seen in many standard models of conservation, which create “wilderness” preserves in order to diminish or eradicate anthropic effects within those biotic regions. In addition to keeping European tourists at bay, the establishment of these areas often involved the deracination of native peoples, and conflicts over indigenous rights to natural resources have continued unabated as conservation efforts have spread in Africa and South America (Spence 1999). Particularly common in this literature is the idea that certain areas of the planet are “pristine” or “primeval,” and should therefore be preserved in order to maintain these sites as refuges of biological diversity. They are set aside, supposedly “untouched,” to offset the destruction of the rest of the natural world by the activities of technologically complex human society.

Of course, indigenous societies were neither unable to alter their environment nor inherently conservationist. Instead, like other human groups, many indigenous societies extensively modified “Nature.” This point is perhaps most spectacularly demonstrated in the work of archaeologists, anthropologists, and geographers on the Amazon, who have concluded that the rainforest underwent extreme modifications, leading William Balée to conclude that at least 11.8% of the Amazonian rainforest landscape exists as the result of human activity (Balée 2013:3). Finding the term “anthropogenic” too limited for a proper understanding of the dynamic of these transformations across time (especially following the decimation of the native populations after the introduction of European diseases), Balée coined the term “Cultural Forest” to describe the way in which the Amazon is not only a “rich realm of nature,” but a rich realm of culture as well (Balée 2013:2).

By demonstrating the inadequacy of the Nature/Culture paradigm to analyze ecosystem management in the Amazon, Balée and other Historical Ecologists provide the grounds for the healthy criticism of naturalistic viewpoints without slipping into the vacuous labyrinths of post-modern and post-colonial discourse. In a similar vein, Philippe Descola has argued that a dualistic approach is inappropriate in the interpretation of contemporary ethnography, stating that many Amazonian groups “regard themselves, not as social collectives managing their relations with the ecosystem, but rather as simple components of a vaster whole within which no real discrimination is really established between humans and nonhumans” (Descola 2013:21).

Far from being limited to the New World, this paradigm comes into play again in the discussion surrounding “sacred” groves and forest islands in the African savanna. Specifically, Michael Sheridan claims that the stereotypical view of sacred groves as “examples par excellence of ahistorical cultural and ecological equilibria” is being supplanted by a perspective that views these groves as “sites where ecological, social, and political symbolic dynamics intersect” (2008:10). Phrased differently, “tropical forests are not simply relics of primeval forests, and contemporary African religions are not simply relics of pre-colonial ideas and practices” (Sheridan 2008:13).

This line of thought is powerfully supported by James Fairhead and Melissa Leach (1996) in their landmark book, Misreading the African Landscape. They found that a significant portion of forest islands in Western Africa were not “primeval relics” of a disappearing forest landscape, but actually the result of careful nurturance and management by the indigenous inhabitants. These ecosystemic practices intersect with the cosmology and religion of natives. Fairhead and Leach draw attention to the myths of “founding trees,” or the first tree that was planted by the first patrilineal ancestor in a forest island, which symbolizes for his descendants their claim over that land. In addition to being a powerful status symbol within the human world, stories around these trees “recall(s) the establishment of a relationship — almost a contract — with the area’s land spirits: a relationship maintained ritually by a founder’s descendants to ensure a place both for human settlement and reproduction” (1996:89).

Importantly, the role of this tree in solidifying and symbolizing the history of the relationship between this culture and its environment cannot neatly be explained by a theory of spontaneous anthropomorphism, and the influence of the tree’s religious significance on the natural and biological processes of its ecosystem challenges any clear division of causation between cultural and natural realms of activity. Rather than reinforcing the barriers between Culture and Nature, ascriptive animists actively and consciously form relationships with dynamic non-human partners. Rather than seeing “Nature” as an undifferentiated mass, animists establish networks of engagements that find their expression in cosmologies, rituals, and experiences of non-human personhood.

While great emphasis has been placed on the spontaneous anthropomorphic tendencies of the human brain, we should more seriously consider the ways in which animists actively foster and bolster these perceptions, often with adaptive ecological effects. While the perception of “spirits” in the non-human world could be a brain-based misfire, or spandrel of human consciousness, these might also be the product of the natural human tendency to enter into intensely emotional and personal relationships with the non-human world.


Works Cited

Balée, William L. 2013. Cultural Forests of the Amazon: A Historical Ecology of People and Their Landscapes. Tuscaloosa: University of Alabama Press.
Barrett, Justin L. 2004. Why Would Anyone Believe in God? Walnut Creek, CA: AltaMira Press.
Boyer, Pascal. 2001. Religion Explained: The Evolutionary Origins of Religious Thought. New York: Basic Books.
Brightman, Marc, Vanessa Elisa Grotti, and Olga Ulturgasheva (eds). 2014. Animism in Rainforest and Tundra: Personhood, Animals, Plants and Things in Contemporary Amazonia and Siberia. Oxford: Berghahn Books.
Descola, Philippe. 2013. Beyond Nature and Culture. Trans. by Janet Lloyd. Chicago, IL: The University of Chicago Press.
Fairhead, James, and Melissa Leach. 1996. Misreading the African Landscape: Society and Ecology in a Forest-Savanna Mosaic. Cambridge: Cambridge University Press.
Guthrie, Stewart. 1993. Faces in the Clouds: A New Theory of Religion. New York: Oxford University Press.
Haberman, David L. 2013. People Trees: Worship of Trees in Northern India. New York: Oxford University Press.
Nash, Roderick Frazier. 2001. Wilderness and the American Mind, 4th ed. New Haven, CT: Yale University Press.
Sheridan, Michael J. 2008. “The Dynamics of African Sacred Groves: Ecological, Social, and Symbolic Processes.” In African Sacred Groves: Ecological Dynamics and Social Change, edited by Celia Nyamweru and Michael Sheridan, 9–41. Athens, OH: Ohio University Press.
Spence, Mark David. 1999. Dispossessing the Wilderness: Indian Removal and the Making of the National Parks. New York: Oxford University Press.


Insects Shall Inherit the Earth

A massive new study in Science has resolved the timing and pattern of insect evolution. Insects originated ~479 million years ago (mya), some began flying ~406 mya, the major extant lineages appeared ~345 mya, and metamorphic insects appeared ~140 mya. This fossil- and gene-based phylogeny significantly advances our understanding of the tree of life, which is better conceived as the ball of life. When considering this ball, it is sobering to realize that microbes are the most speciose and certainly the most important: Metazoa, or Animalia, are entirely dependent on microbes. If microbes were to go extinct, all other life forms would follow in short order.

It is equally sobering to realize that there are 1,659,420 known species in the animal kingdom, and the most successful or diverse phylum is Arthropoda, which includes insects, arachnids, and crustaceans. A recent taxonomic survey indicates there are 1,302,809 species of arthropods, or about 78.5% of the total. Mollusca is the second largest phylum, with 118,061 species. Craniata, which includes vertebrates, consists of 85,432 species (including 19,974 fossil species), or about 5 percent of the total. Among these, there are 35,644 species of fishes, 7,171 species of amphibians, 15,507 species of reptiles, 11,087 species of birds, and 16,014 species of mammals. The latter, to which most of us default when we think or say “animals,” represents 1 percent of the total. Within this one percent, there are ~300 primate species, a number so small that it looks like this: .00018 of the total. Among primates, there is only a single species of extant hominin: Homo sapiens.
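The shares quoted above are easy to recompute from the raw counts. The following is a quick back-of-the-envelope sketch using only the figures cited in this paragraph (the taxon names and numbers are simply those given in the text):

```python
# Recompute the taxonomic shares from the species counts quoted above.
total_animals = 1_659_420  # known animal species

counts = {
    "Arthropoda": 1_302_809,
    "Mollusca": 118_061,
    "Craniata": 85_432,
    "Mammalia": 16_014,
    "Primates": 300,  # approximate number of extant primate species
}

for taxon, n in counts.items():
    share = n / total_animals
    print(f"{taxon}: {share:.5f} of all known animal species")
```

Run this and arthropods come out to roughly 78.5 percent, mammals to just under 1 percent, and primates to roughly 0.00018 of the total, which is the sense in which our own branch of the ball of life is a rounding error.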

Because this species is the only one which studies and writes about evolution, our histories tend to be progressive. As we usually tell the story, the evolution of life has been largely about trends towards complexity and intelligence: the evolution of life is presented as a glorious unfolding toward us, and perhaps our favorite fuzzy mammalian relatives. This is of course solipsistic and unwarranted. If microbes, insects, fishes and that often forgotten phylogenetic group of organisms we call “plants” could write evolutionary history, the stories would be much different.

Life originated on earth in non-complex or simple forms and, after over 3 billion years of evolution, the vast majority of life remains non-complex and relatively simple. When all organisms are considered and not just a small number of “advanced” or complex outliers, evolution does not present as progressive. Seen from this larger perspective, there have not been any generalized trends.

Long before we were here, simple or “primitive” forms dominated earth. Now that we are here, they still dominate it, even if our primate-vision prejudices prevent us from seeing it or them. Long after we have finished our epoch-making domination, or when the Anthropocene has run its sloughing course, the new epoch will get a more traditional name. “Insectocene” seems as good, and as likely, as any. These simple forms have always been dominant on earth, so they cannot really inherit what has always been theirs, but they will certainly inherit an earth substantially different from the one they have known for the past 12,000 years. But they will adapt and diversify, just as they always have, through the earth’s many epochs, cycles, and perturbations. From their perspective, future prospects might even look like progress.



Gutting on God

Over the past ten months the New York Times philosophy blog, “The Stone,” hosted an interview series on religion. It was conducted by Notre Dame philosophy professor Gary Gutting, whose parochial interests are such that most of the questions were narrowly God-centric. Throughout the series Gutting seemed flummoxed by the fact that the philosophers he interviewed were not much interested in metaphysical arguments for the existence of God, and were not particularly concerned about the rationality or logic of such arguments. These are of course major concerns among a tiny subset of philosophers or theologians, such as those found at Notre Dame and the Vatican.

In his penultimate interview with Princeton philosopher Daniel Garber, Gutting posed the scholastic kind of question that has been the dreary hallmark of the series. Garber’s answer, while not quite dismissive, is deflationary:

G.G.: So are you saying that the philosophical books are closed on the traditional theistic arguments? Have atheistic philosophers decisively shown that the arguments fail, or have they merely ceased thinking seriously about them?

D.G.: Certainly there are serious philosophers who would deny that the arguments for the existence of God have been decisively refuted. But even so, my impression is that proofs for the existence of God have ceased to be a matter of serious discussion outside of the domain of professional philosophy of religion. And even there, my sense is that the discussions are largely a matter of academic interest: The real passion has gone out of the question.

This would have been a fitting conclusion to the series had not Gutting wrapped up the whole, with a thirteenth installment, by interviewing himself. It is interesting primarily as a psychological exhibit: when one wants to believe in God, or feels the need for a false binary (theist-atheist) position on God, all manner of intellectual gymnastics and normative conclusions are bound to follow.

Perhaps the best that can be said of all this, and of Gutting’s interview series, is that belief in God can be considered “rational.” But when making this claim, it is important to remember that “rationality” is a historically situated, philosophically technical, and ideologically loaded concept developed over the last four centuries by (mostly Christian) philosophers in the West. For nearly everyone else, which is to say the 99.9 percent of people who have ever lived or are now living, these arguments and considerations simply are not relevant, however “rational” they may be.


Myrmecology & Theology

When the world’s leading myrmecologist writes about ants, evolution, and ecology, the results are fine indeed. But when E.O. Wilson opines on matters beyond his expansive scientific expertise, the results are usually less enlightening. Over at National Geographic, where Wilson talks about his new book, we have an example of the latter. In that book, The Meaning of Human Existence, Wilson attempts to answer a question which arose and became pressing only in those places significantly impacted by the Enlightenment, the Industrial Revolution, and consumer capitalism. Because existential meaning often entails cosmological considerations, Wilson feels compelled to “explain” religion:

You say that we were created not by a supernatural intelligence but by chance and necessity. This puts you at odds with most of the world’s religions. Why are they wrong?

They’re very wrong. And it’s urgently the time to enter into frank discussion over why they’re wrong. But we don’t generally allow it to be discussed, because too many people would be offended. Let me make this point, though. There’s already a neurobiology of religion and religious belief in the scientific realm. What are the genetics and evolutionary origins of religion, and exactly why is it a certain form?

I think when we get deep enough, we’re going to see that humanity shares a predilection for certain big questions accompanied by deep emotional responses, which are biological in origin. I would call them theological, or transcendent, concerns common to human beings everywhere. Is there a supreme being who created us and guides us in some manner? Will we have an afterlife? These are the big questions.

But there’s also the creation myth. And where I would call the transcendent forms of religion authentic and typical of human beings, I would call the individual beliefs, or faith, as coming from an entirely different origin. The faith of organized religions, hundreds of them, consist substantially of the creation myth that they champion.

And without exception, they’re convinced that the creation myth and supernatural stories of their faith are superior to all others, no matter how gentle, no matter how generous or caring a particular faith is. It is the holder of the truth.

Why is this the case? Because people have ingrained in them, genetically, a tendency to believe stories that unite their group, define their group, and allow them to flourish within the power sphere of that group. And this is the simple, straightforward origin of religious faith.

This brings to mind H.L. Mencken’s sage observation: “For every complex problem there is an answer that is clear, simple, and wrong.” Wilson’s simple origins story reflects his belief in group-level selection and the cultural evolutionist idea that religions are adaptations which enable the formation, cohesion, and legitimation of large-scale societies. This could be correct, though the argument is controversial and far from settled. But even if this gene-culture evolutionary explanation is correct, it’s only part of the answer.

When it comes to modern forms of “religion,” or those which humans have developed over the past 2,500 years, straightforward monocausal answers won’t suffice. So why is Wilson telling a simplistic origins story? I suspect it’s because his primary model for “religion” derives from monotheistic traditions, which are exclusivist. The giveaway here is Wilson’s persistent use of the terms “truth” and “faith,” concepts that derive in particular from Abrahamic religions. This probably also accounts for Wilson’s sense that organized religion tends toward tribalism and intolerance. While this has historically been true of monotheistic traditions, we need only think of polytheistic Greece and Rome to know that it is not true of others. While the Greeks and Romans had many reasons for their conquest and enslavement of others, they did not go to war because the gods demanded it or faith required it. Profane decisions, not sacred duties, drove this competitive process.
