I had an interesting experience at a conference I attended last week. The topic of the conference was collective computation: how swarms of robots, say, solve complicated tasks. Social insects do this too, and I was invited to talk about that. I argued that the brain-as-computer metaphor might be missing something important about how social assemblages solve problems. Maybe there’s intentionality? Purpose? Desire? None of these is easily ascribable to computational machines. It was one of those marvelous small conferences: a small, diverse group of interesting attendees, with plenty of time for deep discussions over meals or in a pub.
One of those conversations underscored for me an argument I made in Purpose and Desire, namely that modern biology (evolutionary and other) is facing a crisis because it has trapped itself so deeply in a philosophical bubble that it has become alienated from the very phenomenon it purports to study. Students of life, in other words, do not really study life itself; they study a philosophical preconception of what life should be. Currently that preconception is the metaphor of life as a machine, a very complicated mechanism of matter in motion and nothing more.
Among other things, consistent adherence to that idea has left biology bereft of the ability to distinguish life from non-life. Drawing such a distinction would seem to be the basic prerequisite for defining what life is and, just as important, what life is not. A purely mechanistic philosophy of life therefore leaves one unable to draw any meaningful distinction between, say, a cauliflower and a cumulus cloud. It also allows glaring incoherencies to inhabit one’s mind.
This issue came up in one of our after-dinner conversations. I was making the argument against the brain-as-computer metaphor. One of my conversation partners was a very accomplished neuroscientist who was defending the metaphor. Eventually, we worked our way toward the question of free will. I was making the point that free will is impossible under the brain-as-computer metaphor. Even very sophisticated computational systems, such as neural networks, Darwinian algorithms, and deep-learning networks, all mainstays of current trends in artificial intelligence and artificial life, remained human-constructed machines, motivated at some point by choices (freely made, I was arguing) of their creators. But there was nothing in the machine (the algorithm) per se that expressed free will.
My interlocutor didn’t buy it. He was totally committed to the notion of the brain-as-computer. His position was that, no, free will, even in ourselves, is an illusion, something our machine brain has tricked us into believing.
I give him enormous credit for intellectual honesty and consistency, but really?
Here was someone highly intelligent, very accomplished, a distinguished neuroscientist who had built a very successful career for himself. He presumably had arrived at that point through a lifelong series of choices, deliberately thought out, contemplated, and executed. What must it take to believe that none of that was free will? That it was all the work of an unseen puppet master lurking in the meatware inside his skull, a demon in control of his course through life?
I can only stare in wonder …