An open access publication of the American Academy of Arts & Sciences
Fall 2004

Philosophy-envy

Richard McKay Rorty

Richard Rorty, a Fellow of the American Academy since 1983, is professor of comparative literature and philosophy at Stanford University, as well as a regular contributor to “The Nation” and “Dissent.” His books include “Philosophy and the Mirror of Nature” (1979), “Contingency, Irony, and Solidarity” (1989), and, most recently, “Philosophy and Social Hope” (1999).

When philosophers like Ortega y Gasset say that we humans have a history rather than a nature, they are not suggesting that we are blank slates. They do not doubt that biologists will eventually pin down the genetic factor in autism, homosexuality, perfect pitch, lightning calculation, and many other traits and abilities that differentiate some humans from others. Nor do they doubt that, back in the days when our species was evolving its way into existence on the African savannas, certain genes were weeded out and others preserved. They can cheerfully agree with scientists like Steven Pinker that the latter genes account for various sorts of behavior common to all human beings, regardless of acculturation.

What these philosophers doubt is that either factoring out the role of genes in making us different from one another, or tracing what we have in common back to the evolutionary needs of our ancestors, will give us anything appropriately labeled ‘a theory of human nature.’ For such theories are supposed to be normative – to provide guidance. They should tell us what to do with ourselves. They should explain why some lives are better for human beings than other lives, and why some societies are superior to others. A theory of human nature should tell us what sort of people we ought to become.

Philosophical and religious theories of human nature flourished because they stayed clear of empirical details. They took no chances of being disconfirmed by events. Plato’s and Aristotle’s theories about the parts of the soul were of this sort, and so were Christianity’s theory that we are all children of a loving God, Kant’s theory that we are phenomenal creatures under noumenal command, and Hobbes’s and Freud’s naturalizing stories about the origins of sociality and of morality. Despite their lack of predictive power and empirical disconfirmability, such theories were very useful – not because they were accurate accounts of what human beings, deep down, really and truly are, but because they suggested perils to avoid and ideals to serve. They marketed helpful moral and political advice in fancy, disposable packaging.

Steven Pinker is trying to recycle this packaging, wrapping it around a miscellany of empirical facts rather than around a vision of the good life or of the good society. But it is hard to see how a composite, or a synthesis, of the various empirical disciplines that now call themselves cognitive sciences could serve the purposes that religion and philosophy once served. The claim that what the philosophers did a priori and badly can now be done a posteriori and well by cognitive scientists will remain empty rhetoric until its adherents are willing to stick their necks out. To make good on the promise of the term ‘a scientific theory of human nature’ they would have to start offering advice about how we might become, individually or collectively, better people. Then they would have to spell out the inferences that had led them from particular empirical discoveries about our genes or our brains to these particular practical recommendations.

E. O. Wilson, Pinker, and others who think that biology and cognitive science can take over at least part of the cultural role of philosophy are reluctant to start down this path. They remember the fate of the eugenics movement – of claims to have ‘proved scientifically’ that interracial marriage, or increased immigration, would produce cultural degeneration. Recalling this obnoxious predecessor makes them leery of betting the prestige of their disciplines on the outcome of practical recommendations. Instead, they just repeat over and over again that as we learn more and more about our genes and our brains, we shall gain a better understanding of what we essentially are.

But for historicist philosophers like Ortega there is nothing we essentially are. There are many lessons to be learned from history, but no super-lesson to be learned from science, or religion, or philosophy. The unfortunate idea that philosophy could detect the difference between nature and convention – between what is essential to being a human being and what is merely a product of historical circumstance – was passed on from Greek philosophy to the Enlightenment. There it reappeared, in a version that would have disgusted Plato, in Rousseau. But in the last two centuries the notion that beneath all cultural overlays there lurks something called human nature, and that knowledge of this thing will provide valuable moral or political guidance, has fallen into deserved disrepute.

Dewey was right to mock Plato’s and Aristotle’s claims that the contemplative life was the one that best utilized our distinctively human abilities. Such claims, he said, were merely ways in which these philosophers patted themselves on the back. Ever since Herder, the Rousseauvian claim that the aim of sociopolitical change should be to bring us back to uncorrupted nature has been rejected by thinkers impressed by the extent, and the value, of cultural variation. The idea, shared by Plato and Rousseau, that there is such a thing as the good life for man has gradually been replaced by the conviction that there are many equally valuable human lives. This change has resulted in our present conviction that the best sociopolitical setup is one in which individuals are free to live whichever of these lives they choose – to make themselves up as they go along, without asking what they were somehow ‘meant’ to become. It has also resulted in religion and philosophy being nudged aside by history, literature, and the arts as sources of edification and of ideals.

Carl Degler’s In Search of Human Nature: The Decline and Revival of Darwinism in American Social Thought tells the story of the biologists’ attempts to move onto some of the turf from which the philosophers have been withdrawing. Darwinism revealed previously unsuspected continuities between humans and brutes, and these made it seem plausible that further biological research could tell us something morally significant. In a chapter called “Why Did Culture Triumph?” Degler explains how the overweening pretensions of the eugenicists, and the futile attempt to stem the tide of feminism by appeals to biological facts about the differing ‘natures’ of men and women, helped to discredit this suggestion. Then, in a chapter called “Biology Redivivus,” he describes how sociobiologists and their allies have been trying to push the pendulum back in the other direction.

Degler ends his book on an ecumenical note, endorsing what Pinker calls holistic interactionism. But many of his readers will conclude that the moral of the story he tells is that “nature or nurture?” was never a very good question. Darwin did make a tremendous difference to the way we think about ourselves, because he discredited religious and philosophical accounts of a gap between the truly human and immaterial part of us and the merely animal and material part. But nothing Darwin taught us blurs the distinction between what we can learn from the results of biological and psychological experiments and what we can only learn from history – the record of past intellectual and social experiments.

Pinker is right that the nature vs. nurture debate will not go away as long as the question is raised in respect to some very particular type of human behavior – autism, for example. But at more abstract levels, such debates are vacuous. They are rhetorical exchanges occasioned by academic turf wars. The question “Is our humanity a biological or a cultural matter?” is as sterile as “Are our actions determined or do we have free will?” No concrete result in genetics, or physics, or any other empirical discipline will help us answer either bad question. We will go right on deliberating about what to do, and holding each other responsible for actions, even if we become convinced that every thought we have, and every move we make, will have been predicted by an omniscient neurologist. We will go right on experimenting with new lifestyles, new ideas, and new social institutions, even if we become convinced that, deep down, everything somehow depends on our genetic makeup. Discussion of the nature-nurture question, like discussion of the problem of free will, has no pragmatic import.

Pinker says, correctly, that there is a “widespread desire that the whole [nature-nurture] issue would somehow just go away” and an equally widespread suspicion that to refute a belief in the blank slate is “to tip over a straw man.” Readers of Degler will be disposed to share both that desire and that suspicion. Pinker hopes to change their minds by tipping over other straw men: “postmodernism and social constructionism, which dominate many of the humanities.” But it is hard to think of any humanist – even the most far-out Foucauldian – who would endorse the view, implausibly attributed by Pinker to Louis Menand, that “biology can provide no insight into human mind and behavior.” What Foucault, Menand, and Ortega doubt is that insights provided by biology will ever help us decide which individual and social ideals to strive for.

Pinker thinks that science may succeed where philosophy has failed. To make his case, however, he has to treat platitudes as gee-whiz scientific discoveries. He says, for example, that “cognitive science has shown that there must be complex innate mechanisms for learning and culture to be possible.” Who ever doubted there were? We already knew, before cognitive science came along, that you cannot teach young nonhuman animals to do things that you can teach young humans to do. We figured out a long time ago that if an organism had one kind of brain we could teach it to talk, and that if it had another kind we could not. Yet Pinker writes as if people like Menand were committed to denying evident facts such as these.

Again, Pinker cites recent suggestions that the circle of organisms that are objects of our moral concern “may be expanded to include people to whom one is bound by networks of reciprocal trade and interdependence, and . . . contracted to exclude people who are seen in degrading circumstances.” But we did not need recent scientific research to tell us about these “possible levers for humane social change.” The relevance of interdependence to the way we treat foreign traders, and of degradation to the way we treat prisoners of war, is hardly news. People have been recommending trade and intermarriage as a way of achieving wider community for a long time now. For an equally long time, they have been suggesting that we stop degrading people in order to have an excuse for oppressing them. But Pinker describes facts familiar to Homer and Herodotus as exhibiting “nonobvious aspects of human nature.”

It is likely that further discoveries about how our brains work will give us a lot of useful ideas about how to change human behavior. But suppose that nanotechnology eventually enables us to trace the transmission of electrical charges from axon to axon within the living brain, and to correlate such processes with minute variations in behavior. Suppose that we become able to modify a person’s behavioral dispositions, in pretty much any way we like, just by tweaking her brain cells. How will this ability help us figure out what sort of behavior to encourage and what sort to discourage – to know how human beings should live? Yet that sort of help is just what philosophical theories of human nature claimed to provide.

Pinker says at various places in The Blank Slate that everybody has and needs a theory of human nature, and that empirical scientific inquiry is likely to give us a better theory than either uninformed common sense or a priori philosophizing. But it is not clear that we have or need anything of the sort. Every human being has convictions about what matters more and what matters less, and thus about what counts as a good human life. But such convictions need not – and should not – take the form of a theory of human nature, or a theory of anything else. Our convictions about what really matters are constantly modified by new experiences – moving from a village to a city or from one country to another, meeting new people, and reading new books. The idea that we deduce them, or should deduce them, from a theory is a Platonist fantasy that the West has gradually outgrown.

The books that change our moral and political convictions include sacred scriptures, philosophical treatises, intellectual and sociopolitical histories, epic poems, novels, political manifestoes, and writings of many other sorts. But scientific treatises have become increasingly irrelevant to this process of change. This is because, ever since Galileo, natural science has won its autonomy and its richly deserved prestige by telling us how things work, rather than, as Aristotle hoped to do, telling us about their intrinsic natures.

Post-Galilean science does not tell us what is really real or really important. It has no metaphysical or moral implications. Instead, it enables us to do things that we had not previously been able to do. When it became empirical and experimental, it lost both its metaphysical pretensions and the ability to set new ends for human beings to strive for. It gained the ability to provide new means. Most scientists are content with this trade-off. But every so often a scientist like Pinker tries to have it both ways, and to suggest that science can provide empirical evidence to show that some ends are preferable to others.

Whereas physics-envy is a neurosis found among those whose disciplines are accused of being soft, philosophy-envy is found among those who pride themselves on the hardness of their disciplines. The latter think that their superior rigor qualifies them to take over the roles previously played by philosophers and other sorts of humanists – roles such as critic of culture, moral guide, guardian of rationality, and prophet of the new utopia. Humanists, such scientists argue, only have opinions, but scientists have knowledge. Why not, they ask us, stop your ears against culture-babble (which is all you are going to get from those frivolous postmodernists and irresponsible social constructionists) and get your self-image from the people who know what human beings really, truly, objectively, enduringly, transculturally are?

Those who succumb to such urgings are subjected to bait-and-switch tactics. They think they will learn whether to be more like Antigone than like Ismene, or more like Martha than like Mary, or more like Spinoza than like Baudelaire, or more like Lenin than like FDR, or more like Ivan Karamazov than like Alyosha. They want to know whether they should throw themselves into campaigns for world government, or against gay marriage, or for a global minimum wage, or against the inheritance tax. They hope for the sort of guidance that idealistic freshmen still think their teachers may be able to provide. When they take courses in cognitive science, however, this is not what they get. They get a better understanding of how their brains work, but no help in figuring out what sort of people to be or what causes to fight for.

This sense that they have been subjected to bait-and-switch tactics often also afflicts freshmen who sign up for philosophy courses because they have been turned on by Marx, Camus, Kierkegaard, Nietzsche, or Heidegger. They imagine that if they take a course in what are advertised as ‘the core areas of philosophy’ – metaphysics and epistemology – they will be better able to answer the questions these authors raised. But what they get in such courses is, typically, a discussion of the place of such things as knowledge, meaning, and value in a world made up of elementary particles. Many would-be students of philosophy are unable to see why they need have views on that topic – why they need a metaphysics.

It was because Ortega found such topics profitless that he wrote polemical essays like the one from which Pinker quotes (“History as a System,” in Ortega’s Toward a Philosophy of History). There he said:

all the naturalist studies on man’s body and soul put together have not been of the slightest use in throwing light on any of our most strictly human feelings, on what each individual calls his own life, that life which, intermingling with others, forms societies, that in their turn, persisting, make up human destiny. The prodigious achievement of natural science in the direction of the knowledge of things contrasts brutally with the collapse of this same natural science when faced with the strictly human element.

Ortega insisted that increasing knowledge of how things such as the human brain and the human genome work will never help us figure out how to envisage ourselves and what to do with ourselves. Pinker thinks that he was wrong. But only a few pages of The Blank Slate grapple directly with this issue. Among those that do, the most salient are the ones in which Pinker argues that scientific discoveries give us reason to adopt what he calls “The Tragic Vision” rather than “The Utopian Vision” of human life – to take a dim view of the capacity of human beings to change themselves into new and better sorts of people.

In order to show that our choice between these two visions should be made by reference to science rather than to history, Pinker has to claim, cryptically, that “parts of these visions” consist of “general claims about how the mind works.” But that is just what historicist philosophers like Ortega doubt. They think that the contest between these two visions will be unaffected even if the brain turns out to work in some weird way that contemporary science has not yet envisaged, or if new fossil evidence shows that the current story about the evolution of our species is all wrong. Debates about what to do with ourselves, they say, swing as free from disagreements about the nature of neurons or about where we came from as they do from controversies about the nature of quarks or about the timing of the big bang.1

The issue Pinker has with Ortega, and with most philosophers outside the so-called analytic tradition, has nothing to do with blank slates. It is about whether the conversations among humanists about alternative self-images and alternative ideals would be improved if the participants knew more about what is going on in biology and cognitive science. Pinker argues that men and women with moral and political concerns have always relied upon theories of human nature, and that empirically based theories are now available. But Ortega would reply that for the last few hundred years we have learned to substitute historical narrative and utopian speculation for such theories.

This historicist turn does, however, owe a great deal to one particular scientist: Darwin. Darwin helped us stop thinking of ourselves as an animal body in which something extra, and specifically human, has been inserted – a mysterious ingredient whose nature poses philosophical problems. His critics said that he had reduced us to the level of the beasts, but in fact he let us see imaginative daring as a causal force comparable to genetic mutation. He reinforced the historicism of Herder and Hegel by letting us see cultural evolution as on a par with biological evolution – as equally capable of creating something radically new and better. He helped poets like Tennyson and Whitman, and thinkers like Nietzsche, H. G. Wells, George Bernard Shaw, and John Dewey, to dream of utopias in which human beings had become as wonderfully different from us as we are from the Neanderthals. The dreams of socialists, feminists, and others have produced profound changes in Western social life, and may lead to vast changes in the life of the species as a whole. Nothing that natural science tells us should discourage us from dreaming further dreams.

ENDNOTES

1 For more on this point, see my “The Brain as Hardware, Culture as Software,” Inquiry 47 (3) (June 2004): 219–235.