June 14, 2008

N Pepperell Has Some Things To Say About The Emergence Of Modernity. [UPDATED]

Filed under: Blogroll, History, Politics, Science, Social Theory, Vitiated by Ignorance — duncan @ 11:22 am

Okay. I’ve been spending really quite a lot of time recently talking with N Pepperell (of the Rough Theory blog) about, you know, Marx and stuff. (Conclusion, at least on my end: ‘Capital’ = work of genius, but WTF with the Hegel already?) I’ve found it all just incredibly illuminating and enjoyable. But – I guess unsurprisingly – it turns out that only a fraction of NP’s ideas actually make their way onto Rough Theory. So I’m going to perform a dubious public service, by trying to summarise one of NP’s claims. I put up endless apologies and qualifications for almost everything I post here: the coin of the realm has been sadly debased. But let me especially stress: my attempted summary is going to make complete nonsense of NP’s ideas. My sneaky plan is to force NP to jump into the comments box below to correct me – and thereby elaborate this stuff in person. The provocation, then, is as follows…

At some historical point I’m more or less vague about [since, unlike NP, I’m not the sort of person who walks into copyright libraries and says ‘bring me everything you’ve got from the thirteenth century’ ;-)] something peculiar happened. You get the emergence of 1) the natural sciences; and 2) the social sciences. Now – the social sciences proper don’t turn up until, like, the nineteenth century. And we’re talking more like the seventeenth century here, I think. [This is completely embarrassing – I know nothing; nothing – but with courage and fortitude in the face of humiliation I persist…] NP’s claim is that you start getting the theorisation of society in a way that wouldn’t have made much sense to, say, the scholastic philosophers – a theorisation that would eventually become, thanks to further historical shifts I’m unclear about, the tradition on which the social sciences proper draw. (I guess we’re talking Hobbes, here, or something.) And at more or less the same historical moment (17th century ish, I think) you get the beginnings of an obsessive search for regularity in nature.

Question: Why?

Well, I guess the standard answer – the answer I imbibed when studying A-level history (I got top marks folks! O yes…) – is the rise of the Enlightenment; the decline of arguments from authority; the death of dogmatism; the emergence of empiricism. When I was studying philosophy at uni, this stuff tended to be keyed to Descartes. Scepticism! The refusal to accept aught but personal judgement! The speech of the senses, not the dogma of the schools! It is, of course, a world-historical-class irony that Descartes’ sceptical method has become a canonical authority. An irony, indeed, that it was even communicated, if we take its actual claims seriously. But this is by the by. (I’m deep into personal preoccupations here; this has nothing to do with NP’s argument…)

Enlightenment not authority, yes? Fine. But this has some flaws, explanatory-power-wise. Because, first off, why the Enlightenment? And second off, why the emergence of the theorisation of society at around the same time? There’s no very obvious reason why natural science and the theorisation of society should go together, historically. And yet – apparently – they do.

NP’s answer: It’s about capitalism. Or, rather, it’s about the development of social structures that would make the emergence of capitalism possible. Specifically (I think): urbanisation; the movement from forms of communal organisation that are more or less personal in nature (small communities more predominant than large ones) to forms of communal organisation that require substantial mediation through impersonal structures if they are to function. Markets, I guess, in part – though NP more or less comes out in hives if you start reducing capitalism to markets. Plus more complicated things I’m in no position to gloss – stuff, I think, about the genealogy of the transformation of the concept of ‘value’ that NP’s been discussing in relation to Marx.

So – you get a reconfiguration of society. And this relates to the emergence of the category of the social. And this happens in a complex and interesting way. We’re getting to the actual content of NP’s claim now – which I’m more than a little nervous about fucking up. (It’s just inevitable.) But with the move to new and much more substantial forms of social mediation, you get a new form of sociality, which one could call (if one were in the mood) impersonal sociality. NP has developed this idea in great detail in relation to Marx. (There NP calls it ‘real abstraction’). The point is that this is a form of sociality that can be decisively distinguished from any form of intersubjectivity. It is a form of sociality that need not be conscious; need not be meant. Now in a sense all forms of sociality possess this property, in spades. Any kind of interpersonal relation has countless features that are not present to the wakeful consciousness of the persons interrelating. (Freudian & Derridean that I am, I tend to think that such features of interpersonal relations are totally predominant; but let me stress again that I’m largely wittering on my own account here, not glossing NP). Nonetheless, with the emergence of large-scale, highly complex, highly mediated forms of social organisation, this attribute of sociality takes on an unprecedented power and prominence.

NP’s claim is that this new form of sociality is not theorised as sociality; not at the time, or for a long time after. On the contrary, this new form of sociality is theorised as natural. What is theorised as sociality is the intersubjectivity that suddenly becomes more accessible as a theoretical category because of its social differentiation from the ‘impersonally’ social. The new dominance of the impersonal social divides the social against itself. The social becomes: 1) the intersubjective (theorised as the new category of the social) and 2) the impersonally social (theorised as the natural).

And this social change is what produces the new categories of both the ‘social’ and the (law-like) ‘natural’. Intersubjectivity becomes available as an object of enquiry as never before – it becomes ‘relativised’ as social when it suddenly breaks away from a newly emergent other form of sociality. And at the same time, it becomes plausible to treat the ‘natural’ as organised on law-like principles, because the ‘impersonally’ social is being treated in this way. One could say that the impersonally social is naturalised and then projected onto the natural world (just as the political economists ‘naturalise’ the laws of political economy). But the claim isn’t that scientific endeavour is based on some misunderstanding or projection. The claim is just that people become familiar with the idea of treating a non-intersubjective, non-intentional ‘law’ as impacting their lives – because such ‘laws’ are produced by the new enacted mediations of the impersonal social realm. So it becomes intuitive to investigate nature itself for ‘natural’ laws… with all sorts of interesting results.

(There’s some connection, I guess, then, between what NP’s trying to do and the ‘strong program’ in sociology. The point is that even if we like some contingent historical project, we can’t use that as an explanation for its historical emergence. Regularities in nature themselves can’t provide an adequate explanation for the sudden desire to look for regularities in nature. Similarly, the real existence of ‘society’ can’t explain the emergence of this concept of society – a concept we can then reinscribe in our articulation of the concept’s emergence. When NP talks about ‘reflexivity’, the point is that we have to also give an account of the historical changes which produce the concepts we use to analyse those historical changes.)

Anyway – all this is no doubt a travesty of whatever NP actually thinks. So: let me end by quoting (as I like to) Wittgenstein – busy justifying the (as it turns out posthumous) publication of the ‘Philosophical Investigations’…

“Up to a short time ago I had really given up the idea of publishing my work in my lifetime. It used, indeed, to be revived from time to time: mainly because I was obliged to learn that my results (which I had communicated in lectures, typescripts and discussions), variously misunderstood, more or less mangled or watered down, were in circulation. This stung my vanity and I had difficulty in quieting it.”

I’m not planning to sting any vanity here. 🙂 But I hope these results, more or less mangled or watered down (and communicated in discussion) have some sort of provocative force. What’s the real deal, as regards this stuff, I wonder?

[So as I say in the comments below (and as I predicted in the post…) plenty of this misrepresents NP wildly. A few quick (attempted) corrections, then:

1) Not theorisation of society/nature. Rather, experience of society/nature.
2) Not just emphasis on natural law, but also an organicist vision of nature associated with romanticism.
3) A whole host of problems involving the characterisation of the ‘impersonally social’. Basically: the sort of things implied by the phrase ‘impersonally social’ (e.g. markets) are part of the intersubjectively social. The real ‘impersonally social’ (asocial social?) can’t be identified with institutions, but rather operates through them.
4) Strike the use of the phrase ‘real abstraction’ – which is relevant, but not like that.

Any better? Hum. Well I’m going to bed, anyway…]

April 17, 2008

Oh For Fuck’s Sake (Nassim Nicholas Taleb’s ‘Fooled By Randomness’)

Filed under: Philosophy, Sarcasm, Science, Self indulgence — duncan @ 9:44 pm

[A ranting post very much not worth your time, I’m afraid.]

I’m proud to say that I’ve been writing a deconstructionist-inclined blog for almost a year now, and have never once engaged in a bitter assault on the popular detractors of continental theory. You’ll notice that no interminable post excoriating Sokal and Bricmont has yet appeared. I am a saint.

On the other hand, I’ve just started reading Nassim Nicholas Taleb’s bestseller ‘Fooled By Randomness’. (In a no doubt misguided attempt to dip my toe into the glibber end of popular accounts of probability theory; I guess I should buckle down and read some real books.) As soon as I figured out the tone, I guessed that an ignorant and philistine invocation of Derrida as charlatan wouldn’t be far away. And, sure enough, on page seven (so soon!) we get an approving reference to a Ph.D. thesis in philosophy. “But not the Derrida continental style of incomprehensible philosophy (that is, incomprehensible to anyone outside of their ranks, like myself).” I gritted my teeth and continued. (Still no ranting blog post! I have the patience of Job!) But then on pages 72-3 we get this – and I boot up my computer.

“Increasingly, a distinction is being made between the scientific intellectual and the literary intellectual – culminating with what is called the ‘science wars’, plotting factions of literate nonscientists against literate scientists. The distinction between the two approaches originated in Vienna in the 1930s, with a collection of physicists who decided that the large gains in science were becoming significant enough to make claims on the field known to belong to the humanities… The Vienna Circle was at the origin of the development of the ideas of Popper, Wittgenstein (in his later phase), Carnap, and flocks of others.”

This is like shooting fish in a barrel. I mean – don’t you think the distinction between the scientific intellectual and the literary intellectual might have had some force before 1930s Vienna? Can the Vienna Circle be entirely accurately described as “a collection of physicists”? Does the later phase of Wittgenstein really originate there (even as a reaction against it)? Be all that as it may; next we get this:

“I suggest reading the hilarious Fashionable Nonsense by Alan Sokal [it’s just inevitable, this reference; the pages might as well be blank; we can fill them in ourselves]… (I was laughing so loudly and so frequently while reading it on a plane that other passengers kept whispering things about me) [Probably ‘what an arsehole’]… Science is method and rigour; it can be identified in the simplest of prose writing. For instance, what struck me while reading Richard Dawkins’ Selfish Gene is that, although the text does not exhibit a single equation, it seems as if it were translated from the language of mathematics.”

Superficial detractors of continental theory often invoke Dawkins as the exemplar of scientific rationality. Don’t get me started on him. (In a word, ‘The Selfish Gene’ is precisely not translated from the language of mathematics, because half the point of the thing is to develop a metaphor – a metaphor, of the ‘selfishness’ of the gene, which may or may not be helpful (and there’s a whole endless debate to be had about the validity of ascribing intentional states to apparently mindless objects, or to parts of/systems within organisms), but that only works as metaphor. Which isn’t to say that Dawkins doesn’t have strictly ‘scientific’ claims to make – but Dawkins himself is perfectly clear (in, for instance, the first chapter of ‘The Extended Phenotype‘) that ‘a change of aspect’, rather than a scientific hypothesis, is the main thing he hopes to advance in his popular science writing.) Anyway.

“[T]here is another, far more entertaining way to make the distinction between the babbler and the thinker. You can sometimes replicate something that can be mistaken for a literary discourse with a Monte Carlo generator but it is not possible randomly to construct a scientific one. Rhetoric can be constructed randomly, but not genuine scientific knowledge.”

If I understand him right, Taleb means, by “Monte Carlo generator”, a computer program that is capable of churning out vast numbers of imaginary events, according to a set of predetermined rules. I can’t pretend to understand [which is why I’m reading this stuff, after all] – with my knowledge of computers, it’s amazing this blog is still in one piece. But (in a fairly superficial way) what Taleb’s saying here is surely wrong. A ‘Monte Carlo generator’ can construct scientific knowledge – as Taleb has already told us.

“It is a fact that ‘true’ mathematicians do not like Monte Carlo methods. They believe that they rob us of the finesse and elegance of mathematics. They call it ‘brute force’. For we can replace a large portion of mathematical knowledge with a Monte Carlo simulator (and other computational tricks). For instance, someone with no formal knowledge of geometry can compute the mysterious, almost mystical, Pi.” (p. 47) If existing mathematical knowledge can be replicated in this way, I find it hard to believe that new mathematical – or scientific – knowledge can’t also be so produced. [Okay, I just did my googling. Wikipedia informs me that in mathematics “[t]he method is useful for obtaining numerical solutions to problems which are too complicated to solve analytically.” I need to learn about this sort of thing.] At any rate, the ability of ‘Monte Carlo generators’ to supply Taleb with knowledge and understanding seems to be the main reason he likes them so much.
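For what it’s worth, the ‘computational trick’ Taleb is gesturing at is easy to sketch: throw random points at the unit square and count the fraction that land inside the quarter circle, which tends to π/4. (My own sketch, in Python – nothing like it appears in the book.)

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square
    and counting the fraction that fall inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    # fraction inside ~ (area of quarter circle) / (area of square) = pi/4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14; the error shrinks like 1/sqrt(n)
```

No geometry required of the user, as Taleb says – though of course the π/4 ratio is geometry, quietly built into the method.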

Anyway. Next we get this:

“This is the application of Turing’s Test of artificial intelligence, except in reverse. What is the Turing test? [We get a description. Taleb continues:] The converse should be true. A human can be said to be unintelligent if we can replicate his speech by a computer, which we know is unintelligent, and fool a human into believing it was written by a human. Can one produce a piece of work that can be largely mistaken for Derrida entirely randomly?”

Well – let’s charitably put down to ‘humorous’ license Taleb’s ‘reversal’ of the Turing test. And let’s ignore the fact that the so-called ‘random’ production of any text is random only within incredibly limited bounds – most of the game’s effectiveness depends on the non-randomly selected phrases and rules for the combination of phrases that whatever program Taleb’s describing would consist in. (Just as Taleb’s method of ‘randomly’ computing the value of Pi isn’t random at all except in one of the program’s particular functions.) All that said – the answer to Taleb’s last question is: obviously yes. Of course you can ‘randomly’ produce a piece of text that can be mistaken for Derrida – by people who know fuck all about Derrida. In fact, I’d go further – if the program that produces phrases is sufficiently intelligently set up, I daresay I could be fooled by – or at least not confident in my judgement of the provenance of – some phrase or short sequence of phrases. At some point that would collapse – you’re not going to be able to generate an intelligible essay, or even a longish piece of text, using a ‘random’ method. (And if you can, maybe you should apply for that Turing Test prize money.) But I have no idea what Taleb thinks he’s demonstrating here.

“[T]here are Monte Carlo generators designed to structure such texts and write entire papers. Fed with ‘postmodernist’ texts, they can randomize phrases under a method called recursive grammar, and produce grammatically sound but entirely meaningless sentences that sound like Jacques Derrida, Camille Paglia, and such a crowd. Owing to the fuzziness of his thought, the literary intellectual can be fooled by randomness.”

What bullshit. What copper-plated, cast-iron, dug from a farmer’s prize bull’s ditch of prize bullshit bullshit. According to ‘Fortune’ magazine (I know, I shouldn’t expect much, why did I even buy the fucking thing?) ‘Fooled by Randomness’ is “One of the smartest books of all time.” Well, not so much. Not if it has stuff that even vaguely resembles this in it. Good lord. Why do people take this sort of thing seriously? What’s going on?
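Having now done a little more googling: the generators Taleb means (the ‘Postmodernism Generator’ and its kin) work roughly like this – a hand-written grammar, a random choice among expansions, recursion until only words remain. A toy sketch (grammar and vocabulary entirely my own invention):

```python
import random

# Toy recursive grammar: uppercase symbols expand to a randomly chosen
# alternative; anything not in the grammar is a terminal word, emitted as-is.
GRAMMAR = {
    "S": ["the NOUN VERB the NOUN", "NOUN is always already NOUN"],
    "NOUN": ["signifier", "discourse", "text", "margin"],
    "VERB": ["destabilises", "inscribes", "effaces"],
}

def generate(symbol, rng):
    if symbol not in GRAMMAR:
        return symbol  # terminal word
    expansion = rng.choice(GRAMMAR[symbol])
    return " ".join(generate(token, rng) for token in expansion.split())

print(generate("S", random.Random(42)))
```

Note that everything doing any work here – the sentence templates, the vocabulary – was chosen by hand in advance; the ‘randomness’ only shuffles among pre-fabricated fluency. Which is rather my point above.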

I was planning to write more, but I think I’ve reached a pitch of intemperance that requires a hasty close. Don’t buy ‘Fooled by Randomness’. I’ve got it here now, and I’m wondering whether to try to finish it or burn it. I guess I should toss a coin.

[Apologies for this nonsense post. Unusually, I have too much time on my hands today.]

November 24, 2007

Greg Mankiw on Social Mobility

Filed under: Economics, Politics, Science — duncan @ 8:12 pm

Greg Mankiw is one of the most widely read economics bloggers. He teaches at Harvard, and was chairman of Bush’s Council of Economic Advisers from 2003-2005. I’ve recently been looking at some of his old posts.

On April 27 2006 Mankiw discussed “intergenerational transmission of inequality.” In a talk to the Center for American Progress, the economist Tom Hertz said: “the chances of [an American] getting rich are about 20 times higher if you are born rich than if you are born in a low-income family.” Mankiw expresses annoyance with Hertz’s liberal “spin”.

Mankiw: “One might ask why being born into a high-income family means you will likely have higher income. Is it the good genes that you inherited from your successful parents or the nice neighbourhood and expensive private schools that their high income could purchase for you? Is it nature or nurture?

The evidence suggests that nature trumps nurture.”

Mankiw refers to a study about adoption by the economist Bruce Sacerdote. [For some reason I can’t get the link to work – but you can download the study from Mankiw’s site.] Sacerdote studied 1117 families who adopted children through Holt International Children’s Services from 1970 to 1980. (The data was collected in 2003). He attempted to calculate “the transmission of income, education and health characteristics from adoptive parents to adoptees. I then compare these coefficients of transmission to the analogous coefficients for biological children in the same families, and to children raised by their biological parents in other data sets.”

Here’s the passage Mankiw quotes:

“Having a college educated mother increases an adoptee’s probability of graduating from college by 7 percentage points, but raises a biological child’s probability of graduating from college by 26 percentage points. In contrast, transmission of drinking and smoking behaviour from parents to children is as strong for adoptees as for non-adoptees. For height, obesity, and income, transmission coefficients are significantly higher for non-adoptees than for adoptees.”

As Sacerdote puts it, parents’ education has an “economically meaningful” effect on adoptees’ education. “Each additional year of mother’s educational attainment raises the adoptee’s educational attainment by .07 years. But the effects for adoptees are modest when compared with the corresponding effects for non-adoptees… [F]or educational outcomes, the level effects of parental education are quite important, but only about one quarter of the story.”

So ‘nature’ and ‘nurture’ both play a role in educational attainment – and ‘nature’ seems to play a larger role than one might expect. But Mankiw’s post focuses on income, and here Sacerdote’s conclusions are counter-intuitive. “The adoptee’s income appears to have almost no relationship to parental income.”

Mankiw: “Sacerdote suggests that income is like height. Having a tall father means you are likely to be tall, but it is because he has given you the tall gene, not because he has created an environment that fosters height. The same appears to be true of income.”


I hardly know where to start. Perhaps it’s worth mentioning that human growth is massively influenced by environmental factors. As Wikipedia puts it: “Genetics is a major factor in determining the height of individuals, though it is far less influential in regard to populations. Average height is increasingly used as a measure of the health and wellness (standard of living and quality of life) of populations. Attributed as a significant reason for the trend of increasing height in parts of Europe is the egalitarian populations where proper medical care and adequate nutrition are relatively equally distributed… Genetic potential plus nutrition minus stressors is a basic formula.”

So income is indeed a lot like height, in the sense that there’s a causal relationship between them. Living in “an environment that fosters height” – i.e. having a high standard of living – is going to influence how tall you are. Still, Mankiw and Sacerdote are only talking about U.S. families, so Mankiw’s shorthand is, perhaps, acceptable.

Far more dubious is Mankiw’s interpretation of Sacerdote’s paper. In the first place, Sacerdote does not discuss the range or distribution of the adopting parents’ incomes. Although Hertz makes strong claims about transmission of income levels from parents to children (“our parents’ income is highly predictive of our incomes as adults”), he also argues that this relationship is far stronger at the extremes. “Children born to the middle quintile of parental family income ($42,000 to $54,000) had about the same chance of ending up in a lower quintile than their parents (39.5 percent) as they did of moving to a higher quintile (36.5 percent).” The problem, put simply, is that the poor stay poor, while the rich stay rich. Within the broad range of middle-income families, the relationship between parent and child income is not strong.

We don’t know details of the incomes of the families in Sacerdote’s study. But we do know that Holt’s background check requires adopting families to have a “minimum income.” Sacerdote doesn’t say what this income threshold is – but it must surely exclude the low-income extreme at which, according to Hertz, the intergenerational transmission of income is strongest. In a word, adoption agencies have a responsibility to place children with families who can support them. Sacerdote’s study therefore cannot adequately incorporate many of the most important factors by which income inequality is perpetuated. (Not just factors directly related to income – Holt also runs a criminal record check, for instance, and requires parents to have been married for at least three years.) This is not “a data set in which adopted children were, literally, assigned randomly” (as Mankiw writes), because a number of strict criteria must be met before the random allocation of children to families can begin.

But even when this is taken into account, one would expect to find a relationship between the incomes of parent and adopted child. The apparent lack of such a relationship is particularly striking given the reported link between different generations’ educational attainment. If there really is no intergenerational transmission of income to adopted children, one would need an explanation for why the relationship you’d expect between income and education has here broken down.

As it happens, Sacerdote offers two such explanations. This result “could be driven by the restriction of range among Holt families, or by higher measurement error in my survey.” Higher measurement error, that is, than in similar surveys which do find a positive link between the incomes of parent and adopted child. In his Table 3a, Sacerdote compares “Transmission in Holt Sample Versus Transmission in Other Samples.”

Holt Adoptees:

Transmission of Years of Education: 0.069
Transmission of Income: -0.087

Swedish Adoptees:

Transmission of Years of Education: 0.144
Transmission of Income: 0.154

NLSY Adoptees:

Transmission of Years of Education: .277
Transmission of Income: .112
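
A quick gloss on what these numbers are: a ‘transmission coefficient’ is just the slope from regressing the child’s outcome on the parent’s – each extra parental year of education ‘transmits’ that many extra years to the child, on average. A sketch with invented numbers, not Sacerdote’s data:

```python
import numpy as np

# Invented illustrative data: parents' and children's years of education.
parent_edu = np.array([10, 12, 12, 14, 16, 16, 18, 20], dtype=float)
child_edu = np.array([12, 12, 14, 14, 16, 15, 17, 18], dtype=float)

# The transmission coefficient is the OLS slope of child on parent:
# here, each extra parental year is associated with ~0.62 extra child years.
slope, intercept = np.polyfit(parent_edu, child_edu, 1)
print(round(slope, 3))
```

On this reading, Sacerdote’s negative income coefficient for Holt adoptees (-0.087) would mean richer adoptive parents produce very slightly *poorer* children – which is exactly the sort of result that makes “measurement error” a live explanation.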

There are, of course, many possible reasons for the disparities between these figures. But it’s striking that Mankiw picks an apparently anomalous result as the basis for his post.

Sacerdote himself seems to favour “measurement error” over “restriction of range” as the explanation for his counter-intuitive income transmission result. “The lower income transmission in my sample is quite possibly driven by higher measurement error in my income survey question.” “Loss of parental income is not statistically significant in predicting child’s years of education, which may be a statement about the measurement error in my parental income variable.” But Mankiw doesn’t mention this possibility. For Mankiw, “Sacerdote suggests that income is like height.”

In other words, Mankiw is misrepresenting the paper he quotes. He is picking a result that conflicts with previous studies of intergenerational social mobility, and with common sense, in order to advance his political views. These views are, of course, that the poor are to blame for their poverty, while the rich deserve their wealth. Don’t talk about social justice: entrenched inequality has nothing to do with society.

Generally the right talks about weakness of character here. (See, for instance, Mankiw’s New York Times piece on American health-care. One reason for the failures of the American health care system is “the sexual mores of American youth”.) In this post Mankiw has an alternative explanation. It’s all in the genes.

[NB:  It turns out that Sacerdote’s original study prompted an absolute deluge of internet comment, which I’ve  just belatedly discovered.  If I get time I’ll try to see if any of it blows apart what I say above.]  [I never did get round to it.  If you spot any grievous flaws in the above, please leave a comment.]

August 11, 2007

The Immortality of Richard Dawkins

Filed under: Economics, Philosophy, Science — duncan @ 5:12 pm

There is a powerful tradition in Western philosophy that attempts to argue for the existence of an unchanging incorporeal world by starting from the structure of human consciousness.  For the transcendental thinkers the soul must be immortal, because the soul’s synthesising power constitutes the empirical world, and so must precede it.  But the idea isn’t limited to transcendental thought.  Perhaps the canonical exposition is Plato’s Symposium, where the priestess Diotima interrogates Socrates on the question of the Good.

Diotima:  Then may we state categorically that men are lovers of the good?
Socrates:  Yes, I said, we may.
Diotima:  And shouldn’t we add that they long for the good to be their own?
Socrates:  We should.
Diotima:  And not merely to be their own but to be their own forever?
Socrates:  Yes, that must follow.
Diotima:  In short, that Eros longs for the good to be his own forever?
Socrates:  Yes, I said, that’s absolutely true.

A number of ideas are here bound tightly together.  That which the soul desires, the soul desires forever, and all for itself.  If one desired something transient, then one could never possess it absolutely, for even one’s current possession of it would be shadowed by the inevitability of loss.  According to Plato’s / Diotima’s logic, desire – intentionality (and thus consciousness itself) – must aim at this absolute and permanent possession.  Anything less would be a rending of the self – and since a self that is constantly torn to pieces would be no self at all, the possibility of permanent possession of the good is a necessary condition for any consciousness at all.  If we are to think at all, we must be able to think the eternal.

For deconstruction, what is at stake in our reading of the philosophical canon is an attempt to find an alternative to this logic.  The tradition says: eternity is a condition of consciousness.  Deconstruction replies: universal mortality is a condition of this thought of eternity.

It’s impossible to really expand on this here – if you’re interested, I commend Henry Staten’s Eros in Mourning, the first chapter of which I’m paraphrasing.  But I want to say a few words about the way in which this argument informs the work of Richard Dawkins.  Dawkins – one of the most vocal atheists in the world – invokes at critical moments this same canonical logic of necessary immortality.  Even as he inveighs against the ‘delusions’ of those who cling to the idea of a world beyond the empirical, he is guided in his characterisation of the empirical world by a desire to locate a similarly ungainsayable foundation of consciousness.

Earlier this year I read ‘The Extended Phenotype’.  Two themes: the ‘selfishness’ of the gene; the ‘immortality’ of the gene.  Here’s Dawkins in chapter five.  “The whole purpose of our search for a ‘unit of selection’ is to discover a suitable actor to play the leading role in our metaphors of purpose.  We look at an adaptation and want to say, ‘It is for the good of…’.  Our quest in this chapter is for the right way to complete that sentence.  … I am suggesting here that, since we must speak of adaptations as being for the good of something, the correct something is the active, germ-line replicator” (p. 91).  Dawkins’s idea of an “active, germ-line replicator” is pretty empty; given his woolly definitions (p. 83), the last quoted sentence is almost tautologous.  But that’s not the point.  The thrust of his idea is obvious: the possibility of an indefinitely long line of copies.  Since a gene, which is his central exemplar of an ‘active germ-line replicator’, is replicated (Dawkins argues) in all its essential properties, a gene can in some sense persist forever.  “Whether it succeeds in practice or not, any germ-line replicator is potentially immortal.  It ‘aspires’ to immortality but in practice is in danger of failing.” (p. 83)

A gene aspires to immortality.  This idea is at the heart of Dawkins’s form of evolutionary explanation.  We animals and humans live and die – any given death is contingent, but the fact of death is not.  We are mortal.  Genes, however, are not mortal.  They are, in principle, immortal; and if any gene happens to ‘die’, then this fate is itself contingent.

What interests me is the force with which Dawkins presents this idea as having explanatory value.  Dawkins believes that his perspective of the ‘selfish gene’ will provide explanatory insights that a rival perspective – of, say, the altruistic individual – will not.  I’m sure he’s right.  Yet Dawkins has remarkable confidence in the value of his perspective – a confidence that leads him to make such notorious statements as “We no longer have to resort to superstition when faced with the deep problems: Is there a meaning to life? What are we for? What is man? After posing the last of these questions, the eminent zoologist G. G. Simpson put it thus: ‘The point I want to make now is that all attempts to answer that question before 1859 are worthless and that we will be better off if we ignore them completely.’”  It seems to me that this confidence is based on the belief that a selfish aspiration to immortality fundamentally makes more sense as a motive force than, say, altruistic self-sacrifice.  This is why the explanatory chain runs from the ‘irrational’ behaviour of individuals to the ‘motives’ of the selfish genes.

Don’t get me wrong: I’m not objecting to evolutionary explanation; I’m not even objecting to Dawkins’s form of evolutionary explanation.  But I’m disturbed by the bludgeoning force of Dawkins’s advocacy.  I don’t think this is just a product of his pugnacious personality.  Dawkins’s movement from altruistic mortal intentions to the gene’s selfish aspiration to immortality puts him in the mainstream of the Western philosophical and theological tradition.  Like Plato’s Diotima, Dawkins says that we cannot understand desire unless we can reduce it to the desire for eternity.

What’s this got to do with economics?  Well, a few posts ago I was discussing Arrow’s impossibility theorem.  I was saying (yet again) that the idea of a rationally self-interested individual is ridiculous.  We are all composed of conflicting drives, and therefore Arrow’s argument about the impossibility of fully rational collective choice applies equally well to individuals.  I said that we don’t need Freud to tell us this: evolutionary biology will do the job just as well (and with sounder scientific credentials).
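The mechanism behind that claim is easy to see concretely.  Here’s a toy sketch (my own illustration, not anything from Arrow or Dawkins): treat a person as three internal ‘drives’, each with its own perfectly consistent ranking of three options, and let the drives ‘vote’ pairwise.  The drive names and rankings are invented for the example.

```python
# Three hypothetical internal 'drives', each ranking options best-to-worst.
drives = {
    "hunger":   ["eat", "sleep", "work"],
    "ambition": ["work", "eat", "sleep"],
    "fatigue":  ["sleep", "work", "eat"],
}

def majority_prefers(a, b):
    """True if a majority of drives rank option a above option b."""
    votes = sum(ranking.index(a) < ranking.index(b) for ranking in drives.values())
    return votes > len(drives) / 2

for a, b in [("eat", "sleep"), ("sleep", "work"), ("work", "eat")]:
    print(f"majority prefers {a} over {b}: {majority_prefers(a, b)}")
# Each contest is won 2-1, so the 'collective' preference cycles:
# eat > sleep > work > eat -- even though every drive is individually rational.
```

Each drive is transitive, yet majority voting among them produces a Condorcet cycle: no coherent overall ranking exists, which is exactly the sort of result Arrow’s theorem generalises – and it applies whether the ‘voters’ are citizens or competing impulses inside one head.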

But I want to distinguish two different levels of criticism here.  One level is the obvious foolishness of positing ‘rational self-interested individuals’ as economic actors.  But there is also the question of what constitutes ‘rational self-interest’ itself.  If we wished to criticise economics from the perspective of evolutionary psychology, we could replace classical economic explanations of human behaviour with accounts that emphasise the motives of the ‘selfish gene’.  (And I’m sure there are people on the case.)  But even if we did this (and it’s a worthwhile thing to do) we would remain within a certain form of metaphysical explanation: one that gives priority to self-sufficient, self-interested atomic actors – and that ties the possibility of such actors to the possibility of rationality.  This is a philosophical inclination that cannot be empirically refuted or affirmed.  But, to my mind, it’s suspect.

When thinkers throughout history have examined consciousness, they have, often, reduced it to an eternal, unchanging, self-sustaining core.  One of Darwin’s achievements was – apparently – to do away with that: Darwinism was meant to be a Copernican revolution that undermined anthropic narcissism.  The theory of natural selection told us that our individual essence was contingent; by destroying the classical opposition between human and animal, it undermined the idea of an immortal human soul.  But the logic that underlies the desire to believe in an immortal soul cannot be destroyed by science: if human consciousness has been denied its special status, then a new consciousness can be invented – the consciousness of the gene – and all the old properties of the immortal soul can be safely ascribed to it.  In the name, ironically, of Darwinism. 

I obviously don’t want to overemphasise the similarities between Plato, Dawkins and rational choice theory.  God knows there are differences enough.  But there are also, I think, suggestive parallels.  A challenge to economics’ idea of rationality should try to take account not only of the obvious empirical problems with economists’ theories, but also of the philosophical ideas behind them.  Kenneth Arrow, student of Socrates, stands in the market place and describes its workings.  The Goddess Diotima, with her invisible hand, guides his thoughts.