
Wednesday, July 15, 2009

Neurology vs. Psychology

Wired has an article exploring the seemingly ill-formed question whether Body Integrity Identity Disorder is psychological or neurological in origin:
The disorder is the subject of a debate between psychiatrists and neuroscientists about whether the brain physiology causes the psychiatric condition or whether the causality runs in the other direction...

Columbia University psychiatrist Michael First helped pioneer the identification of the disorder and his latest research suggests it’s just a subset of a larger psychiatric condition in which people become fixated on being disabled. On the other hand, Paul McGeoch’s recent work... seems to explain the disorder as a purely neurological disease resulting from a malfunctioning right parietal lobule, which appears to maintain the mind’s body map. His lab used fMRI to determine that four self-reported BIID patients’ right parietal lobules didn’t light up when their unwanted limbs were touched. Normal people’s did.

"Oh, this is certainly a breakthrough. We were stunned by the results," David Brang, a graduate student who co-authored a paper on the study with McGeoch, said recently on the Australian television show on which Vickers told his amazing story. "It’s very clear that this is a neurological phenomenon when it always been thought of as a psychological issue."

I wonder what he means? Are psychological and neurological explanations supposed to be competing, mutually exclusive explanations? Psychological events are grounded in brain events, after all, so why don't these fMRI studies simply indicate how the psychological disorder is realized in the brain?
[Dr. First] is, however, not yet convinced that a deficit in the right parietal lobe causes BIID. It's also possible that a strong desire to amputate a limb could transform neural circuitry in a brain region responsible for body image, he says. "There's a chicken-and-egg problem here."

Or is it more of a chicken and atoms-arranged-chickenwise (or "forest and trees") problem? Perhaps we could pin down a point of substantive disagreement if we focused on a single 'level' of explanation, say the neural level, throughout. Perhaps the point is that the neurologists are claiming that the disorder has a simple neural manifestation, whereas the psychiatrists think that the neural manifestation will be much more complex (effectively claiming that deficits in the "brain region responsible for body image" were caused by prior neural events that are best integrated and understood if we 'zoom out' to the level of psychology).

Simply put: if your mind is not how it ought to be, then neither is your brain, since the one gives rise to the other. So every psychological disorder is, in some broad sense, also a neurological disorder. But we can still draw important distinctions here. In particular, a disorder may be apparent as such at the level of the brain, i.e. in a way that's recognizable when looking at it "as" a brain, using purely neurological vocabulary. Or the problem may instead reside in more complicated neural patterns that are better captured using psychological vocabulary. There's a real question which of these two levels of explanation better captures and unifies the relevant phenomena.

So we can understand 'psychology vs. neurology' debates substantively if they concern this question of what level of abstraction unifies the disorder. Are the causes of BIID alike in respect of their superficial neurological form, or must we pull back to the level of psychology before their commonality comes into view? In effect, we may then call 'neurological' the problems that have a relatively simple neural manifestation, and reserve the competing term 'psychological' for disorders that are more unified at the higher level of abstraction offered by psychology.

Does that sound right?

Tuesday, July 07, 2009

Wanting to Improve (but not artificially)

Robin Hanson recently wrote what strikes me as a rather misleading post, claiming that according to this paper, "the more people considered a feature to be a key part of their identity, the less they wanted to improve it." Various commenters on his blog offer cynical explanations for why this might be so, and other bloggers have since linked to Hanson's post, repeating the claim that "few people want to improve their empathy." Unfortunately (or perhaps fortunately), the scientific paper in question does not support this claim at all. Read the abstract:
Four studies examined young healthy individuals' willingness to take drugs intended to enhance various social, emotional, and cognitive abilities. We found that people were much more reluctant to enhance traits believed to be highly fundamental to the self (e.g., social comfort) than traits considered less fundamental (e.g., concentration ability)... Ad taglines that framed enhancements as enabling rather than enhancing the fundamental self increased people's interest in a fundamental enhancement, and eliminated the preference for non-fundamental over fundamental enhancements.

Now, while transhumanists may not think there's any normatively significant difference between 'artificial' enhancement and 'natural' improvement (through better nutrition, training, etc.), it must be acknowledged that the vast majority of people do see things differently here. So the mere fact that they aren't willing to take drugs to artificially enhance their empathy is not at all the same thing as not wanting to improve their empathy.

I don't see anything here to suggest that people wouldn't be willing to improve their empathy by (what they consider to be) more 'natural' means. (The paper even explicitly notes that people are happy to improve their empathy so long as this is framed as "enabling" their true self to shine through, rather than externally imposing a new personality on them.) Am I missing something, or are some people just way too keen to be cynical?

Update: Note that according to Table 3 (at the end of the paper), only 25% of subjects reported that they "do not even wish to be better on this trait."

Monday, April 28, 2008

Culture is Biological

Ed Morrisey writes:
One of the stranger aspects of Jeremiah Wright’s speech came in the supposed neurological explanation of the differences between whites and blacks. Wright claims that the very structure of the brains of Africans differ from that of European-descent brains, which creates differences rooted in physiology and not culture...

Hilzoy responds by pointing out that Wright said no such thing. But this seems to me to miss the more fundamental error (which Hilzoy actually repeats in her post) of thinking that cultural and biological explanations are somehow alternative, competing, mutually exclusive explanations of behaviour.

Clearly, that's just plain mistaken. If culture influences our thoughts and behaviour, it must therefore affect our brains (that's where the thinking occurs, after all). All behavioural and psychological differences have a neuro-/biological explanation. The only question is whether this biological difference is in turn best explained by environmental or genetic differences.

Even this latter question of 'nature or nurture' is often confused. As I explain in my essay 'Native Empiricists', both inevitably play a role: we are equipped with innate capabilities to learn effectively from experience, and - on the other hand - one need only deprive a plant of sunlight to see how a nourishing environment is essential for the expression of genetic potentials (e.g. height).

But, if we are careful, we can find coherent questions in this vicinity. For example, faced with a difference between A and B, we might wonder whether a genetic clone of A raised in B's environment would have ended up in a condition more like A's or B's in the relevant respects. Unfortunately, people are rarely so careful.

Wednesday, April 02, 2008

Overcoming Scientism

I take 'Scientism' to be the view that empirical inquiry is the only form of rational inquiry, perhaps coupled with the even stronger claim that only "scientific"/testable claims are meaningful, or candidates for truth or falsity. In other words, it is to dismiss the entire field of philosophy (and arguably logic and mathematics too, though this is less often acknowledged). Indeed, a primary symptom of scientism is that sufferers are incapable of distinguishing philosophical arguments from religious assertions. They claim not to comprehend any non-empirical claim; it is mere 'gobbledygook' to them.

It's worth noting right away that Scientism is self-defeating, for it is not itself an empirically verifiable thesis. Insofar as its proponents have any reasons at all for advancing the view, they are engaging in (bad) philosophical, not scientific, reasoning. This is the familiar point that one cannot assess philosophy (even negatively) without thereby engaging in philosophy oneself. [For a more positive argument in support of the a priori, see my post on conditionalizing out all empirical evidence.]

This bias against philosophy is unfortunate for the obvious reason that there are a lot of interesting and important philosophical truths, which the scientismist would never think to look for. (My original 'scientism' post quoted some ignorant dismissals of Nick Bostrom's very interesting 'Simulation argument'. Not that I think his conclusion is true; but it is eye-opening just to consider.) Moreover, as I once wrote:
All your “common sense” beliefs rest on philosophical assumptions. Most people prefer not to examine them, but that doesn’t mean they aren’t there. It just means that everything you think and do could be completely misguided and you wouldn’t even realize it.

The scientismist will no doubt have many false philosophical beliefs in addition to their scientism. (We all do.) But if they are unaware of the rational tools that allow us to identify and correct such errors, then they will be stuck with them -- not a situation that any dedicated truth-seeker would consider desirable.

I think it's especially unfortunate that most folk seem unaware that reasoned inquiry into normative questions -- e.g. ethics and political philosophy -- is possible. This is at least part of the explanation why public discourse on these matters is so impoverished and sub-rational. So I think it's very important for more people to appreciate that we can go beyond mere instrumental rationality and also assess our ultimate ends in terms of rational coherence.

Scientism also leads to more mundane mistakes. For example, in a recent 'Overcoming Bias' thread, one commenter defended the common-sense view that different observers experience the same colour qualia (rather than my 'red' being your 'yellow'), on the grounds that the alternative claim is "purely metaphysical with no implications for reality". But that can't be the right reason, because the same could be said of the eminently reasonable -- and presumably true -- view that he was defending. Whether we experience the same qualia or different ones, either answer is "purely metaphysical" with no scientific implications. So the right justification for the former view must lie elsewhere (e.g. in philosophical principles of parsimony that count against drawing unmotivated ad hoc distinctions).

Fortunately, this bias is easily overcome. Accept no substitutes: Think!

[See also: The Problem with Non-Philosophers.]

Monday, March 24, 2008

Arguing with Eliezer: Part II

[My promised concluding thoughts...]

Clearly our disagreements run too deep to do full justice to them in a mere blog post. But I at least hope I've succeeded in indicating where one might reasonably depart from Eliezer's reductionist line. There are also a couple of ad hominem points which struck me as noteworthy. (See my previous post for real arguments; this is mere commentary.)

One is that our beliefs are shaped in reaction to others. Intelligent non-philosophers typically only come across stupid, woolly-headed non-reductionists. The most prominent public intellectuals are typically scientists of a reductionist bent, like Dawkins, whose most prominent opposition is from anti-intellectual rubes and intellectually bankrupt religious apologists. From a purely sociological perspective, it's no surprise that intelligent people might initially be drawn to the former camp. (I know I was.) But that's no substitute for assessing the strongest arguments -- the ones you've probably never even come across unless you've spent a few years doing academic philosophy, or associate with others who have -- on their merits.

Since Eliezer's posts are mostly directed at a general audience - most of whom have not carefully reflected on their beliefs - I agree with 99% of his criticisms. Folks often commit the mind-projection fallacy, are fooled by an empty dispute that 'feels' substantive, and can be irrationally resistant to perfectly legitimate scientific reductions. These are all important insights (though hardly news to philosophers). But he overgeneralizes -- just as to a man with a hammer, everything looks like a nail.

I think the core problem here is methodological. Eliezer assumes that a debunking explanation of a belief is enough to refute it. Rather than doing the hard work of philosophy -- assessing the arguments for and against P -- he shifts to cognitive science, explaining why I might offer such arguments even if P is false. But this is to commit the genetic fallacy. Any reflective non-reductionist will grant that you can explain all the physical facts (incl. their brain states and vocalizations) without reference to any non-physical facts. Of course. But that doesn't imply anything about whether their belief is justified. Explanation and justification are two completely different things.

Reductionists make this error because they assume that all that stands in need of explanation is the third-personal data of science. Hence (they assume), if you can explain all the empirical data - including the vocalizations of your critics - then there's nothing left for said critics to base their arguments on. This type of genetic fallacy is no fallacy, on this view, because a full empirical explanation exhausts all possible justification.

But this is clearly question-begging, or worse. It assumes an indefensible scientism from the start. Non-reductionists take it as given that there is more than just third-personal empirical data that calls out for explanation. There is the manifest fact of first-personal conscious experience, and the normative facts about what we ought to believe and do, etc. A debunking explanation of why we believe in these phenomena is not sufficient unless one has also successfully debunked the phenomena themselves. But that requires actually engaging with non-reductionists and the arguments we offer, rather than simply psychologizing us.

Reductionists, when short on real arguments, like to appeal to meta-arguments, e.g. induction on the historical successes of science. 'There have always been nay-sayers, who questioned the ability of science to explain phenomenon X, and every time they've been proven wrong!' It's a familiar sentiment. But it's also pretty weak. If you bother to look more closely, there are principled reasons to think these cases different. All those examples they point to are instances of third-personal empirical phenomena. I grant that science is supreme in that domain. But, to turn the tables, it's never had any success outside of it. So there's no general reason to think that normativity or first-personal subjective experience are susceptible to purely scientific explanation. So, again, these simplistic meta-arguments are no substitute for the real thing.

Arguing with Eliezer: Part I

I had the pleasure yesterday of meeting Carl Shulman and Eliezer Yudkowsky (of Overcoming Bias fame) while they were in town, and hashing out some of our philosophical disagreements. It was interesting, because they're both very smart, and Eliezer's starkly materialist/reductionist ideology was shared by my past self. So I'm not entirely unsympathetic. But it was also frustrating in some respects, since he seemed to assume that any disagreement was simply due to a failure to appreciate his basic arguments, rather than a considered judgment that they aren't wholly compelling. So let me discuss a couple of issues in more detail, and attempt to lay out some of the reasons why I've shifted away from his blanket reductionism over the years.

(A) Fundamental Normativity. Eliezer holds that normative terms (e.g. 'should') are reducible to a particular framework of assessment -- roughly, the ultimate norms endorsed by the speaker. He calls this 'objective subjectivism', and it bears some similarity to the 'Objective Moral Relativism' I endorsed back in 2005.

I now find this unsatisfying, for several reasons. (1) The most obvious is that there's nothing really normative here, in the sense of an ideal that potentially outstrips any purely descriptive facts (incl. my current preferences and accepted norms). Though Eliezer wouldn't like to admit it, this is less a reduction than an elimination. Anti-realist maneuvers can save many of the appearances of normative practice, but its deepest aspirations are ultimately rejected. (2) His view implies that many normative disagreements are simply terminological; different people mean different things by the term 'ought', so they're simply talking past each other. This is a popular stance to take, especially among non-philosophers, but it is terribly superficial. See my 'Is Normativity Just Semantics?' for more detail. (3) We can go beyond the impoverished instrumental conception of rationality on which this view depends. Ultimate ends may themselves be assessed as more or less irrational. (I first realized this here.)

(B) Fundamental Mentality. My post on 'Dualist Explanations' outlines the case for property dualism, and defuses typical worries of the scientifically minded. Now, Eliezer seems to think that the causal inefficacy of non-physical phenomenal properties ("irreducible consciousness") is a knock-down argument against them. I once agreed, but again, have since changed my mind. My post, 'Why do you think you're conscious?' addresses this challenge in some detail.

There are some bullets to bite either way. I admit it's a bit odd to think that the words I type are not causally related to the facts I purport to describe. (That's an extreme way of putting it; do follow my above link to put this in perspective.) But, upon reflection, I find this commitment less absurd than denying the manifest reality of first-personal conscious experience (as reductive materialists like Dennett and Eliezer do), or engaging in the metaphysical contortions that non-reductive materialists must (see my 'dualist explanations' post).

(C) Epistemology. Eliezer writes:
When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

I responded:
It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence". For one thing, no amount of mere observation will suffice to bring us to a conclusion, as Lewis Carroll's tortoise taught us. Further, it mistakes content and vehicle. When I judge that p, and subsequently infer q, the basis for my inference is simply p - the proposition itself - and not the psychological fact that I judge that p. I could infer some things from the latter fact too, of course, but that's a very different matter.

In discussion, Eliezer emphasized the demands of (what I call) 'meta-coherence' between our first-order and higher-order beliefs. If you reason from p to q, but further believe that your reasoning in this instance was faulty or unreliable, then this should undermine your belief in q. I agree that reasoning presupposes that one's thought processes are reliable, and a subjectively convincing line of thought may be undermined by showing that the thinker was rationally incapacitated at the time (due to a deceptive drug, say). But presuppositions are not premises. So it simply doesn't follow that the following are equally good arguments:
(1) P, therefore Q
(2) If I were to think about it, I would conclude that Q. Therefore Q.

(Related issues are raised in my post on 'Meta-Evidence'. See also my argument for the inescapability of a priori justification.)

Concluding Remarks. Oops, this is too long already -- I've shifted my concluding thoughts to a new post.

Saturday, February 09, 2008

SEP on Evo Psyc

The latest addition to the Stanford Encyclopedia of Philosophy may be of general interest: "There is a broad consensus among philosophers of science that evolutionary psychology is a deeply flawed enterprise..."

Tuesday, January 01, 2008

Edge Question '08: Changing Minds

This year's question from Edge to various scientists is: 'What have you changed your mind about? Why?' I haven't read all the answers, but here are a few that stood out (some for being intrinsically interesting, and others which call out for philosophical attention).

1. Geoffrey Miller has become more optimistic about uncovering human nature, thanks to the realization that there are plenty of 'local experts' already in every walk of life:
Almost all of them know important things about human nature that behavioural scientists have not yet described, much less understood. Marine drill sergeants know a lot about aggression and dominance. Master chess players know a lot about if-then reasoning. Prostitutes know a lot about male sexual psychology. School teachers know a lot about child development. Trial lawyers know a lot about social influence. The dark continent of human nature is already richly populated with autochthonous tribes, but we scientists don't bother to talk to these experts.

2. Helena Cronin explains why she's come to see sex differences as better explained by differences in variance than in averages. (Now a familiar, if still underappreciated, theme.)

3. Jonathan Haidt defends sports and fraternities:
By the time I became a professor I had developed the contempt that I think is widespread in academe for any institution that brings young men together to do groupish things. Primitive tribalism, I thought. Initiation rites, alcohol, sports, sexism, and baseball caps turn decent boys into knuckleheads. I'd have gladly voted to ban fraternities, ROTC, and most sports teams from my university.

But not anymore. Three books convinced me that I had misunderstood such institutions because I had too individualistic a view of human nature.

Haidt sees modern Westerners as "bees without hives", lost in "a world so free that it [leaves] many of us gasping for connection, purpose, and meaning." But can't we find a way to forge meaningful connections without degrading into a knuckleheaded mob? Primitive tribalism might make us happy, or satisfy some urges of human nature, but I don't see that this makes it any less contemptible. (Not that I support "banning" anything, of course. If people want to degrade themselves, that's their prerogative. But I sure wouldn't want to encourage these cultural elements.)

4. Sherry Turkle has grown increasingly suspicious of society's love affair with technology:
A female graduate student came up to me after a lecture and told me that she would gladly trade in her boyfriend for a sophisticated humanoid robot as long as the robot could produce what she called "caring behavior." She told me that "she needed the feeling of civility in the house and I don't want to be alone." She said: "If the robot could provide a civil environment, I would be happy to help produce the illusion that there is somebody really with me." What she was looking for, she told me, was a "no-risk relationship" that would stave off loneliness; a responsive robot, even if it was just exhibiting scripted behavior, seemed better to her than a demanding boyfriend.

Isn't that what pets are for? What new concerns do robotic "friends" raise? (Is it just that the illusion of love may be more compelling, the pretense so satisfying a substitute that one is less motivated to pursue the real thing?)

5. Simon Baron-Cohen has a trite piece on "equality", or rather, the fact that people are not all the same. I can't discern any point to the article once we distinguish between qualitative (descriptive) vs. moral (normative) equality.

6. Finally, Thomas Metzinger offers his thoughts on moral philosophy. First, we have (what looks like) the gratuitous assumption of hedonism:
shouldn’t we have a new ethics of consciousness — one that does not ask what a good action is, but that goes directly to the heart of the matter, asks what we want to do with all this new knowledge and what the moral value of states of subjective experience is?

Why "states of subjective experience", rather than "states of affairs" more generally? (There's nothing new about getting to "the heart of the matter" in this way, of course; it's called 'value theory', and something consequentialists and others have been interested in for some time now!) Maybe Metzinger is making the more modest point that a growing ability to manipulate X-states provides us with reason to work out how to evaluate the various X-states. But that is not such a revolutionary suggestion.

And then the nihilism:
Here is where I have changed my mind. There are no moral facts. Moral sentences have no truth-values. The world itself is silent, it just doesn’t speak to us in normative affairs — nothing in the physical universe tells us what makes an action a good action or a specific brain-state a desirable one. Sure, we all would like to know what a good neurophenomenological configuration really is, and how we should optimize our conscious minds in the future. But it looks like, in a more rigorous and serious sense, there is just no ethical knowledge to be had. We are alone. And if that is true, all we have to go by are the contingent moral intuitions evolution has hard-wired into our emotional self-model.

This is a whopping non-sequitur. "We are alone", therefore no choices are more or less reasonable or worthy than any others. Huh. That's not any kind of logic I'm familiar with. (Though I suppose nihilists can't believe in good reasoning anyway, so maybe he's more consistent than I give him credit for.)

Seriously, though, why would you ever look to "the physical universe" for normative insight in the first place? Obviously that's not going to work. If there are moral truths at all, this will be due to the nature of rationality, not the nature of the world. And Metzinger hasn't offered any reasons at all for doubting that some ethical systems are appreciably more reasonable or coherent than others. He just asserts, without argument, that "all we have to go by are the contingent moral intuitions evolution has hard-wired into" us. Shoddy thinking.

Sunday, November 18, 2007

Expecting Immortality

"The experience of being dead should never be expected to any degree at all, because there is no such experience." So writes David Lewis in 'How Many Lives Has Schrodinger's Cat?' (p.17) But perhaps we can have negative expectations, in the sense of not positively expecting any further experiences.

What if I split, amoeba-style, into two future persons, one of whom is promptly shot? In general, I should split my expectations evenly between my future branches. Perhaps I should anticipate the experiences of each branch, separately. (I do not, of course, experience both lives together, in any mutually-accessible kind of way.) If one branch has no experiences at all - being killed before ever awakening, say - then all there is to anticipate is the other, surviving branch. But if it gets to experience a little before dying, then it would seem one allotment of my expectations should capture precisely this: to experience seeing the bullet approaching (or whatever), and then no more.

This is important because if the "no collapse" (or many-worlds) interpretation of Quantum Mechanics is correct, then we are actually splitting all the time. If you shoot me, there exists some (extremely low intensity) state in the superposition in which the bullet passes right through me. However often I die, there is always some version of me that survives. Should I expect, then, to be immortal? Lewis thinks so (p.19):
Suppose you're fairly sure that there are no collapses, and you're willing to run a risk in the service of truth. Go and wander about on a busy road, preferably a few minutes after closing time. When and if you find yourself still alive, you will have excellent evidence [for the no-collapse view]. If that's not yet enough to convince you, try the experiment a few more times.

What should you expect to happen?
(1) You miraculously emerge unscathed (perhaps the cars pass right through you by some quantum fluke).
(2) You get hit by a car, and then have no further experiences (due to being dead).
(3) You get hit by a car, but then miraculously survive (perhaps your crushed body reassembles itself by some quantum fluke).

On the no collapse view, all these outcomes occur, but (presumably) the second does so with much higher frequency or "intensity". Still, even if the overwhelming majority of superimposed states involve your getting hit by a car, you will always have some branch that survives this. So there are always further experiences to anticipate. Does this mean we should never anticipate ending up on a dead branch, i.e. one that has no further experiences?

It's not quite enough to ask whether we should expect to be immortal. On the no-collapse view, we are undoubtedly immortal, in the sense that we will always have some surviving branch. But we also die a lot, in that we have a great many terminating branches! So the real question is whether we should ever expect to die. Can our expectations be distributed over branches that contain no further experiences? Or are they restricted to the survivors only?
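To make the two options concrete, here is a minimal sketch of the rival anticipation rules. The branch labels and weights are invented purely for illustration; nothing hangs on the exact numbers:

```python
# Two candidate rules for distributing expectations over future branches.
# Branch weights are invented stand-ins for quantum "intensities".

branches = {
    "hit by a car, no further experiences": 0.9998,  # terminating branch
    "cars pass right through me":           0.0001,
    "hit, but miraculously reassembled":    0.0001,
}

total = sum(branches.values())

# Rule 1: expectations range over ALL branches, terminating ones included.
rule1 = {b: w / total for b, w in branches.items()}

# Rule 2: expectations are renormalized over surviving branches only.
survivors = {b: w for b, w in branches.items()
             if "no further experiences" not in b}
rule2 = {b: w / sum(survivors.values()) for b, w in survivors.items()}

print(rule1)  # Rule 1: almost certainly anticipate death.
print(rule2)  # Rule 2: anticipate survival with certainty, however
              # low the "intensity" of the surviving branches.
```

On Rule 1 you should walk onto the busy road expecting (almost certainly) to die; on Rule 2, Lewis's expectation of immortality follows. The question above is which rule, if either, is the rational one.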

Sunday, September 30, 2007

Measuring Time

Is time immanent in the world - reducible, perhaps, to the ticking of an atomic clock - or transcendent, somehow beyond the physical universe? (One of my old Canterbury lecturers gave a great talk on this a couple of years back.) We seem pressed towards a kind of middle ground. No mere clock can be the ultimate standard of time, for a clock may slow down, and that does not mean that the rest of the universe has sped up! No, we take them to be measuring something beyond themselves. The same will be true of any local standard (e.g. the movement of the sun).

Markosian (1993) suggests:
[The change in the sun's position] is also meant to be a stand-in for a more important change, namely, the pure passage of time. Indeed, it seems that our assumption that the sun's position changes at a constant rate amounts to the assumption that the sun's position changes at the rate of fifteen degrees per hour, i.e., that every time the sun moves fifteen degrees across the sky, one hour of pure time passes. So it at least appears that what we are after in trying to determine the rates of various physical processes, such as Bikila's running of the marathon, are the rates at which those processes occur in comparison to the rate of the pure passage of time. (pp.840-1)

I hope we can come up with a better account of this appearance, since "the rate of the pure passage of time" is gibberish. But why should we interpret "fifteen degrees per hour" as relating two changes (the sun through space vs. the present through time)? It seems on the face of it to just be reporting a single change, i.e., that the sun moves fifteen degrees across the sky in the space of one hour. The hour doesn't have to move. Just the sun.

Perhaps the worry is that if time doesn't pass, then the standard of an 'hour' must be defined in terms of immanent physical changes (like the sun's movement, or a clock's ticking). But all measurement is like this. A clock is to time as a ruler is to space. Nobody takes this to mean that we need an objective 'here', extending over space at a rate of one meter per meter, to tell us how long a meter really is in case all our rulers suddenly shrink. Yet Markosian writes (p.841):
suppose that the pure passage of time thesis is false... if it should turn out one day that the motion of the sun in the sky appears to speed up drastically relative to other changes, then we should say, not that the motion of the sun has sped up drastically relative to the pure passage of time, while every other change has maintained its rate, but, rather, simply that the sun's motion has sped up relative to the other normal change.

Why can't we say that the sun has sped up drastically, not relative to any other rate, but just simpliciter? It is moving a greater distance in space for the same interval of time. Simple.

It seems like the real issue here is substantival vs. relational conceptions of space-time. If space-time is like a container, an objective thing in its own right, then universal shrinkage - or slowing - of its contents might be a coherent possibility (even if we couldn't recognize such an event from the inside). If they're merely relational, on the other hand, and so fundamentally about relative proportions, then the idea of all distances or durations universally increasing may make no sense, since to double each component is to leave the ratio the same. (Note that while this is a curious issue in its own right, it's nothing to do with the passage of time.)

In any case, if immanent relations are all that we have access to, we may wonder whether substantive, transcendent space-time could really matter. So it is worth seeking a plausible immanentist theory. We noted at the start that no local standard would do. But perhaps a global generalization would serve better. Plausibly, we seek a frame of reference that yields the greatest amount of stability in our general region. Relative to my heartbeat, the world is in a crazy flux. But my clock, and the sun's movement, and a whole cluster of other natural processes, can be interpreted as each holding a constant rate relative to each other. So we take this general cluster as our standard of time. Any one component may become out of sync with the rest, in which case we will judge it to have changed its pace. The stability of the cluster thus transcends each of its parts (considered individually), whilst remaining wholly immanent. That strikes me as providing as good a basis for measurement as one could hope for.
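As a toy illustration of the cluster idea, here is a minimal sketch (all the tick counts are invented). Each process's rate is compared against every other process's; the one whose relative rates are least stable is the one we judge to have changed pace:

```python
import numpy as np

# Rows: observation intervals; columns: tick counts of various processes.
ticks = np.array([
    # clock(s)  sun(deg)  caesium(cycles)  heartbeat(beats)
    [  3600,      15,        9.19e12,          4000 ],
    [  3600,      15,        9.19e12,          5200 ],  # heart races;
    [  3600,      15,        9.19e12,          3100 ],  # others steady
])
names = ["clock", "sun", "caesium", "heartbeat"]

# Score each process by the variability of its rate relative to the rest.
for i, name in enumerate(names):
    ratios = ticks[:, i:i+1] / np.delete(ticks, i, axis=1)
    variability = (ratios.std(axis=0) / ratios.mean(axis=0)).mean()
    print(f"{name:10s} relative-rate variability: {variability:.3f}")

# The heartbeat scores highest: it is out of sync with the cluster, so we
# judge IT to have changed pace, not the rest of the universe.
```

The standard of time, on this picture, just is whatever maximally stable cluster this sort of comparison picks out.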

Wednesday, August 22, 2007

Gender as Cultural Specialization

Wow. I highly recommend Roy Baumeister's fascinating article, Is There Anything Good About Men? (Thanks, Luke, for the NYT link.) It has a relatively balanced and apolitical tone, but I wouldn't be surprised if it comes to be seen as the definitive rebuttal to ideological feminism:
[T]his is not about the “battle of the sexes,” and in fact I think one unfortunate legacy of feminism has been the idea that men and women are basically enemies. I shall suggest, instead, that most often men and women have been partners, supporting each other rather than exploiting or manipulating each other.

A stock data point for inferring "patriarchal oppression" is that men are disproportionately successful. The stock response, of course, is that men are also disproportionately failures. For basic evolutionary reasons (namely, greater variance in male reproductive potential), the Y chromosome is a gambler. Nothing new here, though Baumeister provides a droll summary:
[T]he pattern with mental retardation is the same as with genius, namely that as you go from mild to medium to extreme, the preponderance of males gets bigger. All those retarded boys are not the handiwork of patriarchy. Men are not conspiring together to make each other’s sons mentally retarded.
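The statistical point is easy to verify. In this sketch (the distributions and parameters are invented for illustration), two normal distributions share a mean but differ slightly in variance; the ratio between their tails grows without bound as the cutoff becomes more extreme:

```python
from math import erf, sqrt

def tail_above(x, mean=0.0, sd=1.0):
    """P(X > x) for a normal distribution with the given mean and sd."""
    return 0.5 * (1 - erf((x - mean) / (sd * sqrt(2))))

# Same mean, but one group hypothetically 10% more variable.
for cutoff in [1, 2, 3, 4]:  # standard deviations above the mean
    m = tail_above(cutoff, sd=1.1)  # more variable group
    f = tail_above(cutoff, sd=1.0)  # less variable group
    print(f"cutoff {cutoff}: ratio in upper tail = {m / f:.1f}")

# The preponderance of the high-variance group grows as the cutoff gets
# more extreme -- and by symmetry, the same holds in the lower tail.
```

A tiny difference in variance thus suffices to produce exactly the pattern Baumeister describes at both extremes, with no difference in averages at all.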

One important point he doesn't address is the empirical evidence of actual gender bias: identical papers are judged to be less brilliant if the author appears to be female. Orchestras using blind auditions (and hence judging solely on sound quality) hire more women than they otherwise would. And so on. But here's the key: this looks like anti-female bias because we're only looking at the top end. To know for sure, we'd also need to test judgments for bias at the bottom end. And I expect that what we'd find is exactly the opposite: terrible performers are judged to be even more useless if they are men. Judgments of women are biased towards the mean; judgments of men tend to be more polarized. There is no straightforward sense in which this makes men as a whole "privileged". They are both winners and losers; it's a trade-off, as Baumeister emphasizes throughout his article. (That's not to say that we shouldn't try to counteract the bias. I think we should - in both directions. But it's misleading to paint it as simple "patriarchal oppression.")

An interesting issue he does address is sex differences in social motivation as a key explanatory factor:
Women specialize in the narrow sphere of intimate relationships... Meanwhile the men favored the larger networks of shallower relationships. These are less satisfying and nurturing and so forth, but they do form a more fertile basis for the emergence of culture.

Note that all those things I listed — literature, art, science, etc — are optional. Women were doing what was vital for the survival of the species. Without intimate care and nurturance, children won’t survive, and the group will die out. Women contributed the necessities of life. Men’s contributions were more optional, luxuries perhaps. But culture is a powerful engine of making life better. Across many generations, culture can create large amounts of wealth, knowledge, and power. Culture did this — but mainly in the men’s sphere.

Thus, the reason for the emergence of gender inequality may have little to do with men pushing women down in some dubious patriarchal conspiracy. Rather, it came from the fact that wealth, knowledge, and power were created in the men’s sphere. This is what pushed the men’s sphere ahead. Not oppression.

The really interesting part of the article is how it extends the traditional evolutionary argument from biology to culture: "The group systems that used their men and women most effectively would enable their groups to outperform their rivals and enemies." A society exploits its individual members, but not necessarily all in the same way. Gender is thus seen as a kind of cultural specialization, a way to assign members to different tasks. Building on the biological dispositions, a culture gambles with its males. They do the most dangerous work, and are treated as most expendable. Again, this reveals how myopic it is to view gender through the lens of 'patriarchy', as Baumeister notes:
Any man who reads the newspapers will encounter the phrase “even women and children” a couple times a month, usually about being killed. The literal meaning of this phrase is that men’s lives have less value than other people’s lives. The idea is usually “It’s bad if people are killed, but it’s especially bad if women and children are killed.” And I think most men know that in an emergency, if there are women and children present, he will be expected to lay down his life without argument or complaint so that the others can survive. On the Titanic, the richest men had a lower survival rate (34%) than the poorest women (46%) (though that’s not how it looked in the movie). That in itself is remarkable. The rich, powerful, and successful men, the movers and shakers, supposedly the ones that the culture is all set up to favor — in a pinch, their lives were valued less than those of women with hardly any money or power or status. The too-few seats in the lifeboats went to the women who weren’t even ladies, instead of to those patriarchs.

Baumeister's central methodological advance is to explain the social construction of gender in terms of how it benefits the culture, rather than just how it benefits the men. He concludes:
What seems to have worked best for cultures is to play off the men against each other, competing for respect and other rewards that end up distributed very unequally. Men have to prove themselves by producing things the society values. They have to prevail over rivals and enemies in cultural competitions, which is probably why they aren’t as lovable as women.

The essence of how culture uses men depends on a basic social insecurity. This insecurity is in fact social, existential, and biological. Built into the male role is the danger of not being good enough to be accepted and respected and even the danger of not being able to do well enough to create offspring.

The basic social insecurity of manhood is stressful for the men, and it is hardly surprising that so many men crack up or do evil or heroic things or die younger than women. But that insecurity is useful and productive for the culture, the system.

Again, I’m not saying it’s right, or fair, or proper. But it has worked. The cultures that have succeeded have used this formula, and that is one reason that they have succeeded instead of their rivals.

Enough with the sociology. Supposing that those are the facts, how are we to evaluate them? Should we endorse the way that cultural evolution has shaped our gender norms, or try to overcome them? I lean towards the latter, but we will only succeed in this if we first recognize that there are two sides to every gender difference.

We recently discussed how women are able to use their sexuality to get attention, which is tied to the disadvantage of sexual harassment. For another example, Infinite Injury discusses how the norms against female assertiveness are tied to norms which shield women from criticism. Thus he notes, "you can’t possibly hope to have combative conduct by women parsed the same way as combative conduct by men if men are supposed to pull their punches with women." Gender norms are double-edged, and do not simply advantage one gender to the detriment of the other. Once this fact is appreciated, we can ask the normative question: should we seek to rebalance in both directions, or neither?

Saturday, August 18, 2007

Scientism

Many otherwise-intelligent people have an unfortunate tendency to dismiss entire realms of inquiry out of hand. Perhaps the most common example of this is the failure to appreciate the possibility of a priori or non-scientific rational inquiry, i.e. philosophy. The prevalence of ignorant scientism in this thread (bashing Nick Bostrom's simulation argument) is remarkable -- though sadly not atypical.

One commenter suggests that an untestable hypothesis must consequently be classified as either 'myth' or 'garbage'. (He did not tell us how to test this very suggestion. I can only assume he was storytelling.) Another calls Bostrom's argument "pseudoscience gibberish". Yet another chimes in:
This is very much like saying the earth might really be only 3000 years old and $DEVIL just made it seem like its much older to fool everyone.

IOW, it is all hocus pocus claptrap what ifs and doesn’t belong in any science discussion.

The blogger (Peter Woit) himself writes:
I don’t see what the problem is with “lumping Bostrom’s ideas in with religion”. They’re not science and have similar characteristics: grandiose speculation about the nature of the universe which some people enjoy discussing for one reason or another, but that is inherently untestable, and completely divorced from the actual very interesting things that we have learned about the universe through the scientific method.

Really, if people can't tell the difference between a reasoned philosophical argument and random "hocus pocus" or religious proposals... well, let's just say it's further evidence of the urgent need for philosophical education in schools!

If you think that Bostrom's argument is flawed, then by all means put on your philosopher's hat and expose its errors. But this requires actually engaging with the argument. To dismiss it just because it didn't involve any labwork is the worst kind of scientism.

I should add a disclaimer. Sometimes people attack "scientism" when their real target is epistemic standards in general. (See the comments here, for example.) Not me. I'm all in favour of having rationally justified beliefs. What I'm attacking here is the lazy assumption that science is the only source of rational justification. This assumption is simply false (and indeed self-defeating). This should be too obvious for words, but apparently it needs to be said: rigorous philosophical argumentation can also provide rational support for a conclusion.

Hat-tip: Robin Hanson (who offers some incisive criticism of his own).

See also: Explaining Beliefs. (It's the same core issue, really: dogmatic dismissal is no replacement for reasoned inquiry. You can't tell whether a question is answerable until you try.)

Framing Altruism

Benoit Hardy-Vallée notes that adding the sentence “Note that your opponent relies on you” increases altruistic behaviour in the Dictator Game. He concludes:
What is surprising is not that subjects are sensible to certain moral-social cues, but that such a simple cue (7 words) is sufficient. The more we know about each other, the less selfish we are.

This particular experiment doesn't really seem to have anything to do with increased knowledge, though. It's more a matter of framing: we reduce selfishness by cuing the 'responsibility' schema, so that giving is seen in a more positive light. I expect just two words would in fact suffice: "Stinginess test". Or, in the opposite direction, "Dupe test" -- I bet that title would reduce altruism dramatically.

Wednesday, August 08, 2007

Darwinian Blinkers

Oh dear. Via Robin Hanson, a "moral puzzle":
Consider two men, A and B. Man A steals food because he’s starving to death, while Man B commits a rape because no woman will agree to have sex with him.

From a Darwinian perspective, the two cases seem exactly analogous. In both we have a man on the brink of genetic oblivion, who commandeers something that isn’t his in order to give his genes a chance of survival. And yet the two men strike just about everyone — including me — as inhabiting completely different moral universes. The first man earns only our pity. We ask: what was wrong with the society this poor fellow inhabited, such that he had no choice but to steal? The second man earns our withering contempt.

Befuddled by his genes-eye view, Scott asks: "can any of you pinpoint the difference between the two cases, that underlies our diametrically opposite moral intuitions?" Of the 80-odd responses, only two or three struck on the answer (though no-one listened): try looking at it from a human perspective.

Many noted the obvious point that rape generally inflicts far greater harm than stealing a loaf of bread. But this is an inessential point, as Robin notes: "it might help to imagine a society where the person who lost the food was also in some, though less, danger of starving. But even then food and sex seem to be treated differently."

A related reason is that - consequences aside - they're actually very different kinds of acts. It's misleading to describe both merely as instances of "commandeer[ing] something that isn’t his", because very different kinds of 'ownership' are being violated. Our intuitions reflect the fact that material property rights are - in a sense - "socially constructed", and if not done right they may fail to yield genuine (reasonable) obligations. In any case, there's no question that the actual distribution of material wealth in the world is historically contingent. A person's self-ownership, by contrast, is a more essential matter. Rape is not just "theft of a body", but a deeply personal violation.

But the central mistake, I'd suggest, is to think that there's any relevant similarity between the motivations for either act. A person does not really act "in order to give his genes a chance of survival." This simply illustrates the all-too-common confusion of biological and psychological teleology. What matters for moral assessment are the real psychological motives of people, not the metaphorical "motives" we attribute to their genes.

From a person's perspective, then, the "analogy" is a non-starter. The starving man needs to eat in order to survive -- a likely precondition for realizing any of his other values. The vital importance of this is beyond question. The second man's "need" for sex is hardly comparable. (It's perfectly possible for the celibate to still lead worthwhile lives.) So, only one of them has a genuine need that could reasonably justify imposing such burdens on others.

It's worth emphasizing that genetic 'goals' don't really have any moral significance, as ethics is instead concerned with the welfare of persons (psychological beings). I'm amazed by how easily evolutionary psychology can lead otherwise intelligent people to lose sight of this basic fact.

But, this particular pseudo-puzzle aside, I do think Robin is right to note that "our concern about inequality is not very general": we focus almost entirely on material inequality, even though non-financial factors arguably have a greater impact on welfare once our basic needs have been met. Should we also be concerned about the distribution of popularity, status, attractiveness, charisma, etc.? How about discrimination due to eccentricity, social awkwardness, or simple introversion? (There's no denying it's an extrovert's world!) It's harder to imagine how to address these matters, I suppose...

Tuesday, July 17, 2007

Excessive Charity

This is funny. Three-year-old children assume that their knowledge is shared by everyone else (including God). Apparently, this has "led some researchers to conclude that children start out with an understanding of what a god-like, all-knowing perspective is like, and that for several years they mistakenly apply this to other people."

New research suggests that they also assume that everyone else (again, including God) shares their ignorance. I guess it's finally safe to conclude that children start out with an understanding of what their own perspective is like, and that for several years they mistakenly apply this to other people...

Wednesday, July 04, 2007

Indeterminate Identity and Abortion

In response to Edelman's claim that embryonic brain development "involves a dimension of randomness," Mark Vernon writes:
It is the suggestion that not everything is present in the zygote and that external forces subsequently act on the foetus to eventually create human individuality that leads to the conclusion that human life does not begin at conception or kick in at any one time.

I'm not convinced that Edelman's new biological theory adds anything new to the ethical debate. I mean, it's not exactly news that genetic determinism is false. And everyone else recognizes that "not everything is present in the zygote" -- Edelman's "stochastic processes" aside, how we develop as individuals will at least depend upon the cultural environment we're raised in, etc.

Perhaps the thought is that the potential variance here is so great that we're not talking about a single person developing in different ways; rather, the differences are so vast that they would amount to distinct people. That is, it's not just indeterminate how the fetus will develop; it's indeterminate who the fetus will grow to be. And, we might think, if the fetus' future identity is indeterminate, it cannot presently be identical to either person, and so presumably isn't a person at all.

But again, it's not clear how this argument depends on Edelman's theory, given that most of us already believe that a newborn has the potential to develop in very varied ways (within genetic constraints, just as Edelman grants).

In either case, it remains open to the pro-lifer to deny that even vast psychological differences entail that the possible future persons are numerically distinct. (They are simply alternative future states that the one person could grow into.) After all, the most coherent pro-lifers will be animalists about personal identity, in the sense that they identify human persons with human organisms -- and there's no real question that the latter come into existence at conception.

Am I missing something?

Friday, June 29, 2007

Dualist Explanations

Peter writes:
Chalmers posits that the non-physical mental properties parallel the information processing properties of the system. But if they parallel them perfectly, and thus explain the mind, why not just identify them? ... the dualist explanation posits something more than the materialist version of the same theory does: it must posit additional laws governing a new domain of mental stuff that makes it behave in this way and stick to the right sort of physical systems.

On the other hand, the (type B) materialist theory posits ad hoc 'strong necessities', which we have no independent reason to believe in. (Kripke's "necessary a posteriori" is no help.)

Consider the question: why aren't we non-conscious zombies, mere hunks of matter that exhibit complex behaviour without any "lights" on inside? The materialist answers that zombies are impossible; that consciousness is nothing above and beyond the complex arrangements of matter that our bodies (brains) comprise. But this strikes me as an unsatisfying explanation, that doesn't really do justice to the phenomena.

The dualist can do better. She may acknowledge the depth of the problem -- that consciousness is something new, something that goes beyond merely material properties. She can also acknowledge the modal fact that zombies (non-conscious physical duplicates of ourselves) are possible. So, rather than merely rebuffing the question "why aren't we zombies?" as empty or ill-formed, the dualist takes it seriously, and offers an answer:

The reason we're not zombies is because of the contingent natural laws that govern our universe. There are psycho-physical bridging laws, which ensure that matter gives rise to consciousness. (Note how intuitive this claim is: we think that consciousness emerges from the brain; not that it just is the brain!) The zombie world has no such bridging laws. Its laws are merely physical, so that brains and other matter causally interact without giving rise to genuine consciousness in addition. That's the difference.

Materialists can't explain this difference, because they don't take the zombie intuition seriously. Once the brain matter is there, they think that's all there is to consciousness -- there's nothing further to explain. Most of us think there is something still to be explained, and dualism can achieve this by positing bridging laws that cause 'mind' to emerge from 'matter'.

Even dualists can agree that in our world (i.e. given the actual laws of nature) complex brain states suffice for consciousness. The bridging laws make zombies nomologically impossible. And that's all science is concerned with. As philosophers, though, we're interested in a broader sense of possibility, in which we can't just take the natural laws for granted. So, once our familiar psycho-physical bridging laws are taken away, we should ask: does matter alone suffice for consciousness? The zombie world demonstrates that the answer is no. Take away the bridging laws, and our physical stuff might no longer give rise to any conscious experiences.

In summary, it's worth emphasizing three points:

(1) Materialism - perhaps surprisingly - turns out to be theoretically extravagant, due to its modal ambitions. It posits 'strong necessities', which we have no independent reason to grant, and which indeed go against everything else we know in philosophy. Dualism is thus the more philosophically modest theory.

(2) Additional laws are worth positing, to explain why we're conscious rather than zombies. (The unsatisfying alternative is to merely dismiss the question.)

(3) Contrary to popular belief, dualism need not be in tension with science. It only diverges from materialism in its extra-nomological implications -- i.e. matters that concern philosophers, not scientists.

Monday, April 23, 2007

Does Philosophy Need Science?

I reckon not. (Well, perhaps in practice, e.g. as an imaginative aid, but not in principle.) Whenever you're tempted to appeal to empirical data, simply conditionalize it out and you can safely carry on philosophizing in the a priori realm of possibilities.

Of course, we may be most interested in actual-world problems, e.g. interpreting modern physics, addressing salient ethical and political issues, etc. But there doesn't seem to be any reason why they couldn't in principle be addressed just as well from an empirically neutral position which entertained our actual situation as a merely hypothetical scenario. Indeed, given sufficient imaginative and rational powers, the armchair philosopher (or even the disembodied, floating-in-the-void philosopher) should be capable of achieving a kind of "limited omniscience", knowing everything there is to know about the various possibilities (except for which one happens to be actual -- but never mind that one little fact).

It might be objected that science brings to light new possibilities that would otherwise seem inconceivable -- e.g. space-time relativity. But this is merely to note that experience is a useful imaginative aid; it plays no necessary role in the actual justification of our philosophical beliefs. Einstein's theory need not be borne out by the empirical data; his conceptual scheme alone is enough to show how space and time could turn out not to be absolute. (Unless there's really a hidden contradiction in there, in which case ideal rational reflection should suffice to expose the impossibility.)

Am I wrong? (And do you have to conduct an experiment before you can tell?)
P.S. Thanks to Jack for getting me thinking about this topic.

Saturday, August 19, 2006

The Anthropic Principle

Dom Eggert writes:
[T]he likelihood that life will arise in this universe does not change even if there are millions of non-life-supporting universes "out there" (assuming that universes are closed systems that don't interfere with each other). If this reasoning is correct then, from our perspective, it is no more rational to believe that there are infinitely many universes than to believe there is just one--it's merely an aesthetic preference.

This strikes me as mistaken. It's true that the existence of other universes doesn't affect the probability of this universe - "u42" let's say - being capable of supporting life. But that isn't what needs explaining. We merely need to show how there could be some universe or other that supports life. The locative fact that it's ours in particular can be got for free, by appeal to the Anthropic Principle.

Compare: "Of all the planets in the universe, how is it that we ended up on one of the few capable of supporting life? Isn't this monumentally unlikely?" This sounds like a silly question. It's not as if we might instead have been asking the question from the blistered surface of Mercury. Living - and hence being somewhere capable of supporting life - is a precondition for even asking the question. Finding ourselves alive on a lifeless planet is not a possibility that ever needed to be ruled out.

A more troubling question would be: "How is it that there are any life-supporting planets at all?" For clearly the total absence of life is a coherent alternative, so we need some explanation of why that didn't come to be. (Otherwise we must appeal to brute chance or coincidence, but that isn't much of an explanation!) And here the appeal to multiple universes might help. If there are zillions of universes, it suddenly looks a lot more likely that life exists somewhere or other.
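To put a toy model behind this thought (the notation and numbers are my own illustration, not anything in Dom's post): suppose each universe independently has some small probability $p$ of supporting life, and there are $N$ universes in total. Then

$$\Pr(\text{life exists somewhere}) = 1 - (1 - p)^N \longrightarrow 1 \quad \text{as } N \to \infty.$$

However tiny $p$ may be, a sufficiently vast multiverse makes the existence of life somewhere or other all but guaranteed.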

Note that once it has been explained how life could plausibly exist somewhere, it's no great mystery how come it exists in our universe in particular. Maybe it's unlikely that u42 would contain life, but we don't really care about u42. For explanatory purposes, we care about "the universe we're in", de dicto not de re. And the probability that whatever universe we're in contains life is pretty well certain.
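In the same toy notation, the anthropic point is just an observation about conditional probability: writing $O$ for "we exist as observers" and $U^{*}$ for "the universe we happen to inhabit supports life", observers can only ever find themselves in life-supporting universes, so

$$\Pr(U^{*} \mid O) = 1$$

no matter how small the unconditional probability that any particular universe, u42 included, supports life.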

Again, the Anthropic Principle by itself cannot solve the whole problem. It cannot address the question of why we (or life-supporting universes) exist at all. For that we need the multiverse hypothesis. The Anthropic Principle can only help with the locative question: presupposing that there are life-supporting locations, why do we find ourselves in one of them rather than somewhere else? Combine the two and we get a relatively satisfying explanation, I guess. (More so than any alternative I've yet come across, anyway.)

So, contra Dom, I think we could rationally believe the multiverse hypothesis, as an inference to the best explanation. We simply need to be clear on what it serves to explain. It does not explain why u42 supports life. That's not something I feel any need to explain. Rather, what the multiverse hypothesis initially explains is how any actual universe could support life. Conjoined with the anthropic principle, it can then explain why we find ourselves in a life-supporting universe.

(See also: Why does the universe exist?)

Saturday, December 31, 2005

The God Hypothesis

It's often claimed that theism is untestable, but that seems to me mistaken. Surely we should expect significant differences between an atheistic universe and one guided by a supreme being that is all-powerful, all-knowing, and benevolent. Indeed, this is precisely why the problem of evil is such a powerful anti-theistic argument: it rests on the idea that the world is not how we would expect it to be in light of such a being's existence. We would expect God to create the best possible world, which ours does not seem to be. That's one failed prediction for theism, and thus a count against the theory.

The argument from divine silence rests on a similar inference. If the Christian God existed, he would surely let us know this. Perhaps each Sunday he would light up the skies and speak to us in a booming voice, or something along those lines. But of course nothing like this actually occurs. At present, the epistemic situation of many individuals provides them with little or no reason to believe in God. This is not something we should expect to be the case if God really existed. (Indeed, I think it makes traditional Christianity completely ludicrous - see my argument from hell.) Thus we have a second failed prediction for Christianity, and another serious count against the God hypothesis.
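The sense in which a failed prediction 'counts against' a theory can be rendered in Bayesian terms (a sketch with purely illustrative numbers, not figures from the post): where theism $T$ predicts some evidence $E$ -- divine communication, say -- observing its absence lowers the posterior probability of $T$:

$$\Pr(T \mid \neg E) = \frac{\Pr(\neg E \mid T)\,\Pr(T)}{\Pr(\neg E)}.$$

For instance, starting from $\Pr(T) = 0.5$, with $\Pr(\neg E \mid T) = 0.1$ and $\Pr(\neg E \mid \neg T) = 0.9$, the posterior drops to $0.05 / (0.05 + 0.45) = 0.1$. Each failed prediction compounds the effect.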

Now, it must be granted that these failures do not conclusively falsify theism. But then, I'm not sure that empirical evidence can ever conclusively falsify anything. Suppose I posit the existence of a black hole near our solar system. Others might cast doubt on this by showing how the standard predictions we'd make from this hypothesis fail to match up with our observations of reality. But I could always respond by claiming a measurement failure on their part, effectively denying their observations; or I might suggest that they have misunderstood the nature of black holes, thereby disputing the legitimacy of their predictions; or I might simply say that extraordinary circumstances allow for the possibility of the black hole's existence no matter how unlikely it may seem given our evidence. (Evidence can be misleading; improbable events may still occur.) The same responses are open to the theist. But of course they are unconvincing in either case.

These problems might be avoided by reverting to an extremely weak notion of 'God' as a causally inert being that makes no difference to the universe. But just as with my positing a causally inert blob that likewise 'is nowhere' and 'does nothing', we surely have no reason to believe in such a pointless entity. Ockham's razor can safely shear it away.

But I'm sure most theists do not conceive of their God in such a useless way. So Ockham's Razor, or complaints about 'unfalsifiability', should not be the atheist's first line of attack. I think much stronger arguments can be made by taking the God hypothesis seriously, i.e. as making testable predictions about the world, and then pointing out how dismally those predictions line up with reality.