Wednesday, July 23, 2014

Wildcard Skepticism

Might there be excellent reasons to embrace radical skepticism, of which we are entirely unaware?

You know brain-in-a-vat skepticism -- the view that maybe last night while I was sleeping, alien superscientists removed my brain, envatted it, and are now stimulating it to create the false impression that I'm still living a normal life. I see no reason to regard that scenario as at all likely. Somewhat more likely, I argue -- not very likely, but I think reasonably drawing a wee smidgen of doubt -- are dream skepticism (might I now be asleep and dreaming?), simulation skepticism (might I be an artificial intelligence living in a small, simulated world?), and cosmological skepticism (might the cosmos in general, or my position in it, be radically different than I think, e.g., might I be a Boltzmann brain?).

"1% skepticism", as I define it, is the view that it's reasonable for me to assign about a 1% credence to the possibility that I am actually now enduring some radically skeptical scenario of this sort (and thus about a 99% credence in non-skeptical realism, the view that the world is more or less how I think it is).

Now, how do I arrive at this "about 1%" skeptical credence? Although the only skeptical possibilities to which I am inclined to assign non-trivial credence are the three just mentioned (dream, simulation, and cosmological), it also seems reasonable for me to reserve a bit of my credence space, a bit of room for doubt, for the possibility that there is some skeptical scenario that I haven't yet considered, or that I've considered but dismissed and should take more seriously than I do. I'll call this wildcard skepticism. It's a kind of meta-level doubt. It's a recognition of the possibility that I might be underappreciating the skeptical possibilities. This recognition, this wildcard skepticism, should slightly increase my credence that I am currently in a radically skeptical scenario.

You might object that I could equally well be overestimating the skeptical possibilities, and that in recognition of that possibility, I should slightly decrease my credence that I am currently in a radically skeptical scenario; and thus the possibilities of over- and underestimation should cancel out. I do grant that I might as easily be overestimating as underestimating the skeptical possibilities. But over- and underestimation do not normally cancel out in the way this objection supposes. Near confidence ceilings (such as my 99% credence in non-skeptical realism), meta-level doubt should tend overall to shift one's credence down.

To see this, consider a cartoon case. Suppose I would ordinarily have a 99% credence that it won't rain tomorrow afternoon (hey, it's July in southern California), but I also know one further thing about my situation: There's a 50% chance that God has set things up so that from now on the weather will always be whatever I think is most likely, and there's a 50% chance that God has set things up so that whenever I have an opinion about the weather he'll flip a coin to make it only 50% likely that I'm right. In other words, there's a meta-level reason to think that my 99% credence might be an underestimation of the conformity of my opinions to reality or equally well might be an overestimation. What should my final credence in sunshine tomorrow be? Well, 50% times 100% (God will make it sunny for me) plus 50% times 50% (God will flip the coin) = 75%. In meta-level doubt, the down weighs more than the up.
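The arithmetic in this cartoon case is simple enough to check mechanically. Here is a minimal sketch (the scenario probabilities and hit rates are just the ones stipulated above; the code and variable names are mine, purely illustrative):

```python
# Cartoon case: meta-level doubt about a confident (99%) weather credence.
# Two equally likely scenarios, as stipulated above:
#   "conform": the weather always matches my best guess, so I'm right 100% of the time
#   "coin":    God flips a coin, so I'm right only 50% of the time
scenarios = [
    (0.5, 1.00),  # (probability of scenario, chance my "sunny" guess is right)
    (0.5, 0.50),
]

final_credence = sum(p * hit_rate for p, hit_rate in scenarios)
print(final_credence)  # 0.75 -- well below 99%: near the ceiling, the down weighs more than the up
```

The same structure holds for any credence near a ceiling: averaging a "you're right for sure" scenario with a "coin flip" scenario always lands below the original high credence.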

Consider the history of skepticism. In Descartes's day, a red-blooded skeptic might have reasonably invested a smidgen more doubt in the possibility that she was being deceived by a demon than it would be reasonable to invest in that possibility today, given the advance of a science that leaves little room for demons. On the other hand, a skeptic in that era could not even have conceived of the possibility that she might be an artificial intelligence inside a computer simulation. It would be epistemically unfair to such a skeptic to call her irrational for not considering specific scenarios beyond her society's conceptual ken, but it would not be epistemically unfair to think she should recognize that given her limited conceptual resources and limited understanding of the universe, she might be underestimating the range of possible skeptical scenarios.

So now us too. That's wildcard skepticism.

[image source]

Eric Kaplan's Blog

Eric Kaplan, who overlapped with me in grad school at Berkeley but who is now much more famous as a comedy writer for The Big Bang Theory, Futurama, and several other shows, has been cooking up weird philosophical-comical blog posts since March at his Wordpress blog here.

Check it out!

Wednesday, July 16, 2014

Tononi's Exclusion Postulate Would Make Consciousness (Nearly) Irrelevant

One of the most prominent theories of consciousness is Giulio Tononi's Integrated Information Theory. The theory is elegant and interesting, if a bit strange. Strangeness is not necessarily a defeater if, as I argue, something strange must be true about consciousness. One of its stranger features is what Tononi calls the Exclusion Postulate. The Exclusion Postulate appears to render the presence or absence of consciousness almost irrelevant to a system's behavior.

Here's one statement of the Exclusion Postulate:

The conceptual structure specified by the system must be singular: the one that is maximally irreducible (Φ max). That is, there can be no superposition of conceptual structures over elements and spatio-temporal grain. The system of mechanisms that generates a maximally irreducible conceptual structure is called a complex... complexes cannot overlap (Tononi & Koch 2014, p. 5).

The basic idea here is that conscious systems cannot nest or overlap. Whenever two information-integrating systems share any parts, consciousness attaches to the one that is the most informationally integrated, and the other system is not conscious -- and this applies regardless of temporal grain.

The principle is appealing in a certain way. There seem to be lots of information-integrating subsystems in the human brain; if we deny exclusion, we face the possibility that the human mind contains many different nesting and overlapping conscious streams. (And we can tell by introspection that this is not so -- or can we?) Also, groups of people integrate information in social networks, and it seems bizarre to suppose that groups of people might have conscious experience over and above the individual conscious experiences of the members of the groups (though see my recent work on the possibility that the United States is conscious). So the Exclusion Postulate allows Integrated Information Theory to dodge what might otherwise be some strange-seeming implications. But I'd suggest that there is a major price to pay: the near epiphenomenality of consciousness.

Consider an electoral system that works like this: On Day 0, ten million people vote yes/no on 20 different ballot measures. On Day 1, each of those ten million people gets the breakdown of exactly how many people voted yes on each measure. If we want to keep the system running, we can have a new election every day and individual voters can be influenced in their Day N+1 votes by the Day N results (via their own internal information integrating systems, which are subparts of the larger social system). Surely this is society-level information integration if anything is. Now according to the Exclusion Postulate, whether the individual people are conscious or instead the societal system is conscious will depend on how much information is integrated at the person level vs. the societal level. Since "greater than" is sharply dichotomous, there must be an exact point at which societal-level information integration exceeds the person-level information integration. Tononi and Koch appear to accept a version of this idea in 2014, endnote xii [draft of 26 May 2014]. As soon as this crucial point is reached, all the individual people in the system will suddenly lose consciousness. However, there is no reason to think that this sudden loss of consciousness would have any appreciable effect on their behavior. All their interior networks and local outputs might continue to operate in virtually the same way, locally inputting and outputting very much as before. The only difference might be that individual people hear back about X+1 votes on the Y ballot measures instead of X votes. (X and Y here can be arbitrarily large, to ensure sufficient informational flow between individuals and the system as a whole. We can also allow individuals to share opinions via widely-read social networks, if that increases information integration.)

Tononi offers no reason to think that a small threshold-crossing increase in the amount of integrated information (Φ) at the societal level would profoundly influence the lower-level behavior of individuals. Φ is just a summary number that falls out mathematically from the behavioral interactions of the individual nodes in the network; it is not some additional thing with direct causal power to affect the behavior of those nodes.

I can make the point more vivid. Suppose that the highest-level Φ in the system belongs to Jamie. Jamie has a Φ of X. The societal system as a whole has a Φ of X-1. The highest-Φ individual person other than Jamie has a Φ of X-2. Because Jamie's Φ is higher than the societal system's, the societal system is not a conscious complex. Because the societal system is not a conscious complex, all those other individual people with Φ of X-2 or less can be conscious without violating the Exclusion Postulate. But Tononi holds that a person's Φ can vary over the course of the day -- declining in sleep, for example. So suppose Jamie goes to sleep. Now the societal system has the highest Φ and no individual human being in the system is conscious. Now Jamie wakes and suddenly everyone is conscious again! This might happen even if most or all of the people in the society have no knowledge of whether Jamie is asleep or awake and exhibit no changes in their behavior, including in their self-reports of consciousness.
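The threshold logic driving both thought experiments can be made concrete in a toy sketch. To be clear, this is not Tononi's actual Φ calculus -- the function and the Φ values below are invented for illustration -- but it captures the Exclusion Postulate's winner-take-all rule: among overlapping systems, only the one with maximal Φ counts as a conscious complex.

```python
# Toy illustration of the Exclusion Postulate's threshold logic.
# NOT Tononi's actual phi calculus; the phi values are made up.
# Assumption: each individual overlaps the societal system (they are its
# parts), but individuals do not overlap one another.

def conscious_systems(phi):
    """Given phi values for individuals and the societal whole,
    return the set the Exclusion Postulate counts as conscious."""
    top = max(phi, key=phi.get)
    if top == "society":
        return {"society"}  # society wins: no individual is conscious
    # society is excluded by the higher-phi individual, so each individual
    # (none of whom overlap each other) counts as conscious
    return {name for name in phi if name != "society"}

phi_awake  = {"Jamie": 10, "society": 9, "Pat": 8, "Sam": 8}
phi_asleep = {"Jamie": 5,  "society": 9, "Pat": 8, "Sam": 8}  # Jamie's phi drops in sleep

print(sorted(conscious_systems(phi_awake)))   # ['Jamie', 'Pat', 'Sam']
print(sorted(conscious_systems(phi_asleep)))  # ['society'] -- everyone else "blinks out"
```

Nothing in the node-by-node behavior of Pat or Sam changes between the two cases; only Jamie's Φ does. That is the near-epiphenomenality worry in miniature.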

More abstractly, if you are familiar with Tononi's node-network pictures, imagine two very similar largish systems, both containing a largish subsystem. In one of the two systems, the Φ of the whole system is slightly less than that of the subsystem. In the other, the Φ of the whole system is slightly more. The node-by-node input-output functioning of the subsystem might be virtually identical in the two cases, but in the first case, it would have consciousness -- maybe even a huge amount of consciousness if it's large and well-integrated enough! -- and in the other case it would have none at all. So its consciousness or lack thereof would be virtually irrelevant to its functioning.

It doesn't seem to me that this is a result that Tononi would or should want. If Tononi wants consciousness to matter, given the Exclusion Postulate, he needs to show why slight changes of Φ, up or down at the higher level, would reliably cause major changes in the behavior of the subsystems whenever the Φ(max) threshold is crossed at the higher level. There seems to be no mechanism that ensures this.

Thursday, July 10, 2014

Confessional Philosophy (repost)

I'm in Florida with glitchy internet and a 102-degree fever, so now seems like a good day to fall back on the old blogger's privilege of a repost from the past (Sept 15, 2009).


Usually, philosophy is advocacy. Sometimes it's disruption without a positive thesis in mind. More rarely, it's confession.

The aim of the confessional philosopher is not the same as that of someone who confesses to a spouse or priest, nor quite the same as that of a confessional poet (though perhaps closer). It is rather this: to display oneself as a model of a certain sort of thinking, while not necessarily endorsing that style of thinking or the conclusions that flow from it. Confessional philosophy tends to center on skepticism and sin.

Consider, in Augustine's Confessions, the famous discussion of stealing pears, wherein Augustine displays the sinful pattern of his youthful mind. Augustine's aim is not so much, it seems to me, to advocate a certain position (such as that sinful thoughts tend to take such-and-such a form) as to offer the episode for contemplation by others, with no pre-packaged conclusion, and perhaps also to induce humility in both the reader and himself. He offers an analysis of his motives -- that he was trying to simulate freedom by getting away with something forbidden (which would fit with his general theory of sin, that it involves trying to possess something that can only be given by god) -- but then he undercuts that analysis by noting that he would definitely not have stolen the pears alone. Was it then that he valued the camaraderie of his sinful friends? He rejects that explanation also -- "that gang-mentality too was a nothing" -- and after waffling over various possibilities he concludes "It was a seduction of the mind hard to understand.... Who can unravel this most snarled, knotty tangle?" (4th c. CE/1997, p. 72-73)

Descartes's Meditations, especially the first two, are presented as confessional -- perhaps partly to display an actual pattern in his past thinking, but perhaps also partly as a pose. Here we see or seem to see the struggles and confusions of a man bent on finding a secure foundation for his thought. Hume's skeptical conclusion to Book One of his Treatise seems to me more genuinely confessional, when he asks how he can dare to "venture upon such bold enterprizes when beside those numberless infirmities peculiar to myself, I find so many which are common to human nature" (1739/1978, p. 265). "The intense view of these manifold contradictions and imperfections in human reason has so wrought upon me, and heated my brain, that I am ready to reject all belief and reasoning.... I dine, I play a game of back-gammon, I converse, and am merry with my friends; and when after three or four hours' amusement I wou'd return to these speculations, they appear so cold, and strain'd, and ridiculous, that I cannot find in my heart to enter into them any farther" (p. 268-269). We see how the skeptic writhes. Hume displays his pattern of skeptical thought, but offers no way out, nor chooses between embracing his skeptical arguments and rejecting them. Nonetheless, in books two and three he's back in the business of philosophical argumentation.

Generally, it's better to offer a tight, polished exposition or argument than to display one's thoughts, errors, and uncertainties. That partly explains the rarity of confessional philosophy. But sometimes, no model of error or uncertainty will serve better than oneself.

[for some discussion, see the comments section of the original post]

Monday, June 30, 2014

SpaceTimeMind Podcasts: Alien and Machine Minds, Death and Logic

A couple of months ago, I had some great fun chatting with Richard Brown and Pete Mandik at SpaceTimeMind. Pete has now edited our conversation into two podcasts in their engaging, energetic style:

Part One: Death and Logic

Part Two: Alien and Machine Minds

The episodes are free-standing, so if the topic of Part Two interests you more, feel free to skip straight to it. There will be a few quick references back to our Part One discussion of modality and hypotheticals, but nothing essential.

Although I think Part Two is a very interesting conversation, I do have one regret about it: It took me so long to gather Richard's view about alien consciousness that I didn't manage to articulate very well my reasons for disagreeing. Something early in the conversation led me to think that Richard was allowing that probably there are (somewhere in the wide, wide universe) aliens constructed very differently from us, without brains, who have highly sophisticated behavior -- behavior as sophisticated as our own -- and that his view is that such beings have no conscious experience. By the end of the episode, it became clear to me that his view, instead, is that there probably aren't such beings (but if there were, we would have good reason to regard them as conscious). He offered empirical evidence for this conclusion: that all beings on Earth that are capable of highly sophisticated behavior have brains like ours.

If I had understood his view earlier in the conversation, I might have offered him something like this reply:

(1.) Another possible explanation for the fact that all (or most?) highly intelligent Earthlings have brains structured like ours is that we share ancestry. It remains open that in a very different evolutionary context, drawing upon different phylogenetic resources, a very different set of structures might be able to ground highly intelligent (e.g. sophisticated linguistic, technology-building) behavior.

(2.) Empirical evidence on Earth suggests that at least moderately complicated systems can be designed with very different material structures (e.g., gas vs. battery cars, magnetic tape drives vs. laser drives; insect locomotion vs. human locomotion). I see no reason not to extrapolate such potential diversity to more complex cognitive systems.

(3.) If the universe is vast enough -- maybe even infinite, as many cosmologists now think -- then even extremely low probability events and systems will be actualized somewhere.

Anyhow, Richard and Pete's podcasts have a great energy and humor, and they dive fearlessly into big-picture issues in philosophy of mind. I highly recommend them.

(For Splintered Mind readers more interested in moral psychology, I recommend the similarly fun and fearless Very Bad Wizards podcast with David Pizarro and (former Splintered Mind guest blogger) Tamler Sommers.)

Monday, June 23, 2014

The Calibration View of Moral Reflection

Oh, when the saints go marching in
Oh, when the saints go marching in
Lord, I want to be in that number
When the saints go marching in.
No. No you don't, Louis. Not really.

If you want to be a saint, dear reader, or the secular equivalent, then you know what to do: Abandon those selfish pleasures, give your life over to the best cause you know (or if not a single great cause then a multitude of small ones) -- all your money, all your time. Maybe you'll misfire, but at least we'll see you trying. But I don't think we see you trying.

Closer to what you really want, I suspect, is this: Grab whatever pleasures you can here on Earth consistent with just squeaking through the pearly gates. More secularly: Be good enough to meet some threshold, but not better, not a full-on saint, not at the cost of your cappuccino and car and easy Sundays. Aim to be just a little bit better, maybe, in your own estimation, than your neighbor.

Here's where philosophical moral reflection can come in very handy!

As regular readers will know, Joshua Rust and I have done a number of studies -- eighteen different measures in all -- consistently finding that professors of ethics behave morally no better than do socially similar comparison groups. These findings create a challenge for what we call the booster view of philosophical moral reflection. On the booster view, philosophical moral reflection reveals moral truths, which the person is then motivated to act on, thereby becoming a better person. Versions of the booster view were common in both the Eastern and the Western philosophical traditions until the 19th century, at least as a normative aim for the discipline: From Confucius and Socrates through at least Wang Yangming and Kant, philosophy done right was held to be morally improving.

Now, there are a variety of ways to duck this conclusion: Maybe philosophical ethics neither does nor should have any practical relevance to the philosophers expert in it; or maybe most ethics professors are actually philosophizing badly; or.... But what I'll call the calibration view is, I think, among the more interesting possibilities. On the calibration view, the proper role of philosophical moral theorizing is not moral self-improvement but rather more precisely targeting the (possibly quite mediocre) moral level you're aiming for. This could often involve consciously deciding to act morally worse.

Consider moral licensing in social psychology and behavioral economics. When people do a good deed, they then seem to behave worse in follow-up measures than people who had no opportunity to do a good deed first. One possible explanation is something like calibration: You want to be only so good and not more. An unusually good deed inflates you past your moral target; you can adjust back down by acting a bit jerkishly later.

Why engage in philosophical moral reflection, then? To see if you're on target. Are you acting more jerkishly than you'd like? Seems worth figuring out. Or maybe, instead, are you really behaving too much like a sweetheart/sucker/do-gooder and really you would feel okay taking more goodies for yourself? That could be worth figuring out, too. Do I really need to give X amount to charity to be the not-too-bad person I'd like to think I am? Could I maybe even give less? Do I really need to serve again on such-and-such worthwhile-but-boring committee, or to be a vegetarian, or do such-and-such chore rather than pushing it off on my wife? Sometimes yes, sometimes no. When the answer is no, my applied philosophical moral insight will lead me to behave morally worse than I otherwise would have, in full knowledge that this is what I'm doing -- not because I'm a skeptic about morality but because I have a clear-eyed vision of how to achieve exactly my own low moral standards and nothing more.

If this is right, then two further things might follow.

First, if calibration is relative to peers rather than absolute, then embracing more stringent moral norms might not lead to improvements in moral behavior in line with those more stringent norms. If one's peers aren't living up to those standards, one is no worse relative to them if one also declines to do so. This could explain the cheeseburger ethicist phenomenon -- the phenomenon of ethicists tending to embrace stringent moral norms (such as that eating meat is morally bad) while not being especially prone to act in accord with those stringent norms.

Second, if one is skilled at self-serving rationalization, then attempts at calibration might tend to misfire toward the low side, leading one on average away from morality. The motivated, toxic rationalizer can deploy her philosophical tools to falsely convince herself that although X would be morally good (e.g., not blowing off responsibilities, lending a helping hand) it's really not required to meet the mediocre standards she sets herself and the mediocre behavior she sees in her peers. But in fact, she's fooling herself and going even lower than she thinks. When professional ethicists behave in crappy ways, such mis-aimed low-calibration rationalizing is, I suspect, often exactly what's going on.

Tuesday, June 17, 2014

Against Those Year-End Faculty Meetings to Discuss the Graduate Students

Every year's end at UC Riverside, the philosophy faculty meet for three hours "to discuss the graduate students". Back in the 1990s when I was a grad student, I seem to recall the Berkeley faculty doing the same thing. The practice appears to be fairly widespread. After years of feeling somewhat uncomfortable with it, I've tentatively decided I'm opposed. I'd be interested to hear from others with positive or negative views about it.

Now, there are some good things about these year-end meetings. Let's start with those.

At UCR, the formal purpose of the meeting is to give general faculty input to the graduate advisor, who can use that input to help her advising. The idea is that if the faculty as a whole think that a student is doing well and on track, the graduate advisor can communicate that encouraging news to the student; and also, when there are opportunities for awards and fellowships, the graduate advisor can consider those highly regarded students as candidates. And if the faculty as a whole think that a student is struggling, the faculty can diagnose the student's weaknesses and help the graduate advisor give the student advice that might help the student improve. Hypothetical examples (not direct quotes): "Some faculty were concerned about your inconsistent attendance at seminar meetings." "The sense of the faculty is that while you have considerable promise, your writing would be improved if you were more charitable toward the views of philosophers you disagree with."

Other benefits are these: It helps the faculty gain a sense of the various graduate students and how they are doing, presumably a good thing. If a student has struggled in one of your classes but seems to be well regarded by other faculty, that can help you see the student in a better light. It's an opportunity to correct misapprehensions. In the rare case of a student with very serious problems (e.g., mental health issues), it can sometimes be useful for the faculty as a whole to be aware of those issues.

But to my mind, all of those advantages are outweighed by the tendency of these discussions to create a culture in which there's a generally accepted consensus opinion about which students are doing well and which are not. I would prefer, and I think for good reason, to look at the graduate students in my seminar on the first day, or to look at a graduate student who asks me to be on her dissertation committee, without the burden of knowing what the other faculty think about her. It's widely accepted in educational psychology that teachers' initial impressions about which students are likely to succeed and fail have a substantial influence on student performance (the Pygmalion Effect). I want each student to meet each professor with a chance to make a new first impression. Sometimes students struggle early but then end up doing a terrific job. Within reason, we should do what we can to give students the chance to leave early poor performance behind them, rather than reiterate and generally communicate a negative perception (especially if that negative perception might partly be grounded in implicit bias or in vague impressions about who "seems smart"). Also, some students will have conflicts with some of their professors, either due to personality differences or due to differences in philosophical style or interests, and it's somewhat unfair to such students for a professor to have a platform to communicate a negative opinion without the student's having a similar platform.

I don't want to give the impression that these faculty meetings are about bad-mouthing students. At UCR, the opposite is closer to the truth. Faculty are eager to pipe in with praise for the students who have done well in their courses, and negative remarks are usually couched very carefully and moderately. We like our students and we want them to do well! The UCR Philosophy Department has a reputation for being good to its graduate students -- a reputation which is, in my biased view, well deserved. (This makes me somewhat hesitant to express my concerns about these year-end meetings, out of fear that my remarks will be misinterpreted.) But despite the faculty's evident well-meaning concern for, and praise of, and only muted criticism of, our graduate students in these year-end meetings, I retain my concerns. I imagine the situation is considerably worse, and maybe even seriously morally problematic, at departments with toxic faculty-student relations.

What's to be done instead?

One possibility is that the graduate advisor get input privately from the other faculty (either face to face or by email), in light of which she can give feedback to her advisees. In fact, private communication might be epistemically better, since communicating opinions independently, rather than in a group context, will presumably reduce the problematic human tendency toward groupthink -- though there's also the disadvantage that private input is less subject to correction, and perhaps (depending on the interpersonal dynamics) less likely to be thoughtfully restrained, than comments made in a faculty meeting.

Another possibility is to drop the goal of having the faculty attempt an overall summary assessment of the quality of the students. For awards and fellowships, early-career students can be assessed based on grades and timely completion of requirements. And advanced students can be nominated for awards and fellowships directly by their supervising faculty without the filter of impressions that other faculty might have of that student based on the student's coursework from years ago. And students can, and presumably do, hear feedback from individual faculty separately, a practice that can be further encouraged.

As I mentioned, my opinion is only tentative and I'd be interested to hear others' impressions. Please, however, no comments that reveal the identity of particular people.

[image source]

Wednesday, June 11, 2014

Philosopher's Carnival #164

The Philosophers' Carnival, as you probably know, posts links to selected posts from around the blogosphere, chosen and hosted by a different blogger every month. Since philosophers are basically just children in grown-up bodies (as a Gopnik student, I intend this as flattery), I use a playground theme.

The Philosophy of Mind Sandpit:
I charge into the sandpit. There's David Papineau with his cricket bat staring at me, incredibly focused -- but why does a batter need to be focused if batting is just reflex responsiveness? There must be something more. But we don't know what it is, says R. Scott Bakker, most of whose Three Pound Brain is, he admits, a mystery to him. We're all blind (to the machinery of our cognitive activity) but we're blind to this blindness, and so invent dualist ontologies. Why am I digging here, then? I don't know. Why do I believe he might be wrong? I don't know that either! Scott agrees: I have no idea why I believe he might be wrong. But at least, says Wolfgang Schwarz, my disbelief is very fine-grained, you know, like this sand right here.

The Curving Tunnel of Logic and Language:
Into the darkness we go, with Jason Zarri's fuzzy argument for crisp negation. I seem to be turned around, in a half-true circle! Worse still, I seem to be stuck with a correspondence theory of truth, since Tristan Haze is telling me that my projection-based skepticism about facts is itself a projective fallacy. Oy, this is dizzier than a whirligig! I try to get out of the tunnel, but here comes Eli Sennesh with two boxes and a nearly-omniscient demon and he's trying to get me the million dollars instead of the thousand I thought I knew I was rationally doomed to.

The Epistemic Slide:
Hi, Richard Chappell! Would you like to play this little non-normative game with me, called "seeking the truth"? No? You say that my continued attachment to such a game is arbitrary by my own lights? Wah! Good thing I don't believe that your criticism has any objective normative merit. La-la-la. Meanwhile, Ralph Wedgwood from Certain Doubts is trying to get things -- pieces of knowledge, or is it gum? -- to adhere to me, as long as they adhere in the sense that if and only if the case were sufficiently similar with respect to what makes it rational for me to believe P1 in C1 would I also believe P2 in C2. Good thing Richard Pettigrew has given me a metric for determining how inaccurate my total doxastic state is!

The Moral Teeter-Totter:
Look over there! Jonny Pugh is bouncing up and down, tip, don't tip, tip, don't tip -- I think he might tip right over on the question of whether new technologies that make the option to tip more salient will and should change the culture of tipping. Stacey Goguen at Feminist Philosophers has a nice compilation of recent reflections on the ups and downs of "trigger warnings" in the classroom. And now here's Alexander Pruss telling me that intentionally making babies is morally wrong because I can't have any specific baby's good in mind and I shouldn't make a baby for reasons that don't include the specific baby's own good. Fine with me! Making babies is gross. And if some of us kids do it anyway, it was only by accident, when we were playing doctor.

The Philosophy of Science Picnic Table:
Ah, there's Scott Aaronson, looking skeptically at the consciousness sandwich Giulio Tononi gave him. Evidently, Giulio told him the moon is made of peanut butter. But Dick Dorkins at Genotopia isn't worried. In fact, he's pleased that he finally has really scientifically solid evidence, which the scientists themselves (but not the Wall Street Journal) are too wimpy to embrace, that his British marmite-and-lard is superior to the tawnier sandwiches of people from more southerly continents or subcontinents or whatever they are. (Do I see his tongue in his cheek?)

The Historical Jungle Gym:
Lunch is over. Time to climb around and get sick! Barry Stocker at NewAPPS is on top of the jungle gym, wondering why more people aren't thinking about the ancient skeptic Sextus Empiricus as a virtue ethicist. I don't know! A dubious proposition. But I'm at peace with that.

Fingerpainting Aesthetics on the Playground Walls:
See that familiar avian aesthetician over there, drawing pictures of Christopher Nolan's cinematic femmes fatales? They might not be what they seem! Wait, does that woman have two faces?

Metaphilosophical/Issues-in-the-Profession Party Poopers:
Look, I just want to pick a side, say something that makes sense, and stop, okay John Holbo. All this thinking is too hard. So don't try to diagnose why all the kids around here are such bad philosophical writers. It's because Aunt Flo can't buy me enough electric blue Gogurt on minimum wage. And here is Eric Schliesser, criticizing poor Slavoj Zizek just for telling his students "if you don’t give me any of your shitty papers, you get an A". Slavoj wants to be nice. He really does. Really, really, he does. And he would be nice if he weren't always surrounded by stupid, incompetent jerks unlike himself.

The next carnival will be hosted in a month at Siris. You may submit suggestions for inclusion in the next carnival (from your own blog or favorite posts from others' blogs) at the Philosopher's Carnival homepage.

[Revised 2:22 pm.]

Monday, June 09, 2014

Comic Schadenfreude and the Schadenfreude of Grace (by Jason Gray and Eric Schwitzgebel)

There isn't a lot of philosophical (or even psychological) work on schadenfreude -- the pleasure people sometimes feel at witnessing or hearing about (but not personally causing) the suffering of others. But the most prominent analyses treat it as a type of pleasure one feels seeing someone get their comeuppance. John Portmann calls schadenfreude "an emotional corollary of justice" (2000, p. 197*). Aaron Ben-Ze'ev suggests that a typical feature is that the sufferer deserves the misfortune (1992, p. 41). Frans de Waal suggests that schadenfreude "derives from a sense of fairness" (1996, p. 85).

We could define schadenfreude as involving just deserts, for the sake of philosophical analysis. But doing so misses, we think, central cases that should be within the term's scope and which give it its uncomfortable moral coloring.

Consider that staple of "America's Funniest Videos", the groin shot:

And the trampoline accident:

It doesn't seem that these are instances of justice delivered. We are laughing at -- seemingly enjoying -- pain, indifferent to whether it is deserved. If we stipulate that schadenfreude requires desert, we would need a different name for this interesting phenomenon. But rather than do that, let's acknowledge that there are at least two different types of schadenfreude: just-deserts schadenfreude, when the bad guy finally gets what's coming to him, and the comic schadenfreude of America's Funniest Videos and FailBlog. Comic schadenfreude seems to require not justice but rather a kind of absurdity involving pain as an integral component. And unlike the schadenfreude of just deserts, where pleasure can sometimes be found when inexpiable wrongdoing is met with severe pain, comic schadenfreude might require that the injury (or pain) not be too serious.

Still another species of the genus seems to involve neither comic absurdity nor justice: the schadenfreude of grace.

Here's Lucretius:

Sweet it is, when on the great sea the winds are buffeting the waters, to gaze from the land on another's great struggles; not because it is pleasure or joy that any one should be distressed, but because it is sweet to perceive from what misfortune you yourself are free. Sweet is it too, to behold great contests of war in full array over the plains, when you have no part in the danger (On the Nature of Things, Book II.1ff., Bailey trans.).
And Hobbes:
from what passion proceedeth it, that men take pleasure to behold from the shore the danger of them that are at sea in a tempest, or in fight, or from a safe castle to behold two armies charge one another in the field? It is certainly in the whole sum joy, else men would never flock to such a spectacle. Nevertheless there is in it both joy and grief. For as there is a novelty and remembrance of own security present, which is delight; so is there also pity, which is grief. But the delight is so far predominant, that men usually are content in such a case to be spectators of the misery of their friends (Human Nature, IX.19).

Evidently, people throughout the ages have found great pleasure standing atop the bluff in a storm, watching sailors below die on the rocks. Lucretius and Hobbes suggest, plausibly we think, that for many viewers an important part of the pleasure derives from how salient another's suffering makes your own safety by comparison. Similarly, perhaps, reading a history of war and genocide can put into perspective one's own complaints about the erroneous telephone bill and the journal rejections.

Indeed, the very fact that the suffering of the others is undeserved lends the schadenfreude of grace its particular bittersweet flavor. If the sailors or soldiers were fools or villains then it's maybe just harsh justice to see them die from their bad choices, and we have something closer to the schadenfreude of just deserts; but if they did nothing wrong or foolish and it could just as easily have been you, then it's both more a shame for them (the bitter) and also more vividly pleasing how lucky you yourself are (the sweet): There but for undeserved grace go I.

The schadenfreude of just deserts, comic schadenfreude, and the schadenfreude of grace do not exhaust the list of schadenfreudes, we think. There are at least two more: the schadenfreude of envy, and pathological forms of erotic schadenfreude (not to be confused with consensual play-acting sadism). We also suspect that these different types of schadenfreude can sometimes merge into a single complex emotion.

Probably no unified analysis of the psychological mechanisms suffices to cover all types, and they differ substantially in what they reveal about the moral character of the person who is moved by them. Comeuppance is only the start of it.


* Though comeuppance seems to be Portmann's take-home message, his overall view is nuanced and anticipates some of the points of this post.

Wednesday, June 04, 2014

A Theory of Jerks

My theory of the jerk is out in Aeon.

From the intro:

Picture the world through the eyes of the jerk. The line of people in the post office is a mass of unimportant fools; it’s a felt injustice that you must wait while they bumble with their requests. The flight attendant is not a potentially interesting person with her own cares and struggles but instead the most available face of a corporation that stupidly insists you shut your phone. Custodians and secretaries are lazy complainers who rightly get the scut work. The person who disagrees with you at the staff meeting is a dunce* to be shot down. Entering a subway is an exercise in nudging past the dumb schmoes.

We need a theory of jerks. We need such a theory because, first, it can help us achieve a calm, clinical understanding when confronting such a creature in the wild. Imagine the nature-documentary voice-over: ‘Here we see the jerk in his natural environment. Notice how he subtly adjusts his dominance display to the Italian restaurant situation…’ And second – well, I don’t want to say what the second reason is quite yet.



* Instead of "dunce" the original piece uses "idiot". In light of Shelley Tremain's remarks to me about the history of that word, I'm wondering whether I should have avoided it. In my mind, it is exactly the sort of word the jerk is prone to use, and how he is prone to think of people, so there's a conflict here between my desire to capture the worldview of the jerk with phenomenological accuracy and my newly heightened sensitivity to the historical associations of that particular word.

[illustration by Paul Blow]

Tuesday, June 03, 2014

Okay, I Need This Shirt

It might help convince the audience of my seriousness if I wear it at my next public lecture.

Order here.

Two Million Pageviews

The stats aren't totally straightforward, since I switched counters a few years ago, but it looks like The Splintered Mind has had more than two million pageviews since I launched it in 2006.

That's about how many views this video gets in 16 hours:

Which is, you know, actually way better than I would have guessed.

Sunday, June 01, 2014

Aaronson vs. Tononi on the Integrated Information Theory of Consciousness

Here. A sample:

Personally, I give Giulio enormous credit for having the intellectual courage to follow his theory wherever it leads. When the critics point out, “if your theory were true, then the Moon would be made of peanut butter,” he doesn’t try to wiggle out of the prediction, but proudly replies, “yes, chunky peanut butter—and you forgot to add that the Earth is made of Nutella!”
(And I thought I played rough.)

Friday, May 30, 2014

Goldfish-Pool Immortality

Must an infinitely continued life inevitably become boring? Bernard Williams famously answers yes; John Fischer no. Fischer's case is perhaps even more easily made than he suggests -- but its very ease opens up new issues.

Consider Neil Gaiman's story "The Goldfish Pool and Other Stories" (yes, that's the name of one story):

He nodded and grinned. "Ornamental carp. Brought here all the way from China."

We watched them swim around the little pool. "I wonder if they get bored."

He shook his head. "My grandson, he's an ichthyologist, you know what that is?"

"Studies fishes."

"Uh-huh. He says they only got a memory that's like thirty seconds long. So they swim around the pool, it's always a surprise to them, going 'I've never been here before.' They meet another fish they known for a hundred years, they say, 'Who are you, stranger?'"

The problem of immortal boredom solved: Just have a bad memory! Then even seemingly un-repeatable pleasures (meeting someone for the first time) become repeatable.

Now you might say, wait, when I was thinking about immortality I wasn't thinking about forgetting everything and doing it again like a stupid goldfish.

To this I answer: Weren't you?

If you were imagining that you were continuing life as a human, you were imagining, presumably, that you had a finite brain capacity. And there's only so much memory you can fit into eighty billion neurons. So of course you're going to forget things, at some point almost everything, and things sufficiently well forgotten could presumably be experienced as fresh again. This is always what is going on with us anyway, to some extent. And this forgetting needn't involve any loss of personal identity, it seems: one's personality and some core memories could always stay the same.
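
The finite-memory point lends itself to a toy model. Below is a minimal sketch (the class, its capacity, and the "experiences" are all invented for illustration): an agent whose memory buffer is smaller than its pool of experiences finds every lap fresh, just like Gaiman's carp.

```python
from collections import deque

class FiniteRememberer:
    """Toy agent with a fixed-capacity memory: once an experience has been
    pushed out of the buffer, re-encountering it feels novel again."""

    def __init__(self, capacity):
        self.memory = deque(maxlen=capacity)  # oldest items silently dropped

    def encounter(self, experience):
        novel = experience not in self.memory
        self.memory.append(experience)
        return "How fresh!" if novel else "Been here before."

# A pool of four sights, but only three slots of memory: every lap around
# the pool is a surprise, forever.
fish = FiniteRememberer(capacity=3)
for lap in range(2):
    for sight in ["rock", "castle", "weeds", "stranger-fish"]:
        print(lap, sight, fish.encounter(sight))
```

With a capacity of three and four sights per lap, every encounter comes back "How fresh!"; raise the capacity above the pool size and the second lap turns into "Been here before."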

Immortality as an angel or transhuman super-intellect raises the same issues, as long as one's memory is finite.

A new question arises perhaps more vividly now: Is repeating and forgetting the same types of experiences over and over again, infinitely, preferable to doing them once, or twenty times, or a googolplex times? The answer to that question isn't, I think, entirely clear (and maybe even faces metaphysical problems concerning the identity of indiscernibles). My guess, though, is that if you stopped one of the goldfish and said, "Do you want to keep going?", the fish would say, "Yes, this is totally cool, I wonder what's around the corner? Oh, hi, glad to meet you!" Maybe that's a consideration in favor.

Alternatively, you might imagine an infinite memory. But how would that work? What would that be like? Would one become overwhelmed like Funes the Memorious? Would there be a workable search algorithm? Would there be some tagging system to distinguish each memory from infinitely many qualitatively identical other memories? Or maybe you were imagining retaining your humanity but somehow existing non-temporally? I find that even harder to conceive. To evaluate such possibilities, we need a better sense of the cognitive architecture of the immortal mind.

Supposing goldfish-pool immortality would be desirable, would it be better to have, as it were, a large pool -- a wide diversity of experiences before forgetting -- or a small, more selective pool, perhaps one peak experience, repeated infinitely? Would it be better to have small, unremembered variations each time, or would detail-by-detail qualitative identity be just as good?

I've started to lose my grip on what might ground such judgments. However, it's possible that technology will someday make this a matter of practical urgency. Suppose it turns out, someday, that people can "upload" into artificial environments in which our longevity vastly outruns our memorial capacity. What should be the size and shape of our pool?

[image source]

Tuesday, May 27, 2014


Ergo, a new online philosophy journal, has just released its first issue. Open access, triple anonymous, fast turnaround times (hopefully continuing into the future), transparent process, aiming at a balanced representation of all the subdisciplines. What's not to like?

I hope and expect that this journal will soon count among the most prestigious venues in philosophy.

Friday, May 23, 2014

Metaphilosophical Tides in the Literature on Belief

Why should a philosopher care about the nature of belief? Back in the 1980s and 1990s, when I was a student, there were two main animating reasons in the Anglophone philosophical community. Recently, though, the literature has changed.

One of the old-school reasons was to articulate the materialistic picture of the world. The late 1950s through the early 1990s -- roughly from Smart's "Sensations and Brain Processes" through Dennett's Consciousness Explained -- was (I now think) the golden age of materialism in the philosophy of mind, when the main alternatives and implications were being seriously explored by the philosophical community for the first time. We needed to know how belief fit into the materialist world-picture. How could a purely material being, a mere machine fundamentally constituted of tiny bits of physical stuff bumping against each other, have mental states about the world, with real representational or intentional content? The functionalism and evolutionary representationalism of Putnam, Armstrong, Dennett, Millikan, Fodor, and Dretske seemed to give an answer.

The other, related, motivating reason was the theory of reference in philosophy of language. How is it possible to believe that Superman is strong but that Clark Kent is not strong, if Superman really is Clark Kent (Frege's Puzzle)? And does the reference of a thought or utterance depend only on what is in the head (internalism), or could two molecule-for-molecule identical people have different thought contents simply because they're in different environments (externalism)? Putnam's Twin Earth was amazingly central to the field. (In 2000, Joe Cruz and I sketched out a "map of the analytic philosopher's brain". Evidence seemed to suggest a major lobe dedicated entirely to Twin Earth, but only a small nodule for the meaning of life.)

These inquiries had two things in common: their grand metaphysical character -- defending materialism, discovering the fundamental nature of thought and language -- and their armchair methodology. Some of the contributors such as Fodor and Dennett were very empirically engaged in general, but when it came to their central claims about belief, they seemed to be mainly driven by thought experiments and a metaphysical world vision.

Literature on the nature of belief has been re-energized in the 2010s, I think, by issues less grand but more practical -- especially the issue of implicit bias, but more generally the question of how to think about cases of seeming mismatch between explicit thought or speech and spontaneous behavior. Tamar Gendler's treatment of (implicit) alief vs. (explicit) belief, especially, has spawned a whole subliterature of its own, which is intimately connected with the recent psychological literature on dual process theory or "thinking fast and slow". Does the person who says, in all attempted sincerity, "women are just as smart as men", but who (as anyone else could see) consistently treats women as stupid, believe what he's saying? Delusions present seemingly similar cases, such as the Cotard delusion which involves seemingly sincerely claiming that one is dead. What are we to make of that? There's a suddenly burgeoning philosophical subliterature on delusion, much of it responding to Lisa Bortolotti's recent APA prizewinning book on the topic.

By most standards, the issues are still grand and impractical and the approach armchairish -- this is philosophy after all! -- but I believe their metaphilosophical spirit is very different. What animates Gendler, Bortolotti, and the others, I think, is a hard look at particularly puzzling empirical issues, where it seems that a good philosophical theory of the nature of the phenomena might help clear things up, and then a pragmatic approach to evaluating the results. Given the empirical phenomena, are our interests best served by theorizing belief in this way, or are they better served by theorizing in this other way?

This is music to my ears, both metaphilosophically and regarding the positive theory of belief. Metaphilosophically, because it is exactly my own approach: I entered the literature on belief as a philosopher of science interested in puzzles in developmental psychology that I thought could be dissolved through application of a good theory of belief. And at the level of the positive theory, because my own theory of belief is designed exactly to shine as a means of organizing our thoughts about such splintering (The Splintered Mind!), seemingly messed-up cases.

Friday, May 16, 2014

Group Organisms and the Fermi Paradox

I've been thinking recently about group organisms and group minds. And I've been thinking, too, about the Fermi Paradox -- about why we haven't yet discovered alien civilizations, given the vast number of star systems that could presumably host them. Here's a thought on how these two ideas might meet.

Species that contain relatively few member organisms, in a small habitat, are much more vulnerable to extinction than are species that contain many member organisms distributed widely. A single shock can easily wipe them out. So my thought is this: If technological civilizations tend to merge into a single planetwide superorganism, then they become essentially species constituted by a single organism in one small habitat (small relative to the size of the organism) -- and thus highly vulnerable to extinction.

This is, of course, a version of the self-destruction solution to Fermi's paradox: Technological civilizations might frequently arise in the galaxy, but they always destroy themselves quickly, so none happen to be detectable right now. Self-destruction answers to Fermi's paradox tend to focus on the likelihood of an immensely destructive war (e.g., nuclear or biological), environmental catastrophe, or the accidental release of destructive technology (e.g., nanobots). My hypothesis is compatible with all of those, but it's also, I think, a bit different: A single superorganism might die simply of disease (e.g., a self-replicating flaw) or malnutrition (e.g., a risky bet about next year's harvest) or suicide.

For this "solution" -- or really, at best I think, partial solution -- to work, at least three things would have to be true:

(1.) Technological civilizations would have to (almost) inevitably merge into a single superorganism. I think this is at least somewhat plausible. As technological capacities develop, societies grow more intricately dependent on the functioning of all their parts. Few Californians could make it, now, as subsistence farmers. Our lives are entirely dependent upon a well-functioning system of mass agriculture and food delivery. Maybe this doesn't make California, or the United States, or the world as a whole, a full-on superorganism yet (though the case could be made). But if an organism is a tightly integrated system each of whose parts (a.) contributes in a structured way to the well-being of the system as a whole and (b.) cannot effectively survive or reproduce outside the organismic context, then it's easy to see how increasing technology might lead a civilization ever further in that direction -- as the individual parts (individual human beings or their alien equivalents) gain efficiency through increasing specialization and increased reliance upon the specializations of others. Also, if we imagine competition among nation-level societies, the most-integrated, most-organismic societies might tend to outcompete the others and take over the planet.

(2.) The collapse of the superorganism would have to result in the near-permanent collapse of technological capacity. The individual human beings or aliens would have to go entirely extinct, or at least be so technologically reduced that the overwhelming majority of the planet's history is technologically primitive. One way this might go -- though not the only way -- is for something like a Maynard Smith & Szathmary major transition to occur. Just as individual cells invested their reproductive success into a germline when they merged into multicellular organisms (so that the only way for a human liver cell to continue into the next generation is for it to participate in the reproductive success of the human being as a whole), so also human reproduction might become germline-dependent at the superorganism level. Maybe our descendants will be generated from government-controlled genetic templates rather than in what we now think of as the normal way. If these descendants are individually sterile, either because that's more efficient (and thus either consciously chosen by the society or evolutionarily selected for) or because the powers-that-be want to keep tight control on reproduction, then there will be only a limited number of germlines, and the superorganism will be more susceptible to shocks to the germline.

(3.) The habitat would have to be small relative to the superorganism, with the result that there were only one or a few superorganisms. For example, the superorganism and the habitat might both be planet sized. Or there might be a few nation-sized superorganisms on one planet or across several planets -- but not millions of them distributed across multiple star systems. In other words, space colonization would have to be relatively slow compared to the life expectancy of the merged superorganisms. Again, this seems at least somewhat plausible.
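
The vulnerability claim behind (1.)-(3.) can be made vivid with a toy simulation (purely illustrative; the population sizes and shock probabilities are invented). If each organism independently suffers a lethal shock each year, and survivors repopulate the species, then the species dies only in a year when every member happens to die at once, so extinction risk falls off exponentially with the number of independent members:

```python
import random

def years_to_extinction(n_organisms, yearly_death_prob, max_years=1_000_000, rng=None):
    """Each year every organism independently dies with yearly_death_prob;
    survivors repopulate the species back to n_organisms. The species goes
    extinct only in a year when *all* organisms die at once (probability
    yearly_death_prob ** n_organisms). Returns the year of extinction, or
    max_years if the species is still alive then."""
    rng = rng or random.Random(0)
    for year in range(1, max_years + 1):
        if all(rng.random() < yearly_death_prob for _ in range(n_organisms)):
            return year
    return max_years

# A single planetwide superorganism vs. ten dispersed organisms, facing
# the same per-organism yearly risk.
print(years_to_extinction(1, 0.1))   # expected survival is only ~10 years
print(years_to_extinction(10, 0.1))  # extinction prob 1e-10 per year
```

On this toy model a single superorganism with a 10% yearly death risk lasts about a decade on average, while ten dispersed organisms facing the same per-member risk have a yearly extinction probability of one in ten billion. The model omits reproduction limits, correlated shocks, and germline bottlenecks, which is exactly where (2.) bites.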

To repeat: I don't think this could serve as a full solution to the Fermi paradox. If high-tech civilizations evolve easily and abundantly and visibly, we probably shouldn't expect all of them to collapse swiftly for these reasons. But perhaps it can combine with some other approaches, toward a multi-pronged solution.

It's also something to worry about, in its own right, if you're concerned about existential risks to humanity.

[image source]

Monday, May 12, 2014

New Essay in Draft: 1% Skepticism

My latest in crazy, disjunctive metaphysics:


A 1% skeptic is someone who has about a 99% credence in non-skeptical realism and about a 1% credence in the disjunction of all radically skeptical scenarios combined. The first half of this essay defends the epistemic rationality of 1% skepticism, appealing to dream skepticism, simulation skepticism, cosmological skepticism, and wildcard skepticism. The second half of the essay explores the practical behavioral consequences of 1% skepticism, arguing that 1% skepticism need not be behaviorally inert.
Full version here.
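
As a back-of-the-envelope illustration of the abstract's arithmetic (the per-scenario numbers below are invented stand-ins; the essay commits only to roughly 1% for the skeptical disjunction as a whole):

```python
# Hypothetical allocation of a 1% skeptic's credence space. The individual
# per-scenario numbers are invented for illustration; the view itself only
# commits to about 1% for the disjunction of all skeptical scenarios.
credences = {
    "non-skeptical realism": 0.99,
    "dream scenario": 0.003,
    "simulation scenario": 0.003,
    "cosmological scenario (e.g., Boltzmann brain)": 0.003,
    "wildcard (unconsidered or underrated scenarios)": 0.001,
}

skeptical_total = sum(p for k, p in credences.items()
                      if k != "non-skeptical realism")
assert abs(sum(credences.values()) - 1.0) < 1e-9  # credences exhaust the space
print(f"total skeptical credence: {skeptical_total:.3f}")  # prints 0.010
```

The wildcard entry is the meta-level reserve discussed above: credence set aside for skeptical scenarios not yet considered, or considered and underrated.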

(What I mean by crazy metaphysics.)
(What I mean by disjunctive metaphysics.)

As always, comments/reactions/discussion welcome, either as comments on this post or by direct email to me.

Tuesday, April 29, 2014

The 1935 Preface to Kant-Studien

When I was in Berlin in 2010, I spent some time in the Humboldt University library, looking through philosophy journals from the Nazi era, in connection with my interest in the extent to which German philosophers either embraced or resisted Nazism. (Summary version: about 1/3 embraced Nazism, about 1/3 rejected Nazism, and about 1/3 ducked their heads and kept quiet.)

The journals differed in their degree of Nazification. Perhaps the most Nazified was Kant-Studien, which at the time was one of the leading German-language journals of general philosophy (not just a journal for Kant scholarship). The old issues of Kant-Studien aren't available online, but I took some photos. Here, Sascha Fink and I have translated the preface to Kant-Studien Vol. 40 (1935), p. 3-4 (emphasis added):


Kant-Studien, now under its new leadership that begins with this first issue of the 40th volume, sets itself a new task: to bring the new will, in which the deeper essence of the German life and the German mind is powerfully realized, to a breakthrough in the fundamental questions as well as the individual questions of philosophy and science.

Guiding us is the conviction that the German Revolution is a unified metaphysical act of German life, which expresses itself in all areas of German existence, and which will therefore – with irresistible necessity – put philosophy and science under its spell.

But is this not – as is so often said – to snatch away the autonomy of philosophy and science and give it over to a law alien to them?

Against all such questions and concerns, we offer the insight that moves our innermost being: That the reality of our life, that shapes itself and will shape itself, is deeper, more fundamental, and more true than that of our modern era as a whole – that philosophy and science, which compete for it, will in a radical sense become liberated to their own essence, to their own truth. Precisely for the sake of truth, the struggle with modernity – maybe with the basic norms and basic forms of the time in which we live – is necessary. It is – in a sense that is alien and outrageous to modern thinking – to recapture the form in which the untrue and fundamentally destroyed life can win back its innermost truth – its rescue and salvation. This connection of the German life to fundamental forces and to the original truth of Being and its order – as has never been attempted in the same depth in our entire history – is what we think of when we hear that word of destiny: a new Reich.

If on the basis of German life German philosophy struggles for this truly Platonic unity of truth with historical-political life, then it takes up a European duty. Because it poses the problem that each European people must solve, as a necessity of life, from its own individual powers and freedoms.

Again, one must – and now in a new and unexpected sense, in the spirit of Kant’s term, “bracket knowledge” [das Wissen aufzuheben]. Not for the sake of negation: but to gain space for a more fundamental form of philosophy and science, for the new form of spirit and life [für die neue Form ... des Lebens Raum zu gewinnen]. In this living and creative sense is Kant-Studien connected to the true spirit of Kantian philosophy.

So we call on the productive forces of German philosophy and science to collaborate in these new tasks. We also turn especially to foreign friends, confident that in this joint struggle with the fundamental questions of philosophy and science, concerning the truth of Being and life, we will gain not only a deeper understanding of each other, but also develop an awareness of our joint responsibility for the cultural community of peoples.

-- H. Heyse, Professor of Philosophy, University of Königsberg


In the 1910s through 1930s, especially in Germany, philosophers tended to occupy the political right (including cheering on World War I and ostracizing Bertrand Russell for not doing so) -- deploying, as here, the tools of their discipline in the service of what we can now recognize as hideous views. Heidegger was by no means alone in doing so, nor the worst offender.

The political views of the mainstream 21st-century philosophical community are very different and, I'd like to think, much better grounded. It would be nice, though, if we had a more trustworthy method for distinguishing tissues of noxious rationalization from real philosophical insight.


For a transcription of the original German, see the Underblog.

For a fuller historical discussion of the role of Kant-Studien in the Third Reich, see this article (in German).

If you zoom in on the title-page image above, you will see that it promises two pictures of Elisabeth Foerster-Nietzsche, Nietzsche's famously antisemitic sister. The volume does include two full-page photos of her (though one appears to be merely a close-up of the other), alongside a fawning obituary of the "wise, gracious" Elisabeth.

Wednesday, April 23, 2014

How to Be a Part of God's Mind

I'm at the biennial Tucson conference Toward a Science of Consciousness, so wild speculation about consciousness is the order of the day! In the first plenary session, psychologist Don Hoffman argued that the world contains no physical objects, only minds in interaction with each other, each of which is massively deluded about its environment. After that, my paper in the next session arguing that "if materialism is true, the United States is probably conscious" seemed relatively tame. So in the spirit of the day, let me uncork another of the wild possibilities I've recently been considering: idealist pantheism, the view that the world consists only of one thing, God's mind.

Might idealist pantheism be true? I'm not sure why it couldn't be. I can't refute it by, say, kicking a stone. Seeming tactile and visual experiences of stones, without physical stones underneath, might all be part of God's plan. It's a bizarre view, perhaps, sharply in conflict with common sense. But something bizarre might well be true. Indeed, I've argued that something bizarre must be true about the basic structure of the cosmos: Common sense is not well-tuned to get it right about such matters, and all of the viable options (e.g., multiverse theory) appear to be highly bizarre.

If idealist pantheism is true, then my mind would have to be part of God's mind. How would that work?

We would have to deny a certain version of the view that consciousness is unified. Assuming that you exist and that I can neither access your thoughts directly nor experience your thoughts as my own, then it must be the case that some parts of God's mind are out of touch with other parts. I see no incoherence in this idea, though, as long as we allow divine mental unity at some higher level of organization.

Divine mental unity might work in part through introspection. God might be able to directly introspect the contents of each individual's mind. On an access view of introspection, this might involve God's having direct access to the contents of each of our minds rather than indirect access (via perception of our bodies). We might imagine a causal process by which each mental state of each individual mind directly produces a judgment, in some part of God's mind to which no individual person has access, that that person is in that mental state. One way this might be realized would be through a divine version of Global Workspace Theory: Each person might be like a separate processing module in the cosmic mind, whose contents are fed into a divine cognitive processing system that integrates the inputs.
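
The divine Global Workspace picture just sketched can be caricatured in code (a speculative toy; all names and the aggregation rule are invented for illustration): each person is a module whose contents the central workspace collects directly, without independently re-evaluating them.

```python
# Speculative sketch of the divine Global Workspace picture described above.
# Each person-module broadcasts its contents into a central workspace that
# merely collects them; because the modules are parts of the same mind,
# this counts as introspection rather than perception.
class PersonModule:
    def __init__(self, name):
        self.name = name
        self.contents = []

    def think(self, thought):
        self.contents.append(thought)

class DivineWorkspace:
    def __init__(self, modules):
        self.modules = modules

    def introspect(self):
        # Direct access to each module's contents, with no independent
        # re-assessment: the workspace thinks *through* the modules.
        return {m.name: list(m.contents) for m in self.modules}

alice, bob = PersonModule("alice"), PersonModule("bob")
god = DivineWorkspace([alice, bob])
alice.think("the sea is calm")
bob.think("the sea is rough")
print(god.introspect())
```

Note what the sketch leaves out, deliberately: the workspace has no evaluation step of its own, mirroring the constraint that if every individual thought were independently assessed and held at a distance, we would have ordinary theism rather than pantheism.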

But in order for this to be introspection rather than perception, these inputs into the divine mental workspace would have to be inputs from pieces of God's mind rather than from things external to God's mind. And that means that God would have to think with and through us, not merely about us. This probably requires some kind of divine limitation or restraint or trust. If every one of my thoughts were independently assessed by God and handled with suspicion -- if my thoughts did not, in some sense, normally speak for God or for some part of God, if they were normally held at a distance for evaluation as though not God's own -- then I think what we would have is not pantheism but rather the more ordinary view that I am one thing and God is another thing who judges me.

What I am imagining, then, is a rather unusual conjunction of views: vast divine knowledge of the contents of our minds combined with a lack of divine mental independence. God would have to have lots of knowledge but not a lot of processing power in the central workspace -- whatever processing power God has would have to be to a substantial extent actually distributed among us. If so, then presumably our collective judgment would have to in some manner constitute the divine judgment and probably too our collective action would have to in some manner constitute divine action. Otherwise we would not be part of God's mind but something outside of God.

Let me admit that the likelihood of all this being true seems to me rather small -- though since it seems at least possible and since I mistrust common sense in matters cosmological, I'm not sure what justifies my inclination against it.

[image source]

Monday, April 14, 2014

What Kelp Remembers

Weird Tales, one of the best and oldest horror and dark fantasy magazines, has just launched a new series of ultra-short flash fiction (under 500 words), Flashes of Weirdness. To inaugurate the series, they've chosen a piece of mine -- which is now my second publication in speculative fiction.

My philosophical aim in the story -- What Kelp Remembers -- is to suggest that on a creationist or simulationist cosmology, the world might serve a very different purpose than we're normally inclined to think.

At some point, I want to think more about the merit of science fiction as a means of exploring metaphysical and cosmological issues of this sort. I suspect that fiction has some advantages over standard expository prose as a philosophical tool in this area, but I'm not satisfied that I really understand why.

Friday, April 11, 2014

Meta-Analysis of the Effect of Religion on Crime: The Missing Positive Tail

I think the most recent meta-analysis of the relationship between religiosity and crime is still Baier and Wright 2001. I'm reviewing it again in preparation for a talk I'm giving Sunday on what happens when there's a non-effect in psychology but researchers are disposed to think there must be an effect.

I was struck by this graph from Baier and Wright:

Note that the x-axis scale is negative, showing the predicted negative relationship between religiosity and crime. (Religiosity is typically measured either by self-reported religious belief or by self-reported religious behavior such as attendance at weekly services.)

The authors comment:

The mean reported effect size was r = -.12 (SD = .09), and the median was r = -.11. About two-thirds of the effects fell between -.05 and -.20, and, significantly, none of them was positive. (p. 14, emphasis added)

Hm, I think. No positive tail?! I'm not sure that I would interpret that fact the same way Baier and Wright seem to.

Then I think: Hey, let's try some Monte Carlos!

Baier and Wright report 79 effect sizes from previous studies, graphed above. Although the distribution doesn't look quite normal, I'll start my Monte Carlos by assuming normality, using B&W's reported mean and SD. Then I'll generate 10,000 sets of 79 random values (representing hypothetical effect sizes), each set normally distributed with that mean and SD.

Of the 10,000 simulated distributions of 79 effect sizes with that mean and SD, only 9 distributions (0.09%) are entirely zero or negative. So I think we can conclude that it's not chance that the positive tail is missing. The options are: (a.) The population mean is higher than B&W report or the SD is lower, (b.) The distribution isn't normal, (c.) The positive effect-size studies aren't being reported.
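For concreteness, here's a minimal sketch of that first simulation (my own code, not B&W's; using NumPy, with the exact count depending on the random seed):

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_missing_positive_tail(mean=-0.12, sd=0.09, n_studies=79, n_sims=10_000):
    """Fraction of simulated 79-study literatures containing no positive effect size."""
    sims = rng.normal(mean, sd, size=(n_sims, n_studies))
    return (sims.max(axis=1) <= 0).mean()

print(frac_missing_positive_tail())  # a tiny fraction, well under 1%
```

With B&W's reported mean and SD, a literature with no positive-tail study at all is a rare accident under the normal model.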

My money is on (c). But let's try (a). How far negative would the mean have to be (holding the SD fixed) for at least 20% of the Monte Carlos to show no positive values? In my Monte Carlos it happens between means of -.18 and -.19. But the graph above is clearly not a graph of a sample from a population with that mean (which would fall near the top of the fourth bar from the left). This is confirmable by a t-test on the distribution of effect sizes reported in their study (one-sample vs. -.185, p < .001). Similar considerations show that it can't be an SD issue.
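The search over candidate means can be sketched as follows (same normal model as above; the function names and step size are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_no_positive(mean, sd=0.09, n_studies=79, n_sims=10_000):
    """Fraction of simulated literatures with no positive effect size."""
    sims = rng.normal(mean, sd, size=(n_sims, n_studies))
    return (sims.max(axis=1) <= 0).mean()

def threshold_mean(target=0.20):
    # Walk the mean downward from -.12 until at least `target` of the
    # simulated literatures lack a positive tail.
    for mean in np.arange(-0.12, -0.21, -0.005):
        if frac_no_positive(mean) >= target:
            return mean
    return None

print(threshold_mean())
```

Under this model the threshold comes out around -.18 to -.19, matching the range in the text.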

How about (b)? The eyeballed distribution looks a bit skewed, anyway -- maybe that's the problem? The graph can easily be unskewed by taking the square root of the absolute values of the effect sizes. The resulting distribution is very close to normal (both by eyeball and by Anderson-Darling test). This delivers the desired conclusion -- only 35% of my Monte Carlos end up with even a single positive-tail study, so a missing positive tail would be unsurprising -- but it delivers this result at the cost of making sense. Taking the square root magnifies the differences between very small effect sizes and diminishes the differences between large effect sizes. It inflates the gap between a study with effect size r = .00 and one with r = -.02 into a larger-magnitude difference than the gap between r = -.30 and r = -.47. (All these r's are actually present in the B&W dataset.) The two r = .00 studies in the B&W dataset become outliers far from the three r = -.02 studies, and it's this artificial inflation of an immaterial difference that explains the seeming Monte Carlo confirmation after the square-root "correction".
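The distortion is easy to verify with the specific values mentioned: under the square-root transform, the gap between r = .00 and r = -.02 really does exceed the gap between r = -.30 and r = -.47.

```python
import math

# Square-root transform of absolute effect sizes
gap_small = math.sqrt(0.02) - math.sqrt(0.00)  # between r = .00 and r = -.02
gap_large = math.sqrt(0.47) - math.sqrt(0.30)  # between r = -.30 and r = -.47

print(round(gap_small, 3), round(gap_large, 3))  # 0.141 0.138
```

A raw-scale difference of .02 has been stretched past a raw-scale difference of .17.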

So the best explanation would seem to be (c): We're not seeing the missing tail because, at least as of 2001, the research that would be expected, even if just by chance, to show even a non-significant positive relationship between religiosity and crime simply isn't published.

If researchers also show a systematic bias toward publishing their research that shows the largest negative relationship between religiosity and crime, we can even get something like Baier and Wright's distribution with a mean effect size of zero.

Here's the way I did it: I assumed that the mean effect size of religiosity on crime is 0.0 and that the SD of the effect sizes among studies was 0.12. I assumed 100 researchers, 25% of whom ran only one independent analysis, 25% of whom ran 2 analyses, 25% of whom ran 4, and 25% of whom ran 8. I assumed that each researcher published only their "best" result (i.e., the greatest negative relationship), and only if the trend was non-positive. I then ran 10,000 Monte Carlos. The average number of studies published was 80, the average published study's effect size was r = -.12, and the average SD of the published effect sizes was .08.
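A sketch of that publication-bias simulation (my reading of "only if the trend was non-positive" is that the researcher's most negative result must itself be non-positive; exact numbers will vary with the seed and with that interpretation):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_publication_bias(n_sims=10_000, true_mean=0.0, sd=0.12):
    """100 researchers each run k analyses (k = 1, 2, 4, or 8; 25 researchers per k),
    publishing only their most negative result, and only if it is non-positive."""
    counts = np.zeros(n_sims)
    sums = np.zeros(n_sims)
    for k in (1, 2, 4, 8):
        best = rng.normal(true_mean, sd, size=(n_sims, 25, k)).min(axis=2)
        published = best <= 0
        counts += published.sum(axis=1)
        sums += np.where(published, best, 0.0).sum(axis=1)
    # counts is virtually never zero, so the division below is safe in practice
    return counts.mean(), (sums / counts).mean()

avg_published, avg_effect = simulate_publication_bias()
print(avg_published, avg_effect)
```

With a true mean of zero, this setup publishes roughly 80 studies per simulated literature, all with non-positive effect sizes, despite there being no real effect.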

And it wasn't too hard to find a graph like this:

Pretty similar except for Baier & Wright's two outlier studies.

I don't believe that this analysis shows that religion and crime are unrelated. I suspect they are related, if in no other way than by means of uncontrolled confounds. But I do think this analysis suggests that a non-effect plus a substantial positivity bias in publication could result in a pattern of reported effects that looks a lot like the pattern that is actually reported.

This is, of course, a file-drawer effect, and perhaps it could be corrected by a decent file-drawer analysis. But: Baier and Wright don't attempt such an analysis. And maybe more importantly: The typical Rosenthal-style file-drawer analysis assumes that the average unpublished result has an effect size of zero, whereas the effect above involves removing wrong-sign studies disproportionately often, and so couldn't be fully corrected by such an analysis.

Thursday, April 10, 2014

Philosophy Festival: "How the Light Gets In"

Interesting philosophy festival coming up in late May in western England. Speakers are drawn from a wide range of fields in addition to philosophy, in both the sciences and the arts. Among the philosophy speakers are Thomas Pogge, Huw Price, John Heil, Simon Blackburn, Angie Hobbs, Ted Honderich, Margaret Boden, Mark Rowlands, Jennifer Hornsby, Nancy Cartwright, Barry C. Smith, James Ladyman, Daniel Stoljar, Bernard-Henri Levy, Hubert Dreyfus, and Mary Midgley. Lots of other super-cool folks too: Stephen King, Roger Penrose, Cory Doctorow....

Wish I could be there!

Monday, April 07, 2014

The Incredible Shrinking Kid

Tania Lombrozo's newest post at NPR reminded me of a phenomenon I've often noticed: After going away on a trip for several days, when I return home it seems to me that my children have grown enormously over those few days!

It's not that they've actually grown, of course. My hypothesis is this: During my time away, my memory of my children grows a bit vaguer. Whereas my memory of them when I come home tonight might be an average of their appearance over the last few days, my memory when I come home after a week away might be an average of their appearance over a longer span of time -- maybe a month or two. Then when I return, they seem to have done a month's worth of growing in just that one week. The effect has been most striking during the periods my children have grown fastest (infancy to early childhood, and then my son's incredible middle-school growth spurt).

I'm not sure I'd test this hypothesis by drawing lines on the wall, as the researchers did in the article Lombrozo discusses. I suspect that my memory of my children's height is much more accurate than can be measured by wall markings -- e.g., that I'd easily notice an inch of growth, even if I might be off by several inches if asked to estimate their heights on a blank wall. A more valid measure, if it can be done right, might be to artificially age a picture by tweaking it slightly toward or away from the kindchenschema (the characteristic infantile facial features that slowly fade as we age).

Friday, April 04, 2014

A Negative, Pluralist Account of Introspection

What is introspection? Nothing! Or rather, almost everything.

A long philosophical tradition, going back at least to Locke, has held that there is a distinctive faculty by means of which we know our own minds -- or at least our currently ongoing stream of conscious experience, our sensory experience, our imagery, our emotional experience and inner speech. "Reflection" or "inner sense" or introspection is, in this common view, a single type of process, yielding highly reliable (maybe even infallibly certain) knowledge of our own minds.

Critics of this approach to introspection have tended to either:

(a.) radically deny the existence of the human capacity to discover a stream of inner experience (e.g., radical behaviorism);

(b.) attribute our supposedly excellent self-knowledge of experience to some distinctive process other than introspection (e.g., expressivist or transparency approaches, on which "I think that..." is just a dongle added to a judgment about the outside world, no inward attention or scanning required); or

(c.) be pluralistic in the sense that we have one introspective mechanism to scan our beliefs, another to scan our visual experiences, another to scan our emotional experiences....

But here's another possibility: Introspective judgments arise from a range of processes that is diverse both within-case (i.e., lots of different processes feeding any one judgment) and between-case (i.e., very different sets of processes contributing to the judgment on different occasions), while still allowing that introspective judgments arise partly through a relatively direct sensitivity to the conscious experiences that they are judgments about.

Consider an analogy: You're at a science conference or a high school science fair, quickly trying to take in a poster. You have no dedicated faculty of poster-taking-in. Rather, you deploy a variety of cognitive resources: visually appreciating the charts, listening to the presenter's explanation, simultaneously reading pieces of the poster, charitably bringing general knowledge to bear, asking questions and listening to responses both for overt content and for emotional tone.... It needn't be the same set of resources every time (you needn't even use vision: sometimes you can just listen, if you're in the mood or visually impaired). Instead, you flexibly, opportunistically use a diverse range of resources, dedicated to the question of what are the main ideas of this poster, in a way that aims to be relatively directly sensitive to the actual content of the poster.

Introspection, in my view, is like that. If I want to know what my visual experience is right now, or my emotional experience, or my auditory imagery, I engage not one cognitive process that was selected or developed primarily for the purpose of acquiring self-knowledge; rather I engage a diversity of processes that were primarily selected or developed for other purposes. I look outward at the world and think about what, given that world, it would make sense for me to be experiencing right now; but also I am attuned to the possibility that I might not be experiencing that, ready to notice clues pointing a different direction. I change and shape my experience in the very act of thinking about it, often (but not always) in a way that improves the match between my experience and my judgment about it. I have memories (short- and long-term), associations, things that it seems more natural and less natural to say, views sometimes important to my self-image about what types of experience I tend to have, either in general or under certain conditions, emotional reactions that color or guide my response, spontaneous speech impulses that I can inhibit or disinhibit. Etc. And any combination of these processes, and others besides, can swirl together to precipitate a judgment about my ongoing stream of experience.

Now the functional set-up of the mind is such that some processes' outputs are contingent upon the outputs of other processes. Pieces of the mind stay in sync with what is going on in other pieces, keep a running bead on each other, with varying degrees of directness and accuracy. And so also introspective judgments will be causally linked to a wide variety of other cognitive processes, including normally both relatively short and relatively circuitous links from the processes that give rise to the conscious experiences that the introspective judgments are judgments about. But these kinds of contingencies imply no distinctive introspective self-scanning faculty; it's just how the mind must work, if it is to be a single coherent mind, and it happens preconsciously in systems no-one thinks of as introspective, e.g., in the early visual system, as well as farther downstream.

[For further exposition of this view, with detailed examples, see my essay Introspection, What?]