Wednesday, April 29, 2015

Duplicating the Universe

I've been thinking about two forms of duplication. One is duplication of the entire universe from beginning to end, as envisioned in Nietzsche's eternal return (cf. Poincaré's recurrence theorem on a grand scale). The other is duplication within an eternal (or very long) individual life (goldfish-pool immortality). In both cases, I find myself torn among four different evaluative perspectives.

For color, imagine a god watching our universe from Big Bang to heat death. At the end, this god says, "In total, that was good. Replay!" Or imagine an immortal life in which you loop repeatedly (without remembering) through the same pleasures over and over.

Consider four ways of thinking about the value of duplication:

1. The summative view: Duplicating a good thing doubles the world's goodness, all else being equal; and in particular duplicating the universe doubles the total sum of goodness. There's twice as much total happiness overall, for example. Although Nietzsche rejected the ethics of happiness-summing, something in the general direction of the summative view seems to be implicit in his suggestion that if we knew that the universe repeats infinitely, that would add infinite weight to every decision.

2. The indifference view: Repetition adds no value or disvalue, if it is a true repetition (no memory, no development, no audience-god watching saying "oh, I remember this... here comes the good part!"). You might even think, if the duplication is perfect enough, that there aren't even two metaphysically distinct things (Leibniz's identity of indiscernibles).

3. The diminishing returns view: A second run-through is good, but it doesn't double the goodness of the first run-through. For example, the total subjectively experienced happiness might be double, but there's something special about being the first person on the (or "a"?) moon, which is something that never happens in the second run -- and likewise something special about being the last episode of Seinfeld (or "Seinfeld"?) and about being the only copy of a Van Gogh painting (or a "Van Gogh" painting?), which the first run loses if a second run is added.

4. The precious uniqueness view: Expanding the last thought from the diminishing returns view, one might think that duplication somehow cheapens both runs, and that it's better to do things exactly once and be done.

Which of these four views is the best way of thinking about cosmic value (or the value of an extended life)?

You might think that this kind of question isn't amenable to rational argumentation -- that there is no discoverable fact of the matter about whether doubling is better. And maybe that's right. But consider this: Universe A is just like our universe. Universe B is just like our universe, but life on Earth never advances past microbial levels of complexity. If you think Universe A is overall better, or more creation-worthy (or, if you're enough of a pessimist, overall worse) than Universe B, then you think there are facts about the relative value of universes -- in which case, plausibly, there should also be some fact about whether a duplicative universe is a lot better, a little better, the same, or worse than a single-run universe. Yes?

There is, I think, at least a chance that this question, or a relative of it, will become a question of practical ethics in the future -- if we ever become "gods" who create universes of genuinely conscious people running inside of simulated environments (as I discuss here and here), or if we ever have the chance to "upload" into paradises of repetitive bliss.

[image source]

Monday, April 27, 2015

How to Make Van Gogh's "Starry Night" Undulate

I'm not sure of the original source for this one (maybe notbecauseitsironic on Reddit?).

First, look at the center of the image below for about 30 seconds.

Then look at Van Gogh's "The Starry Night".
The technique also achieves interesting results when applied to Kinkade:
[HT Mariano Aski]

Thursday, April 23, 2015

New Essay: Death and Self in the Incomprehensible Zhuangzi

Every nineteen years, I should write a new essay on the ancient Chinese philosopher Zhuangzi, don't you think? This one should tide me over until 2034, then!

Death and Self in the Incomprehensible Zhuangzi

The ancient Chinese philosopher Zhuangzi defies interpretation. This is an inextricable part of the beauty and power of his work. The text – by which I mean the “Inner Chapters” of the text traditionally attributed to him, the authentic core of the book – is incomprehensible as a whole. It consists of shards, in a distinctive voice – a voice distinctive enough that its absence is plain in most or all of the “Outer” and “Miscellaneous” Chapters, and which I will treat as the voice of a single author. Despite repeating imagery, ideas, style, and tone, these shards cannot be pieced together into a self-consistent philosophy. This lack of self-consistency is a positive feature of Zhuangzi. It is part of what makes him the great and unusual philosopher he is, defying reduction and summary.
Full draft here.

As always, comments, objections, suggestions welcome, either by email or as comments on this post.

See this post from March 5 for a briefer treatment of the same themes.

Wednesday, April 22, 2015

Rules of War, the Card Game, with Deck Management

I think you'll agree that few games are as tedious as the card game War. Unfortunately, my eight-year-old daughter likes the damned thing. So I cooked up some new rules, which make the game considerably more interesting and quicker to resolve.

(What does this have to do with the themes of this blog? Um. If widely adopted, the new rules will substantially reduce humanity's card-game-related dyshedons!)

War with Deck Management

Simple Rules for Two Players:

Deal the 52-card deck face down, 26 cards to each player. As in standard War, each player turns their top card face up on the table. High card wins the trick (ace high, suit ignored). The winner of the trick collects the cards face up in a pile. In case of a tie, there's a "war", and each player lays three "soldier" cards face down and then one "general" face up. The highest general wins all ten cards. If the generals tie, repeat. If there aren't enough face-down cards to play out the war, each player shuffles their face-up stack of won tricks and draws randomly from that stack to complete the war, then turns the stack back face up. If a player has insufficient cards to play out the war, that player loses the game.

When both players are out of face-down cards, one round is over. Each player counts their face-up cards openly, for all to see. The player with more cards then discards enough cards to equal the number of cards in the pile of the player with fewer cards. For example, if after Round 1, Player A has 30 cards and Player B has 22, then Player A discards 8 cards of his or her choice, so they both have 22.

Each player then turns their stack face down and shuffles, then plays Round 2 by the same rules as Round 1. After all cards are face up, the player with more cards again discards to match the number of cards in the stack of the player with fewer. This is repeated until one player runs out of cards and loses.
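For the programming-inclined, here's a rough Python sketch of the two-player game -- one way to make the rules precise enough to simulate. Everything here is my own construction: the function names, the rank encoding (2-14, ace high, suits ignored), the choice to have the round winner discard its lowest cards, and the handling of any leftover face-down cards after a lopsided war top-up (a case the rules above don't explicitly address).

```python
import random

def draw_for_war(deck, won, rng):
    # If a face-down deck can't cover a war (3 soldiers + 1 general),
    # draw randomly from the shuffled face-up won pile to complete it.
    while len(deck) < 4 and won:
        rng.shuffle(won)
        deck.append(won.pop())

def play_round(a, b, rng):
    # One round: play tricks until a face-down deck runs out.
    # Returns ('A' or 'B', None) if the game ends mid-war (the named
    # player wins), else (None, (a_won, b_won)) -- the face-up piles.
    a_won, b_won = [], []
    while a and b:
        pot = [a.pop(), b.pop()]          # pot[-2] is A's card, pot[-1] is B's
        while pot[-2] == pot[-1]:         # tie -> war
            draw_for_war(a, a_won, rng)
            draw_for_war(b, b_won, rng)
            if len(a) < 4:
                return 'B', None          # A can't play out the war: A loses
            if len(b) < 4:
                return 'A', None
            pot += [a.pop() for _ in range(3)]   # A's soldiers
            pot += [b.pop() for _ in range(3)]   # B's soldiers
            pot += [a.pop(), b.pop()]            # generals
        (a_won if pot[-2] > pot[-1] else b_won).extend(pot)
    # Edge case the rules don't address: leftover face-down cards
    # (possible after uneven war top-ups) simply stay with their owner.
    a_won += a; b_won += b
    a.clear(); b.clear()
    return None, (a_won, b_won)

def play_game(rng):
    deck = list(range(2, 15)) * 4         # ranks 2..14, ace high
    rng.shuffle(deck)
    a, b = deck[:26], deck[26:]
    while True:
        winner, piles = play_round(a, b, rng)
        if winner:
            return winner
        a_pile, b_pile = piles
        if not a_pile: return 'B'
        if not b_pile: return 'A'
        # Discard phase: the richer player tosses its lowest cards to match.
        a_pile.sort(); b_pile.sort()
        n = min(len(a_pile), len(b_pile))
        a, b = a_pile[-n:], b_pile[-n:]
        rng.shuffle(a); rng.shuffle(b)
```

Running play_game over many random seeds would give a rough sense of how much first-deal luck still matters under the new discard rule.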

Advantages over Standard War:

  • The game resolves much faster!
  • The winner of each round enjoys discarding away low cards instead of accumulating a bunch of losers.
  • In later rounds, wars are more common because the low cards are removed from the decks, leaving a smaller range of cards to match.
  • Although aces are important, the original distribution of the aces isn't as important as in standard war. This is partly because there are more wars, so there are more chances for aces to change hands as soldiers, and partly because a generally strong deck that wins more total cards gives a major advantage in the discard phase.
Advanced Rules with Deck-Order Management:

Rules as above, except that players may arrange their face-down cards in any order they wish. Once the cards are arranged face down, they can't be rearranged, and any wars that require drawing from the face-up pile are still based on random draw from a face-down shuffle.

Tactics: Since the top card will never be a soldier, you might want to make it your ace. But then if the other player does the same, you'll have a war. Anticipating that, you might make cards 2-4 low and card 5 high. But maybe you know your general will lose if the other player employs the same tactics, so you might surprise them by putting your 2 on top, so that the ace you think they'll play will be wasted gathering a low card. Etc.

Rules for More Than Two Players:

Divide the deck equally face down among the players. Any leftover cards go face up in the middle, to be collected by the winner of the first trick. High card wins the trick. If the high card is a tie, then the two (or more) players with the high card play a war. Any remaining player sits out the war, playing neither soldiers nor general. Winner takes all cards.

The round is over when at most one player has face-down cards remaining. Any player out of face-down cards before the end of the round sits out the remainder of the round, neither losing nor winning cards. At the end of the round each player counts their total cards. The player with the most cards discards to reduce to the number of cards held by the player with the second most. For example, if after Round 1 Player A has 22, Player B has 18, and Player C has 12, then Player A discards 4 so that Players A and B have 18 and Player C has 12.

When a player is out of cards, that player is out. As in the two-player version, this can happen either because the player wins no tricks in a round or because the player does not have enough cards to complete a war. The game is over when all but one player is out.

[image source]

Thursday, April 16, 2015

How to Disregard Extremely Remote Possibilities

In 1% Skepticism, I suggest that it's reasonable to have about a 1% credence that some radically skeptical scenario holds (e.g., this is a dream or we're in a short-term sim), sometimes making decisions that we wouldn't otherwise make based upon those small possibilities (e.g., deciding to try to fly, or choosing to read a book rather than weed when one is otherwise right on the cusp).

But what about extremely remote possibilities with extremely large payouts? Maybe it's reasonable to have a one in 10^50 credence in the existence of a deity who would give me at least 10^50 lifetimes' worth of pleasure if I decided to raise my arms above my head right now. One in 10^50 is a very low credence, after all! But given the huge payout, if I then straightforwardly apply the expected value calculus, such remote possibilities might generally drive my decision making. That doesn't seem right!
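To make the worry concrete, here is the naive expected-value arithmetic, using the illustrative numbers above. (I use exact rationals rather than floats, since numbers like 10^-50 invite floating-point mischief.)

```python
from fractions import Fraction

# Naive expected value of raising my arms, on the story above:
# a one-in-10^50 credence in a deity who pays out 10^50 lifetimes
# of pleasure.
credence = Fraction(1, 10**50)
payout_in_lifetimes = 10**50
expected_value = credence * payout_in_lifetimes
assert expected_value == 1  # one full lifetime of pleasure, in expectation
```

So on a straightforward expected-value calculus, the arm-raising gamble is worth a full lifetime of pleasure -- which is why something has to give.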

I see three ways to insulate my decisions from such remote possibilities without having to zero out those possibilities.

First, symmetry:
My credences about extremely remote possibilities appear to be approximately symmetrical and canceling. In general, I'm not inclined to think that my prospects will be particularly better or worse, due to the influence of extremely unlikely deities considered as a group, if I raise my arms than if I do not. More specifically, I can imagine a variety of unlikely deities who punish and reward actions in complementary ways -- one punishing what the other rewards and vice versa. (Similarly for other remote possibilities of huge benefit or suffering, e.g., happening to rise to an infinite Elysium if I step right rather than left.) This indifference among the specifics is partly guided by my general sense that extremely remote possibilities of this sort don't greatly diminish or enhance the expected value of such actions. I see no reason not to be guided by that general sense -- no argumentative pressure to take such asymmetries seriously in the way that there is some argumentative pressure to take dream doubt seriously.

Second, diminishing returns:
Bernard Williams famously thought that extreme longevity would be a tedious thing. I tend to agree instead with John Fischer that extreme longevity needn't be so bad. But it's by no means clear that 10^20 years of bliss is 10^20 times more choiceworthy than a single year of bliss. (One issue: If I achieve that bliss by repeating similar experiences over and over, forgetting that I have done so, then this is a goldfish-pool case, and it seems reasonable not to think of goldfish-pool cases as additively choiceworthy; alternatively, if I remember all 10^20 years, then I seem to have become something radically different in cognitive function than I presently am, so I might be choosing my extinction.) Similarly for bad outcomes and for extreme but instantaneous outcomes. Choiceworthiness might be very far from linear with temporal bliss-extension for such magnitudes. And as long as one's credence in remote outcomes declines sharply enough to offset increasing choiceworthiness in the outcomes, then extremely remote possibilities will not be action-guiding: a one in 10^50 credence of a utility of +/- 10^30 is negligible.
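That offsetting condition can be checked with toy numbers. As an illustrative assumption (the decay rates are mine, not anything argued for above), suppose the nth outcome has utility 10^n but credence only 10^-2n: then each remoter outcome contributes less expected value, not more, and even the whole tail is negligible.

```python
from fractions import Fraction

# Illustrative assumption: outcome n has utility 10^n, credence 10^-2n.
# Its expected-value contribution is then 10^-n -- shrinking, not exploding.
contributions = [Fraction(10**n, 10**(2 * n)) for n in range(1, 31)]
assert contributions[0] == Fraction(1, 10)
assert sum(contributions) < Fraction(1, 8)  # the entire tail sums below 1/8

# The figure from the text: a one-in-10^50 credence on a utility of 10^30
# contributes only 10^-20 in expectation.
assert Fraction(10**30, 10**50) == Fraction(1, 10**20)
```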

Third, loss aversion:
I'm loss averse rather than risk neutral. I'll take a bit of a risk to avoid a sure or almost-sure loss. And my life as I think it is, given non-skeptical realism, is the reference point from which I determine what counts as a loss. If I somehow arrived at a one in 10^50 credence in a deity who would give me 10^50 lifetimes of pleasure if I avoided chocolate for the rest of my life (or alternatively, a deity who would give me 10^50 units of pain if I didn't avoid chocolate for the rest of my life), and if there were no countervailing considerations or symmetrical chocolate-rewarding deities, then on a risk-neutral utility function, it might be rational for me to forego chocolate evermore. But foregoing chocolate would be a loss relative to my reference point; and since I'm loss averse rather than risk neutral, I might be willing to forego the possible gain (or risk the further loss) so as to avoid the almost-certain loss of life-long chocolate pleasure. Similarly, I might reasonably decline a gamble with a 99.99999% chance of death and a 0.00001% chance of 10^100 lifetimes' worth of pleasure, even bracketing diminishing returns. I might even reasonably decide that at some level of improbability -- one in 10^50? -- no finite positive or negative outcome could lead me to take a substantial almost-certain loss. And if the time and cognitive effort of sweating over decisions of this sort itself counts as a sufficient loss, then I can simply disregard any possibility where my credence is below that threshold.

These considerations synergize: the more symmetry and the more diminishing returns, the easier it is for loss aversion to inspire disregard. Decisions at credence one in 10^50 are one thing, decisions at credence 0.1% quite another.

Wednesday, April 15, 2015

Dialogues on Disability

... a new series of interviews, by Shelley Tremain, launches today at the Discrimination and Disadvantage blog with inaugural guest Bryce Huebner.

One interesting feature of the interview is Bryce's discussion of whether his celiac disease should be viewed as a disability. There is a broad sense in which virtually everyone is disabled -- we are nearsighted, have allergies, experience back pain, etc. Yet, given our social structures, many of these disabilities are hardly disabilities at all. If I lived in a world in which corrective lenses were inaccessible, my 20/500 nearsightedness would have a huge impact on my life. As it is, I pop on my glasses and no problem! (In fact, I'm terrific at reading tiny print that eludes most others my age.) When I was in southern China a couple of years ago, I had an allergic reaction to shellfish almost every day of my visit -- the food is so pervasive in the culture that even when it's not an ingredient, some residue often gets mixed in -- but in southern California, no problem. Conversely, in some culinary cultures, Bryce's celiac disease might hardly manifest; and we might imagine cultures or subcultures where being in a wheelchair is similarly experienced as only a minor inconvenience.

Monday, April 13, 2015

Comment Moderation Being Implemented

I will try to approve comments within 24 hours of submission. I'm sorry to have to do this! Eric

Wednesday, April 08, 2015

Blogging and Philosophical Cognition

Yesterday or today, my blog got its three millionth pageview since its launch in 2006. (Cheers!) And at the Pacific APA last week, Nancy Cartwright celebrated "short fat tangled" arguments over "tall skinny neat" arguments. (Cheers again!)

To see how these two ideas are related, consider this picture of Legolas and his friend Gimli Cartwright. (Note the arguments near their heads. Click to enlarge if desired.) [modified from image source]

Legolas: tall, lean, tidy! His argument takes you straight like an arrowshot all the way from A to H! All the way from the fundamental nature of consciousness to the inevitability of Napoleon. (Yes, I'm looking at you, Georg Wilhelm Friedrich.) All the way from seven abstract Axioms to Proposition V.42, "it is because we enjoy blessedness that we are able to keep our lusts in check". (Sorry, Baruch, I wish I were more convinced.)

Gimli: short, fat, knotty! His argument only takes you from versions of A to B. But it does it three ways, so that if one argument fails, the others remain. It does so without need of a string of possibly dubious intermediate claims. And finally, the different premises lend tangly sideways support to each other: A2 supports A1, A1 supports A3, A3 supports A2. I think of Mozi's dozen arguments for impartial concern or Sextus's many modes of skepticism.

In areas of mathematics, tall arguments can work -- maybe the proof of Fermat's last theorem is one -- long and complicated, but apparently sound. (Not that I would be any authority.) When each step is unshakeably secure, tall arguments go through. But philosophy tends not to be like that.

The human mind is great at determining an object's shape from its shading. The human mind is great at interpreting a stream of incoming sound as a sly dig on someone's character. The human mind is stupendously horrible at determining the soundness of philosophical arguments, and also at determining the soundness of most individual stages within philosophical arguments. Tall, skinny philosophical arguments -- this was Cartwright's point -- will almost inevitably topple.

Individual blog posts are short. They are, I think, just about the right size for human philosophical cognition: 500-1000 words, enough to put some flesh on an idea, making it vivid (pure philosophical abstractions being almost impossible to evaluate for multiple reasons), enough to make one or maybe two novel turns or connections, but short enough that the reader can get to the end without having lost track of the path there.

In the aggregate, blog posts are fat and tangled: Multiple posts can get at the same general conclusion from diverse angles. Multiple posts can lend sideways support to each other. I offer, as an example, my many posts skeptical of philosophical expertise (of which this is one): e.g., here, here, here, here, here, here.

I have come to think that philosophical essays, too, often benefit from being written almost like a series of blog posts: several shortish sections, each of which can stand semi-independently and which in aggregate lead the reader in a single general direction. This has become my metaphilosophy of essay writing, exemplified in "The Crazyist Metaphysics of Mind" and "1% Skepticism".

Of course there's also something to be said for Legolas -- for shooting your arrow at an orc halfway across the plain rather than waiting for it to reach your axe -- as long as you have a realistically low credence that you will hit the mark.

Tuesday, March 31, 2015

Percentages of Women on the Program of the Pacific APA

Tomorrow I head off to the Pacific Division meeting of the American Philosophical Association in Vancouver. (Thursday I'll be presenting my critique of Quassim Cassam's Self-Knowledge for Humans. Saturday, I'll be presenting on blameworthiness for implicit attitudes.) Given my interest in professional philosophy's skewed gender ratios (e.g. here and here), I thought I'd do a rough coding of the Pacific APA main program by gender. Alongside gender, I also coded role in the program and whether the session topic is ethics (including political philosophy).

I coded gender conservatively, declining to code names that I perceived as gender ambiguous (e.g., "Kris", "Jamie") or that I did not associate with a clear gender given my particular cultural background (most Asian names and some European names or unusual names), except when I had personal knowledge of the person's gender. As a result, 13% of the names remained unclassified. In a more careful coding, I would try to get the exclusions down below 5%.

With that caveat, I found that 275/856 (32%) of Pacific APA main program participants were women. Although this may sound low, it is substantially higher than the proportion of women in the profession overall, which is typically estimated to be in the low 20%'s in North America (e.g., here). (275/856 > 21%, two-tailed exact p < .001; even classifying all ambiguous names as men yields 28% vs. 21%, exact p < .001).
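For readers who want to check figures like these, here's a normal-approximation version of the one-proportion test (the post reports exact tests; at these sample sizes the approximation agrees -- and the function itself is my sketch, not part of the original analysis):

```python
import math

def one_proportion_ztest(successes, n, p0):
    # Two-tailed z-test of an observed proportion against baseline p0
    # (normal approximation to the exact binomial test).
    phat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (phat - p0) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p-value
    return z, p

# 275 women of 856 program participants, against the 21% baseline:
z, p = one_proportion_ztest(275, 856, 0.21)
assert p < 0.001

# Even the conservative coding (28% vs. 21%) stays significant:
z2, p2 = one_proportion_ztest(round(0.28 * 856), 856, 0.21)
assert p2 < 0.001
```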

These data can't fully be explained by recent changes in the proportion of women entering the profession: According to the Survey of Earned Doctorates, 27% of philosophy PhDs in 2013 were women (also 27% in 2012). So even if newly-minted PhDs are more likely to attend conferences, that wouldn't raise the percentage of women to 32%. Affirmative action might be playing a role -- probably other factors too. Plenty of room for speculation.

Since it's often thought that the gender distribution is closer to equal in ethics than in other areas of philosophy, I also coded sessions as "ethics" vs. "non-ethics" vs. "excluded" (excluded sessions being topically borderline or mixed or concerning general issues in the profession). I found the expected divergence: 38% of the ethics program participants were women, compared to 28% in non-ethics (Z = 3.0, p = .003).

Finally, I was interested to look at women's representation in different roles on the program. Some roles are much more prestigious than others: being the author of a book targeted for an author-meets-critics session is much more prestigious than chairing a session. I coded five levels of prestige:

  • 1: Author in an author-meets-critics, or award winner, or invited symposium speaker with at least one commentator focused exclusively on your work.
  • 2: Invited symposium speaker not meeting the criteria above, or "critic" at an author-meets-critics.
  • 3: Invited symposium commentator.
  • 4: Refereed colloquium speaker, or colloquium commentator.
  • 5: Session chair.
  • Excluded: APA organized sessions (e.g., on finding a community college position) and poster presentations (too few for meaningful analysis).
Of the people in the most prestigious roles in the program (Category 1), 13/52 (25%) are women. Although this appears to be a bit below the 32% representation of women in all other roles combined, this sample size is too small to permit any definite conclusions (one-proportion CI 14%-39%).

In the larger group of people with fairly prestigious roles (Category 2), 59/162 (36%) are women, similar to women's overall representation in the program. The group of symposium commentators was small -- 15/44 (34%) -- but in line with the overall numbers. The proportion of women presenting (usually anonymously refereed) colloquium papers was 85/310 (27%, CI 23%-33%), and the proportion of women chairing sessions was 77/221 (35%, CI 29%-42%). Thus, I found no clear tendency for women to appear disproportionately at either a higher or lower level of prestige than men.

Analysis of more years' data, which I hope to explore in the future, will give more power to detect smaller effect sizes, and will also allow temporal analysis, to see how representation of women in the profession has been changing over time. Ideas welcome!

Wednesday, March 25, 2015

"A" Is Red, "I" Is White, "X" Is Black -- Um, Why?

This is just the kind of dorky thing I think is cool. Check out this graph of the color associations for different letters for people with grapheme-color synesthesia.

[click on the picture for full size, if it's not showing properly]

This is from a sample of 6588 synesthetes in the US, reported in Witthoft, Winawer, and Eagleman 2015. Presumably, they're not talking to each other. Yet there's pretty good agreement that "A" is red, "X" is black, and "Y" is yellow. But you knew that already, right?

Now some of these results seem partly explicable: "Y" is yellow, maybe, because the word "yellow" starts with "Y". That might also work for "R" red, "B" blue, and "G" green. For "A" I think of the big red apple on the "A is for apple" posters that ubiquitously decorate kindergarten classrooms. But "O" is not particularly associated with orange in this chart, nor "W" with white. And why are "X" and "Z" black? That we're tired because it's near the end of the alphabet and our eyelids are starting to droop doesn't seem like a good answer. (Does it?)

You might wonder whether it's only synesthetes who have this consensus of associations, and how stable such associations are over time or between countries.

You're in luck, then, because here's another cool chart, from Australia in 2005!

[again, click for clearer view]

The colored bars are synesthetic respondents and the hatched bars are non-synesthetic respondents. The patterns are similar between synesthetes and non-synesthetes, but maybe with the non-synesthetes tending toward stronger associations between the color and the initial letter of the color word. Furthermore, again "A" is red, "I" is white, and "X" and "Z" are black. US and Australian synesthetes seem to agree that "O" is white, but the Australian non-synesthetes like their "O" orange. For some reason, "D" is now brown (47%!).

There are some older US data from the underappreciated early introspective psychologist Mary Whiton Calkins in her classic 1893 paper on synesthesia. [Pop quiz: Who are the only three people to have been president of both the American Psychological Association and the American Philosophical Association? Answer: William James, John Dewey, and Mary Whiton Calkins.] She reports that synesthetes tend to associate "I" with black and "O" with white. "O" being white matches the synesthete reports from the US and Australia in 2015 and 2005, but Calkins's black "I" is different. Calkins reports this possible explanation for the whiteness of "O", from one of her participants, seeming to find it plausible: O "= cipher = blank = sheet of white paper".

Witthoft et al. 2015 found that almost a sixth of their participants born in the US in the late 1970s (but not those born before 1967) seem to have letter-color associations that match much better than chance with the colors of the letters of this then-popular magnet toy:

[image source]

Neat finding. Of course, the darned toy has "X" purple and "Z" orange, so it's all wrong!

Brang, Rouw, Ramachandran, and Coulson 2011 find a weak tendency for similarly-shaped letters to associate with similar colors in a US sample. Irish-based Barnett et al. 2008 and British-based Simner et al. 2015 find broadly similar patterns to the other recent English-language populations.

Spector and Maurer 2011 find that even pre-literate English-speaking Canadian toddlers associate "O" and "I" with white and "X" and "Z" with black, though they do not share older participants' associations of "A" with red, "B" with blue, "G" with green, and "Y" with yellow. They hypothesize that jagged shapes ("X" and "Z") might be more likely to have shaded portions in a natural environment than non-jagged shapes ("O" and "I"), and that other, later associations might be language-based. However, color maps from Swiss research on German-language synesthetes (Beeli, Esslen, and Jaencke 2007) show no such relationship (see the chart on p. 790) -- for example, more participants associated "X" with white or light gray than with black or dark gray (though Simner et al. have a German subset which does show black associations with "X" and "Z"). Beeli et al. find a weak tendency for higher-frequency letters to be associated with higher-saturation colors in a German-language sample. Rouw et al. 2014 found that Dutch and English-speaking non-synesthetic participants had similar associations for "A" (red), "B" (blue), "D" (brown), "E" (yellow), "I" (white), and "N" (brown). Hindi participants, with their different alphabet, had a rather different set of associations -- though the first letter of the Hindi alphabet was also associated with red. They speculate that the first letter in each alphabet gets a "signal" color.

Okay, so now you know!

Let me leave you, then, with this highly unnatural thought:


    Thursday, March 19, 2015

    On Being Blameworthy for Unwelcome Thoughts, Reactions, and Biases

    As Aristotle notes (NE III.1, 1110a), if the wind picks you up and blows you somewhere you don't want to go, your going there is involuntary, and you shouldn't be praised or blamed for it. Generally, we don't hold people morally responsible for events outside their control. The generalization has exceptions, though. You're still blameworthy if you've irresponsibly put yourself in a position where you lack control, such as through recreational drugs or through knowingly driving a car with defective brakes.

    Spontaneous reactions and unwelcome thoughts are in some sense outside our control. Indeed, trying to vanquish them seems sometimes only to enhance them, as in the famous case of trying not to think of a pink elephant. A particularly interesting set of cases are unwelcome racist, sexist, and ableist thoughts and reactions: If you reflexively utter racist slurs silently to yourself, or if you imagine having sex with someone with whom you're supposed to be having a professional conversation, or if you feel flashes of disgust at someone's blameless disability, are you morally blameworthy for those unwelcome thoughts and reactions? Let's stipulate that you repudiate those thoughts and reactions as soon as they occur and even work to compensate for any bias.

    To help fix ideas, let's consider a hypothetical. Hemlata, let's say, lacks the kind of muscular control that most people have, so that she has a disvalued facial posture, uses a wheelchair to get around, and speaks in a way that people who don't know her find difficult to understand. Let’s also suppose that Hemlata is a sweet, competent person and a good philosopher. If the psychological literature on implicit bias is any guide, it's likely that it will be more difficult for Hemlata to get credit for intelligence and philosophical skill than it will be for otherwise similar people without her disabilities.

    Now suppose that Hemlata meets Kyle – at a meeting of the American Philosophical Association, say. Kyle’s first, uncontrolled reaction to Hemlata is disgust. But he thinks to himself that disgust is not an appropriate reaction, so he tries to suppress it. He is only partly successful: He keeps having negative emotional reactions looking at Hemlata. He doesn’t feel comfortable around her. He dislikes the sound of her voice. He feels that he should be nice to her; he tries to be nice. But it feels forced, and it’s a relief when a good excuse arises for him to leave and chat with someone else. When Hemlata makes a remark about the talk that they’ve both just seen, Kyle is less immediately disposed to see the value of the remark than he would be if he were chatting with someone non-disabled. But then Kyle thinks he should try harder to appreciate the value of Hemlata's comments, given Hemlata's disability; so he makes an effort to do so. Kyle says to Hemlata that disabled philosophers are just as capable as non-disabled philosophers, and just as interesting to speak with – maybe more interesting! – and that they deserve fully equal treatment and respect. He says this quite sincerely. He even feels it passionately as he says it. But Kyle will not be seeking out Hemlata again. He thinks he will; he resolves to. But when the time comes to think about how he wants to spend the evening, he finds a good enough reason to justify hitting the pub with someone else instead.

    Question: How should we think about Kyle?

    I propose that we give Kyle full credit for his thoughtful egalitarian judgments and intentions but also full blame for his spontaneous, uncontrolled – to some extent uncontrollable – ableism. The fact that his ableist reactions are outside of his control does not mitigate his blameworthiness for them. When the wind blows you somewhere, the fact that you ended up there does not reflect your attitudes or personality. In contrast, in Kyle's case, his ableist reactions, repudiated though they are, are partly constitutive of his attitudes and personality. Hemlata would not be wrong to find Kyle morally blameworthy for his unwelcome ableist reactions.

    Compare with the case of personality traits: Some people are more naturally sweet, some more naturally jerkish than others. Excepting bizarre or pathological cases, we praise or blame people for those dispositions without much attention to whether they worked hard to attain them or came by them easily or can't help but have them. Likewise, if you've been a spontaneous egalitarian as far back as you can remember, great! And if you've worked hard to become a thoroughgoing spontaneous egalitarian despite a strong natural tendency toward bias, also great, in a different way. And someone whose immediate reactions are so deeply, ineradicably sexist, racist, and ableist that there is no hope of ever obliterating those reactions is not thereby excused.

    This is a harder line, I think, than most philosophers take who write about blameworthiness for implicit bias (e.g., Jennifer Saul and Neil Levy).

    Part of my thought here is that words and theories and ineffective intentions are cheap. It's easy to say egalitarian things, with a feeling of sincerity. Among 21st-century liberals, you almost have to be a contrarian not to endorse egalitarian views at an intellectual level. It seems reasonable to give ourselves some credit for that, since egalitarianism (about the right things) is good. But we go too easy on ourselves if we think that such conscious endorsements and intentions are the main thing to which credit and blame should attach: Our spontaneous responses to people, our implicit biases, and the actual pattern of decisions we make are often not as handsome as our words and resolutions, and such things can also matter quite a bit to the people against whom we have these unwelcome thoughts, reactions, and biases. It seems a bit like excuse-making to step away from accepting full blame for that aspect of ourselves.

    (This, by the way, is the topic of the talk I'll be giving at the Pacific APA meeting, in the Group Session from 6-9 pm Saturday evening, April 4.)

    [image source]


    One compromise approach is to say that people are blameworthy only because, and to the extent that, their reactions are under their indirect control: Although Kyle now can't effectively eliminate his unwelcome reactions to Hemlata, he could earlier have engaged in a course of self-cultivation which could have reduced or eliminated his tendency toward such reactions, for example by repeatedly exposing himself to positive exemplars of disabled people. He should have taken those measures, but he didn't.

    Although I'm broadly sympathetic with that line of response, I see at least two problems with insisting that indirect control is necessary: First, indirect control comes in degrees. Presumably, for some people, some biases or unwelcome patterns of reaction would be fairly easily controlled if they made the effort, while for other people those same patterns might be practically impossible to eliminate; but in the ordinary course of assigning praise and blame we rarely inquire into such interpersonal differences in difficulty. Second, the full suite of unwelcome thoughts, reactions, and biases, if we consider not only sexism and racism but also the manifold versions of ableism, ageism, classism, bias based on physical attractiveness, and cultural bias, as well as the full pattern of unjustifiable angry, dismissive, insulting, and unkind thoughts we can have about people even apart from bias -- well, it's so huge that a self-improvement project focused on eliminating all of them would be hopeless and arguably so time-consuming that it would squeeze out many other things that also deserve attention. We are forced to choose our targets for self-improvement. But the practical impossibility of a program of self-cultivation that eliminates all unwelcome thoughts, reactions, and biases shouldn't excuse us from being blameworthy for those thoughts, reactions, and biases that remain. Given the difficulty, it's appropriately merciful to cut people some slack -- but that slack should be something like understanding and forgiveness rather than exemption from praise and blame.

    Update April 3:

    I've been getting a lot of helpful critique, both in the comments section and orally. Let me add two important qualifications:

    (1.) Pathologically obsessive thoughts probably deserve a different approach.

    (2.) The case I am most interested in is self-blame and self-critique, especially among those of us with a tendency to want to let ourselves off the hook. Secondarily, I want to affirm Hemlata's mixed reaction to Kyle (and other parallel cases). What I'm least interested in is licensing a person in a position of power to have a low opinion of others because of whatever unwelcome thoughts, reactions, and biases those others might have that the person in power might or might not have.

    Wednesday, March 11, 2015

    Perils of the Sweetheart

    Tonight in Palm Desert, I'm presenting my "Theory of Jerks (and Sweethearts)" to a general audience. (Come!) In my past work on the topic, jerks have got most of the attention. (Don't they always!) A jerk, in my definition, is someone who gives insufficient weight to (or culpably fails to respect) the perspectives of others around him, treating them as tools to be manipulated or fools to be dealt with rather than as moral and epistemic peers.

    The sweetheart is the opposite of the jerk -- someone who very highly values the perspectives of others around him.

    You might think that if being a jerk is bad, being a sweetheart is good. And I do think it's better, overall, to be a bit of a sweetheart if you can. But I'd also argue that it's possible to go too far toward the sweetheart side, overvaluing, or giving excessive weight to, the perspectives of others around you.

    I see three moral and epistemic perils in being too much of a sweetheart.

    First peril: The sweetheart risks being so attuned to others’ goals and interests that he is captured by them, losing track of his own priorities. Consider the person who never says “no” to others – who spends his whole day helping everyone else get their own things done, leaving insufficient time to relax or to satisfy his own long-term goals. The sweetheart might forget that he can also sometimes make his own demands. Sometimes you need to disappoint people. In the extreme, the sweetheart’s complicity in this arrangement becomes in fact a kind of moral failure – a failure of moral duty to a certain person who counts, who ought to be respected, who ought to be cut some slack and given a chance to flourish and discover independent ideals – I’m speaking here, of course, of the duties the sweetheart has to himself.

    Second peril: Because the sweetheart has so much respect for the opinions of other people who might disagree with him, he can have trouble achieving sufficient intellectual independence. This is part of the reason that visionary moralists are often not sweethearts. The perfect sweetheart hates disagreeing with others, hates taking controversial stands, prefers the compromise position in which everyone gets to be at least partly right. But everyone is not always partly right. Southerners oppressing black people were not partly right. Physically abusive alcoholic husbands are not partly right. Some people need to be fought against, and the purest sweethearts tend not to have much stomach for the fight. Also, some people, even if not morally wrong, are just factually wrong, and sometimes we need a clear, confident, disagreeable voice to see this.

    Third peril: To the extent being a jerk or sweetheart turns on how you react to the people around you, being too much of a sweetheart means risking being too captured by the perspectives of whoever happens to be around you -- without, perhaps, enough counterbalancing weight on the interests and perspectives of more distant people. The homeless person right here in front of you might compel you so much that you wrongly set aside other obligations so that you can help her, or you give her money that would be more wisely and effectively given to (say) Oxfam. When you’re with your friends who are liberal you find yourself agreeing with all their liberal positions; when you’re with your friends who are conservative you find yourself agreeing with all their conservative positions. You are blown about by the winds.

    If you know the cartoon SpongeBob SquarePants, you'll recognize that much of the show's humor and conflict derives from SpongeBob's excessive sweetness in these three ways.

    I’m not sure there’s a perfect Aristotelian golden mean here: an ideal spot on the spectrum from jerk to sweetheart. Maybe there’s one best way to be – partway toward the sweet side perhaps, but not all the way to doormat – but I’m more inclined to think that perfection is not even a conceivable thing, that one can’t be wholly true to oneself without sinning against others, that one can’t wholly satisfy the legitimate demands of others without sinning against oneself; that everyone is thus deficient in some ways.

    Furthermore, when we try to correct, often we don’t even know what direction to go in. It’s characteristic of the sweetheart to worry that he has been too harsh or insistent when in fact what he really needs is to be more comfortable standing up for himself; it’s characteristic of the jerk to regret moments of softness and compromise.

    (image source)

    Thursday, March 05, 2015

    Zhuangzi's Delightful Inconsistency about Death

    I've been working on a new paper on ancient Chinese philosophy, "Death and Self in the Incomprehensible Zhuangzi" (come hear it Saturday at Pitzer College, if you like). In it, I argue that Zhuangzi has inconsistent views about death, but that that inconsistency is a good thing that fits nicely with his overall philosophical approach.

    Most commentators, understandably, try to give Zhuangzi -- the Zhuangzi of the authentic "Inner Chapters" at least -- a self-consistent view. Of course! This is only charitable, you might think. And this is what we almost always try to do with philosophers we respect.

    There are two reasons not to take this approach to Zhuangzi.

    First, Zhuangzi seems to think that philosophical theorizing is always defective, that language always fails us when we try to force rigid distinctions upon it, and that logical reasoning collapses into paradox when pushed to its natural end (see especially Ch. 2). Thus, you might think that Zhuangzi should want to resist committing to any final, self-consistent philosophical theory.

    Second, Zhuangzi employs a variety of devices that seem intended to frustrate the reader's natural desire to make consistent sense of his work, including: stating patent absurdities with a seeming straight face; putting his words in the mouths of various dubious-seeming sources; using humor, parable, and parody; and immediately challenging or contradicting his own assertions.

    Thus, I think we can't interpret Zhuangzi in the way we'd interpret most other philosophers: He is not, I think, offering us the One Correct Theory or the Philosophical Truth. His task is different, more subtle, more about jostling us out of our usual habits and complacent confidence, while pushing us in certain broad directions.

    Given the brevity of the text, his comments about longevity and death are strikingly frequent. In my view, they exemplify his self-inconsistency in a fun and striking way. I see three strands:

    (1.) Living out your full span of years is better than dying young. For example, Zhuangzi appears to advocate that you "live out all your natural years without being cut down halfway" (Ziporyn trans., p. 39). He celebrates trees that are big and useless and thus never chopped down (p. 8, 30-31). He seems to prefer the useless yak who can't catch rats to the weasel who can and who therefore hurries about, dying in a snare (p. 8). He seems to think it a bad outcome to be killed by a tyrant (p. 25, p. 29-30) or to die because well-meaning friends have drilled holes in your head (p. 54). A butcher so skillful in carving oxen that his blade is still as sharp as if straight from the whetstone is described as knowing "how to nourish life" (p. 23).

    (2.) Living out your full span of years is not better than dying young. In seemingly more radical moments, Zhuangzi says that although the sage likes growing old, the sage also likes dying young (p. 43), that the "Genuine Human Beings of old understood nothing about delighting in being alive or hating death. They emerged without delight, submerged again without resistance" (p. 40). He seems to admire groups of friends who are not at all distressed by each others' deaths, who "look upon life as a dangling wart or a swollen pimple, and on death as its dropping off, its bursting and draining" (p. 46-47). Of "early death, old age, the beginning, the end", the sage sees "each of them as good" (p. 43).

    (3.) We don't know whether living out your full span of years is better than dying young. This view fits with the general skepticism Zhuangzi expresses in Chapter 2. It doesn't have as broad a base of direct textual support, but there is one striking passage to this effect:

    How, then, do I know that delighting in life is not a delusion? How do I know that in hating death I am not like an orphan who left home in youth and no longer knows the way back? Lady Li was a daughter of the border guard of Ai. When she was first captured and brought to Qin, she wept until tears drenched her collar. But when she got to the palace, sharing the king's luxurious bed and feasting on the finest meats, she regretted her tears. How do I know the dead don't regret the way they used to cling to life? (p. 19)
    You could try to reconcile these various strands into a consistent view. For example, you could say that they are targeted to readers of different levels of enlightenment (Allinson), or that they reflect different phases of Zhuangzi's intellectual development (possibly Graham). Or you might try to explain away one or the other strand: Maybe he really values death as much as he values life, as part of the infinite series of changes that is life-and-death (possibly Ames or Fraser), or maybe Zhuangzi's view is that it's only remote "sages", lacking something important, who are unmoved by death (Olberding). But each of these interpretations has substantial weaknesses, if intended as a means by which to reconcile the text into a self-consistent unity.

    [revision 6:40 pm: These statements are too compressed to be entirely accurate to these scholars' views and Olberding in particular suggests that in the course of personal mourning (outside the Inner Chapters) Zhuangzi seems to have a shifting attitude.]

    My own approach is to allow Zhuangzi to be inconsistent, since there's textual evidence that Zhuangzi is not trying to present a single, self-consistent philosophical theory. If Zhuangzi thinks that philosophical theorizing is always inadequate in our small human hands, then he might prefer to philosophize in a fragmented, shard-like way, expressing a variety of different, conflicting perspectives on the world. He might wish to frustrate, rather than encourage, our attempts to make neat sense of him, inviting us to mature as philosophers not by discovering the proper set of right and wrong views, but rather by offering his hand as he takes his smiling plunge into confusion and doubt.

    That delightfully inconsistent Zhuangzi is the one I love -- the Zhuangzi who openly shares his shifting ideas and confusions, rather than the Zhuangzi that most others seem to see, who has some stable, consistent theory underneath that for some reason he chooses not to display in plain language on the surface of the text.

    Related posts:
    Skill and Disability in Zhuangzi (Sep. 10, 2014)
    Zhuangzi, Big and Useless -- and Not So Good at Catching Rats (Dec. 19, 2008)
    The Humor of Zhuangzi; the Self-Seriousness of Laozi (Apr. 8, 2013)
    [image source]

    Update April 23:

    A full length draft is now up on my website.

    Wednesday, February 25, 2015

    Depressive Thinking Styles and Philosophy

    Recently I read two interesting pieces that I'd like to connect with each other. One is Peter Railton's Dewey Lecture to the American Philosophical Association, in which he describes his history of depression. The other is Oliver Sacks's New York Times column about facing his own imminent death.

    One of the inspiring things about Sacks's work is that he shows how people with (usually neurological) disabilities can lead productive, interesting, happy lives incorporating their disabilities and often even turning aspects of those disabilities into assets. (In his recent column, Sacks relates how imminent death has helped give him focus and perspective.) It has also always struck me that depression -- not only major, clinical depression but perhaps even more so subclinical depressive thinking styles -- is common among philosophers. (For an informal poll, see Leiter's latest.) I wonder if this prevalence of depression among philosophers is non-accidental. I wonder whether perhaps the thinking styles characteristic of mild depression can become, Sacks-style, an asset for one's work as a philosopher.

    Here's the thought (suggested to me first by John Fischer): Among the non-depressed, there's a tendency toward glib self-confidence in one's theoretical views. (On positive illusions in general among the non-depressed see this classic article.) Normally, conscious human reasoning works like this: First, you find yourself intuitively drawn to Position A. Second, you rummage around for some seemingly good argument or consideration in favor of Position A. Finally, you relax into the comfortable feeling that you've got it figured out. No need to think more about it! (See Kahneman, Haidt, etc.)

    Depressive thinking styles are, perhaps, the opposite of this blithe and easy self-confidence. People with mild depression will tend, I suspect, to be less easily satisfied with their first thought, at least on matters of importance to them. Before taking a public stand, they might spend more time imagining critics attacking Position A, and how they might respond. Inclined toward self-doubt, they might be more likely to check and recheck their arguments with anxious care, more carefully weigh up the pros and cons, worry that their initial impressions are off-base or too simple, discard the less-than-perfect, worry that there are important objections that they haven't yet considered. Although one needn't be inclined toward depression to reflect in this manner, I suspect that this self-doubting style will tend to come more naturally to those with mild to moderate depressive tendencies, deepening their thought about the topic at hand.

    I don't want to downplay the seriousness of depression, its often negative consequences for one's life including often for one's academic career, and the counterproductive nature of repetitive dysphoric rumination (see here and here), which is probably a different cognitive process than the kind of self-critical reflection that I'm hypothesizing here to be its correlate and cousin. [Update, Feb. 26: I want to emphasize the qualifications of that previous sentence. I am not endorsing the counterproductive thinking styles of severe, acute depression. See also Dirk Koppelberg's comment below and my reply.] However, I do suspect that mildly depressive thinking styles can be recruited toward philosophical goals and, if managed correctly, can fit into, and even benefit, one's philosophical work. And among academic disciplines, philosophy in particular might be well-suited for people who tend toward this style of thought, since philosophy seems to be proportionately less demanding than many other disciplines in tasks that benefit from confident, high-energy extraversion (such as laboratory management and people skills) and proportionately more demanding of careful consideration of the pros and cons of complex, abstract arguments and of precise ways of formulating positions to shield them from critique.

    Related posts:
    Depression and Philosophy (July 28, 2006)
    SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (August 14, 2014)

    Update April 23:

    The full-length circulating draft is now up on my academic website.

    Thursday, February 19, 2015

    Why I Deny (Strong Versions of) Descriptive Cultural Moral Relativism

    Cultural moral relativism is the view that what is morally right and wrong varies between cultures. According to normative cultural moral relativism, what varies between cultures is what really is morally right and wrong (e.g., in some cultures, slavery is genuinely permissible, in other cultures it isn't). According to descriptive cultural moral relativism, what varies is what people in different cultures think is right and wrong (e.g., in some cultures people think slavery is fine, in others they don't; but the position is neutral on whether slavery really is fine in the cultures that think it is). A strong version of descriptive cultural moral relativism holds that cultures vary radically in what they regard as morally right and wrong.

    A case can be made for strong descriptive cultural moral relativism. Some cultures appear to regard aggressive warfare and genocide as among the highest moral accomplishments (consider the book of Joshua in the Old Testament); others (ours) think aggressive warfare and genocide are possibly the greatest moral wrongs of all. Some cultures celebrate slavery and revenge killing; others reject those things. Some cultures think blasphemy punishable by death; others take a more liberal attitude. Cultures vary enormously on women's rights and obligations.

    However, I reject this view. My experience with ancient Chinese philosophy is the central reason.

    Here are the first passages of the Analects of Confucius (Slingerland trans., 2003):

    1.1. The Master said, "To learn and then have occasion to practice what you have learned -- is this not satisfying? To have friends arrive from afar -- is this not a joy? To be patient even when others do not understand -- is this not the mark of the gentleman?"
    1.2. Master You said, "A young person who is filial and respectful of his elders rarely becomes the kind of person who is inclined to defy his superiors, and there has never been a case of one who is disinclined to defy his superiors stirring up rebellion. The gentleman applies himself to the roots. 'Once the roots are firmly established, the Way will grow.' Might we not say that filial piety and respect for elders constitute the root of Goodness?"
    1.3. The Master said, "A clever tongue and fine appearance are rarely signs of Goodness."
    1.4. Master Zeng said, "Every day I examine myself on three counts: in my dealings with others, have I in any way failed to be dutiful? In my interactions with friends and associates, have I in any way failed to be trustworthy? Finally, have I in any way failed to repeatedly put into practice what I teach?"
    No substantial written philosophical tradition is culturally farther from the 21st century United States than is ancient China. And yet, while we might not personally endorse these particular doctrines, they are not alien. It is not difficult to enter into the moral perspective of the Analects, finding it familiar, comprehensible, different in detail and emphasis, but at the same time homey. Some people react to the text as a kind of "fortune cookie": full of boring and trite -- that is, familiar! -- moral advice. (I think this underestimates the text, but the commonness of the reaction is what interests me.) Confucius does not advocate the slaughter of babies for fun, nor being honest only when the wind is from the east, nor severing limbs based on the roll of dice. 21st century U.S. undergraduates might not understand the text's depths, but they are not baffled by it as they would be by a moral system that was just a random assortment of recommendations and prohibitions.

    You might think, "of course there would be some similarities!" The ancient Confucians were human beings, after all, with certain natural reactions and who needed to live in a not-totally-chaotic social system. Right! But then, of course, this is already to step away from the most radical form of descriptive cultural moral relativism.

    Still, you might say, the Analects is pretty morally different. The Confucian emphasis on being "filial", for example -- that's not really a big piece of U.S. culture. It's an important way in which the moral stance of the ancient Chinese differs from ours.

    This response, I think, underestimates two things.

    First, it underestimates the extent to which people in the U.S. do regard it as a moral ideal to care for and respect their parents. The word "filial" is not a prominent part of our vocabulary, but this doesn't imply that attachment to and concern for our parents is minor.

    Second, and more importantly, it underestimates the diversity of opinion in ancient China. The Analects is generally regarded as the first full-length philosophical text. The second full-length text is the Mozi. Mozi argues vehemently against the Confucian ideal of treating one's parents with special concern. Mozi argues that we should have equal concern for all people, and no more concern for one's parents than for anyone else's parents. Loyalty to one's state and prince he also rejects, as objectionably "partial". One's moral emphasis should be on ensuring that everyone has their basic necessities met -- food, shelter, clothing, and the like. Whereas Confucius is a traditionalist who sees the social hierarchy as central to moral life, Mozi is a radical, cosmopolitan, populist consequentialist!

    And of course, Daoism is another famous moral outlook that traces back to ancient China -- one that downplays social obligation to others and celebrates harmonious responsiveness to nature -- quite different again from Confucianism and Mohism.

    Comparing ancient China and the 21st century U.S., I see greater differences in moral outlook within each culture than I see between the cultures. With some differences in emphasis and in culturally specific manifestations, a similar range of outlooks flourishes in both places. (This would probably be even more evident if we had more than seven full-length philosophical texts from ancient China.)

    So what about slavery, aggressive warfare, women's rights, and the rest? Here's my wager: If you look closely at cultures that seem to differ from ours in those respects, you will see a variety of opinions on those issues, not a monolithic foreignness. Some slaves (and non-slaves) presumably abhor slavery; some women (and non-women) presumably reject traditional gender roles; every culture will have pacifists who despise military conquest; etc. And within the U.S., probably with the exception of slavery traditionally defined, there still is a pretty wide range of opinion about such matters, especially outside mainstream academic circles.

    [image source]

    Wednesday, February 11, 2015

    The Intrinsic Value of Self-Knowledge

    April 2, I'll be a critic at a Pacific APA author-meets-critics session on Quassim Cassam's book Self-Knowledge for Humans. (Come!) In the last chapter of the book, Cassam argues that self-knowledge is not intrinsically valuable. It's only, he says, derivatively or instrumentally valuable -- valuable to the extent it helps deliver something else, like happiness.

    I disagree. Self-knowledge is intrinsically valuable! It's valuable even if it doesn't advance some other project, valuable even if it doesn't increase our happiness. Cassam defends his view by objecting to three possible arguments for the intrinsic value of self-knowledge. I'll spot him those objections. Here are three other ways to argue for the intrinsic value of self-knowledge.

    1. The Argument from Addition and Subtraction.

    Here's what I want you to do: Imaginatively subtract our self-knowledge from the world while keeping everything else as constant as possible, especially our happiness or subjective sense of well-being. Now ask yourself: Is something valuable missing?

    Now imaginatively add lots of self-knowledge to the world while keeping everything else as constant as possible. Now ask: Has something valuable been gained?

    Okay, I see two big problems with this method of philosophical discovery. Both problems are real, but they can be partly addressed.

    Problem 1: The subtraction and addition are too vague to imagine. To do it right, you need to get into details, and the details are going to be tricky.

    Reply 1: Fair enough! But still: We can give it a try and take our best guess where it's leading. Suppose I suddenly knew more about why I'm drawn to philosophy. Wouldn't that be good, independent of further consequences? Or subtract: I think of myself as a middling extravert. Suppose I lose this knowledge. Stipulate again: To the extent possible, no practical consequences. Wouldn't something valuable be lost?

    Alternatively, consider an alien culture on the far side of the galaxy. What would I wish for it? Would I wish for a culture of happy beings with no self-knowledge? Or, if I imaginatively added substantial self-knowledge to this culture, would I be imagining a better state of affairs in the universe? I think the latter.

    Contrast with a case where addition and subtraction leave us cold: seas of iron in the planet's core. Unless there are effects on the planetary inhabitants, I don't care. Add or subtract away, whatever.

    Problem 2: What these exercises reveal is only that I regard self-knowledge as something that has intrinsic value. You might differ. You might think: happy aliens, no self-knowledge, great! They're not missing anything important. You might think that unless some practical purpose is served by knowing your personality, you might as well not know.

    Reply 2: This is just the methodological problem that's at the root of all value inquiries. I can't rationally compel you to share my view, if you start far enough away in value space. I can just invite you to consider how your own values fit together, suggest that if you think about it, you'll find you already do share these values with me, more or less.

    2. The Argument from Nearby Cases.

    Suppose you agree that knowledge in general is intrinsically valuable. A world of unreflective bliss would lack something important that a world of bliss plus knowledge would possess. I want my alien world to be a world with inhabitants who know things, not just a bunch of ecstatic oysters.

    Might self-knowledge be an exception to the general rule? Here's one reason to think not: Knowledge of the motivations and values and attitudes of your friends and family, specifically, is intrinsically good. Set this up with an Argument from Subtraction: Subtracting from the world people's psychological knowledge of people intimate to them would make the world a worse place. Now do the Nearby Cases step: You yourself are one of those people intimate to you! It would be weird if psychological knowledge of your friends were valuable but psychological knowledge of yourself were not.

    Unless you're a hedonist -- and few people, when they really think about it, are -- you probably think that there's some intrinsic value in the rich flourishing of human intellectual and artistic capacities. It seems natural to suppose that self-knowledge would be an important part of that general flourishing.

    3. The Argument from Identity.

    Another way to argue that something has intrinsic value is to argue that it is in fact identical to something that we already agree has intrinsic value.

    So what is self-knowledge? On my dispositional view (see here and here), to know some psychological fact about yourself is to possess a suite of dispositions or capacities with respect to your own psychology. An example:

    What is it to know you're an extravert? It's in part the capacity to say, truly and sincerely, "I'm an extravert". It's in part the capacity to respond appropriately to party invitations, by accepting them in anticipation of the good time you'll have. It's in part to be unsurprised to find yourself smiling and laughing in the crowd. It's in part to be disposed to conclude that someone in the room is an extravert. Etc.

    My thought is: Those kinds of dispositions or capacities are intrinsically valuable, central to living a rich, meaningful life. If we subtract them away, we impoverish ourselves. Human life wouldn't be the same without this kind of self-attunement or structured responsiveness to psychological facts about ourselves, even if we might experience as much pleasure. And self-knowledge is not some further representational entity floating free of those dispositional patterns, contingently connected to them and subtractable without taking them away too; it is those patterns.

    You might notice that this third argument creates some problems for the straightforward application of the Argument from Addition and Subtraction. Maybe in trying to imagine subtracting self-knowledge from the world while holding all else constant to the extent possible, you were imagining or trying to imagine holding constant all those dispositions I just mentioned, like the capacity to say yes appropriately to party invitations. If my view of knowledge is correct, you can't do that. What this shows is that the Argument from Addition and Subtraction isn't as straightforward as it might at first seem. It needs careful handling. But that doesn't mean it's a bad argument.


    I'd go so far as to say this: Self-knowledge, when we have it (which, I agree with Cassam, is less commonly than we tend to think), is one of the most intrinsically valuable things in human life. The world is a richer place because pieces of it can gaze knowledgeably upon themselves and the others around them.

    Tuesday, February 03, 2015

    How Robots and Monsters Might Break Human Moral Systems

    Human moral systems are designed, or evolve and grow, with human beings in mind. So maybe it shouldn't be too surprising if they would break apart into confusion and contradiction if radically different intelligences enter the scene.

    This, I think, is the common element in Scott Bakker's and Peter Hankins's insightful responses to my January posts on robot or AI rights. (All the posts also contain interesting comments threads, e.g., by Sergio Graziosi.) Scott emphasizes that our sense of blameworthiness (and other intentional concepts) seems to depend on remaining ignorant of the physical operations that make our behavior inevitable; we, or AIs, might someday lose this ignorance. Peter emphasizes that moral blame requires moral agents to have a kind of personal identity over time which robots might not possess.

    My own emphasis would be this: Our moral systems, whether deontological, consequentialist, virtue ethical, or relatively untheorized and intuitive, take as a background assumption that the moral community is composed of stably distinct individuals with roughly equal cognitive and emotional capacities (with special provisions for non-human animals, human infants, and people with severe mental disabilities). If this assumption is suspended, moral thinking goes haywire.

    One problem case is Robert Nozick's utility monster, a being who experiences vastly more pleasure from eating cookies than we do. On pleasure-maximizing views of morality, it seems -- unintuitively -- that we should give all our cookies to the monster. If it someday becomes possible to produce robots capable of superhuman pleasure, some moral systems might recommend that we impoverish, or even torture, ourselves for their benefit. I suspect we will continue to find this unintuitive unless we radically revise our moral beliefs.

    Systems of inviolable individual rights might offer an appealing answer to such cases. But they seem vulnerable to another set of problem cases: fission/fusion monsters. (Update Feb. 4: See also Briggs & Nolan forthcoming). Fission/fusion monsters can divide into separate individuals at will (or via some external trigger) and then merge back into a single individual later, with memories from all the previous lives. (David Brin's Kiln People is a science fiction example of this.) A monster might fission into a million individuals, claiming rights for each (one vote each, one cookie from the dole), then optionally reconvene into a single highly-benefited individual later. Again, I think, our theories and intuitions start to break. One presupposition behind principles of equal rights is that we can count up rights-deserving individuals who are stable over time. Challenges could also arise from semi-separate individuals: AI systems with overlapping parts.

    If genuinely conscious human-grade artificial intelligence becomes possible, I don't see why a wide variety of strange "monsters" wouldn't also become possible; and I see no reason to suppose that our existing moral intuitions and moral theories could handle such cases without radical revision. All our moral theories are, I suggest, in this sense provincial.

    I'm inclined to think -- with Sergio in his comments on Peter's post -- that we should view this as a challenge and occasion for perspective rather than as a catastrophe.

    [HT Norman Nason]

    Monday, February 02, 2015

    Brief Interview at The Magazine of Fantasy and Science Fiction

    ... here, about my story "Out of the Jar", which features a philosophy professor who discovers he's a sim running in the computer of a sadistic teenager.

    Friday, January 30, 2015

    Not quite up to doing a blog post this week, after the death of my father on the 18th. Instead, I post this picture of a highly energy efficient device in outer space:

    Related post: Memories of my father.

    Friday, January 23, 2015

    Memories of My Father

    My father, Kirkland R. Gable (born Ralph Schwitzgebel) died Sunday. Here are some things I want you to know about him.

    Of teaching, he said that authentic education is less about textbooks, exams, and technical skills than about moving students "toward a bolder comprehension of what the world and themselves might become." He was a beloved psychology professor at California Lutheran University.

    I have never known anyone, I think, who brought as much creative fun to teaching as he did. He gave out goofy prizes to students who scored well on his exams (e.g., a wind-up robot nun who breathed sparks of static electricity: "nunzilla"). Teaching about alcoholism, he would start by pouring himself a glass of wine (actually, water with food coloring), pouring more wine and acting drunker, arguing with himself, as the class proceeded. Teaching about child development, he would bring in my sister or me, and we would move our mouths like ventriloquist dummies as he stood behind us, talking about Piaget or parenting styles (and then he'd ask our opinion about parenting styles). Teaching about neuroanatomy, he brought in a brain jello mold, which he sliced up and passed around class for the students to eat ("yum! occipital cortex!"). Etc.

    As a graduate student and then assistant professor at Harvard in the 1960s and 1970s, he shared the idealism of his mentors Timothy Leary and B.F. Skinner, who thought that through understanding the human mind we can transform and radically improve the human condition -- a vision he carried through his entire life.

    His comments about education captured his ideal for thinking in general: that we should always aim toward a bolder comprehension of what the world and we ourselves, and the people around us, might become.

    He was always imagining the potential of the young people he met, seeing things in them that they often did not see in themselves. He especially loved juvenile delinquents, whom he encouraged to think expansively and boldly. He recruited them from street corners, paying them to speak their hopes and stories into reel-to-reel tapes, and he recorded their declining rates of recidivism as they did this, week after week. His book about this work, Streetcorner Research (1964), was a classic in its day. As a prospective graduate student in the 1990s, I proudly searched the research libraries at the schools I was admitted to, always finding multiple copies with lots of date stamps in the 1960s and 1970s.

    With his twin brother Robert, he invented the electronic monitoring ankle bracelet, now used as an alternative to prison for non-violent offenders.

    He wanted to set teenage boys free from prison, rewarding them for going to churches and libraries instead of street corners and pool halls. He had a positive vision rather than a penal one, and he imagined everyone someday using location monitors to share rides and to meet nearby strangers with mutual interests -- ideas which, in the 1960s, seem to have been about fifty years before their time.

    With degrees in both law and psychology, he helped to reform institutional practice in insane asylums -- which were often terrible places in the 1960s, whose inmates had no effective legal rights. He helped force these institutions to become more humane and to release harmless inmates held against their will. I recall his stories about inmates who were often, he said, "as sane as could be expected, given their current environment", and maybe saner than their jailors -- for example an old man who decades earlier had painted his neighbor's horse as an angry prank, and thought he'd "get off easy" if he convinced the court he was insane.

    As a father, he modeled and rewarded unconventional thinking. We never had an ordinary Christmas tree that I recall -- always instead a cardboard Christmas Buddha (with blue lights poking through his eyes), or a stepladder painted green, or a wild-found tumbleweed carefully flocked and tinseled -- and why does it have to be on December 25th? I remember a few Saturdays when we got hamburgers from different restaurants and ate them in a neutral location -- I believe it was the parking lot of a Korean church -- to see which burger we really preferred. (As I recall, my sister and he settled on the Burger King Whopper, while I could never confidently reach a preference, because it seemed like we never got the methodology quite right.)

    He loved to speak with strangers, spreading his warm silliness and unconventionality out into the world. If we ordered chicken at a restaurant, he might politely ask the server to "hold the feathers". Near the end of his life, if we went to a bank together he might gently make fun of himself, saying something like "I brought along my brain," here gesturing toward me with open hands, "since my other brain is sometimes forgetting things now". For years, though we lived nowhere near any farm, we had a sign from the Department of Agriculture on our refrigerator sternly warning us never to feed table scraps to hogs.

    I miss him painfully, and I hope that I can live up to some of the potential he so generously saw in me, carrying forward some of his spirit.


    I am eager to hear stories about his life from people he knew, so please, if you knew him, add one story (or more!) as a comment below. (Future visitors from 2018 or whenever, still post!) Stories are also being collected on his Facebook wall.

    We are planning a memorial celebration for him in July to which anyone who knew him would be welcome to come. Please email me for details if you're interested.

    Friday, January 16, 2015

    Two Arguments for AI (or Robot) Rights: The No-Relevant-Difference Argument and the Simulation Argument

    Wednesday, I argued that artificial intelligences created by us might deserve more moral consideration from us than do arbitrarily-chosen human strangers (assuming that the AIs are conscious and have human-like general intelligence and emotional range), since we will be partly responsible for their existence and character.

    In that post, I assumed that such artificial intelligences would deserve at least some moral consideration (maybe more, maybe less, but at least some). Eric Steinhart has pressed me to defend that assumption. Why think that such AIs would have any rights?

    First, two clarifications:

    • (1.) I speak of "rights", but the language can be weakened to accommodate views on which beings can deserve moral consideration without having rights.
    • (2.) AI rights is probably a better phrase than robot rights, since similar issues arise for non-robotic AIs, including oracles (who can speak but have no bodily robotic features like arms) and sims (who have simulated bodies that interact with artificial, simulated environments).

    Now, two arguments.


    The No-Relevant-Difference Argument

    Assume that all normal human beings have rights. Assume that both bacteria and ordinary personal computers in 2015 lack rights. Presumably, the reason bacteria and ordinary PCs lack rights is that there is some important difference between them and us. For example, bacteria and ordinary PCs (presumably) lack the capacity for pleasure or pain, and maybe rights only attach to beings with the capacity for pleasure or pain. Also, bacteria and PCs lack cognitive sophistication, and maybe rights only attach to beings with sufficient cognitive sophistication (or with the potential to develop such sophistication, or belonging to a group whose normal members are sophisticated). The challenge, for someone who would deny AI rights, would be to find a relevant difference which grounds the denial of rights.

    The defender of AI rights has some flexibility here. Offered a putative relevant difference, the defender of AI rights can either argue that that difference is irrelevant, or she can concede that it is relevant but argue that some AIs could have it and thus that at least those AIs would have rights.

    What are some candidate relevant differences?

    (A.) AIs are not human, one might argue; and only human beings have rights. If we regard "human" as a biological category term, then indeed AIs would not be human (excepting, maybe, artificially grown humans), but it's not clear why humanity in the biological sense should be required for rights. Many people think that non-human animals (apes, dogs) have rights. Even if you don't think that, you might think that friendly, intelligent space aliens, if they existed, could have rights. Or consider a variant of Blade Runner: There are non-humans among the humans, indistinguishable from outside, and almost indistinguishable in their internal psychology as well. You don't know which of your neighbors are human; you don't even know if you are human. We run a DNA test. You fail. It seems odious, now, to deny you all your rights on those grounds. It's not clear why biological humanity should be required for the possession of rights.

    (B.) AIs are created by us for our purposes, and somehow this fact about their creation deprives them of rights. It's unclear, though, why being created would deprive a being of rights. Children are (in a very different way!) created by us for our purposes -- maybe even sometimes created mainly with their potential as cheap farm labor in mind -- but that doesn't deprive them of rights. Maybe God created us, with some purpose in mind; that wouldn't deprive us of rights. A created being owes a debt to its creator, perhaps, but owing a debt is not the same as lacking rights. (In Wednesday's post, I argued that in fact as creators we might have greater moral obligations to our creations than we would to strangers.)

    (C.) AIs are not members of our moral community, and only members of our moral community have rights. I find this to be the most interesting argument. On some contractarian views of morality, we only owe moral consideration to beings with whom we share an implicit social contract. In a state of all-out war, for example, one owes no moral consideration at all to one's enemies. Arguably, were we to meet a hostile alien intelligence, we would owe it no moral consideration unless and until it began to engage with us in a socially constructive way. If we stood in that sort of warlike relation to AIs, then we might owe them no moral consideration even if they had human-level intelligence and emotional range. Two caveats on this: (1.) It requires a particular variety of contractarian moral theory, which many would dispute. And (2.) even if it succeeds, it will only exclude a certain range of possible AIs from moral consideration. Other AIs, presumably, if sufficiently human-like in their cognition and values, could enter into social contracts with us.

    Other possibly relevant differences might be proposed, but that's enough for now. Let me conclude by noting that mainstream versions of the two most dominant moral theories -- consequentialism and deontology -- don't seem to contain provisions on which it would be natural to exclude AIs from moral consideration. Many consequentialists think that morality is about maximizing pleasure, or happiness, or desire satisfaction. If AIs have normal human cognitive abilities, they will have the capacity for all these things, and so should presumably figure in the consequentialist calculus. Many deontologists think that morality involves respecting other rational beings, especially beings who are themselves capable of moral reasoning. AIs would seem to be rational beings in the relevant sense. If it proves possible to create AIs who are psychologically similar to us, those AIs wouldn't seem to differ from natural human beings in the dimensions of moral agency and patiency emphasized by these mainstream moral theories.


    The Simulation Argument

    Nick Bostrom has argued that we might be sims. That is, he has argued that we ourselves might be artificial intelligences acting in a simulated environment that is run on the computers of higher-level beings. If we allow that we might be sims, and if we know we have rights regardless of whether or not we are sims, then it follows that being a sim can't, by itself, be sufficient grounds for lacking rights. There would be at least some conceivable AIs who have rights: the sim counterparts of ourselves.

    This whole post assumes optimistic technological projections -- assumes that it is possible to create human-like AIs whose rights, or lack of rights, are worth considering. Still, you might think that robots are possible but sims are not; or you might think that although sims are possible, we can know for sure that we ourselves aren't sims. The Simulation Argument would then fail. But it's unclear what would justify either of these moves. (For more on my version of sim skepticism, see here.)

    Another reaction to the Simulation Argument might be to allow that sims have rights relative to each other, but no rights relative to the "higher level" beings who are running the sim. Thus, if we are sims, we have no rights relative to our creators -- they can treat us in any way they like without risking moral transgression -- and similarly any sims we create have no rights relative to us. This would be a version of argument (B) above, and it seems weak for the same reasons.

    One might hold that human-like sims would have rights, but not other sorts of artificial beings -- not robots or oracles. But why not? This puts us back into the No-Relevant-Difference Argument, unless we can find grounds to morally privilege sims over robots.


    I conclude that at least some artificial intelligences, if they have human-like experience, cognition, and emotion, would have at least some rights, or deserve at least some moral consideration. What range of AIs deserve moral consideration, how much moral consideration they deserve, and under what conditions, I leave for another day.

