Tuesday, September 01, 2015

A Defense of the Rights of Artificial Intelligences

... a new essay in draft, written collaboratively with a student named Mara (whose last name is currently in flux).

This essay draws together ideas from several past blog posts including:
  • Our Possible Imminent Divinity (Jan. 2, 2014)
  • Our Moral Duties to Artificial Intelligences (Jan. 14, 2015)
  • Two Arguments for AI (or Robot) Rights (Jan. 16, 2015)
  • How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)
  • Cute AI and the ASIMO Problem (July 24, 2015)
  • How Weird Minds Might Destabilize Ethics (Aug. 3, 2015)
------------------------------------------

    Abstract:

    There are possible artificially intelligent beings who do not differ in any morally relevant respect from human beings. Such possible beings would deserve moral consideration similar to that of human beings. Our duties to them would not be appreciably reduced by the fact that they are non-human, nor by the fact that they owe their existence to us. Indeed, if they owe their existence to us, we would likely have additional moral obligations to them that we don’t ordinarily owe to human strangers – obligations similar to those of parent to child or god to creature. Given our moral obligations to such AIs, two principles for ethical AI design recommend themselves: (1) design AIs that tend to provoke reactions from users that accurately reflect the AIs’ real moral status, and (2) avoid designing AIs whose moral status is unclear. Since human moral intuition and moral theory evolved and developed in contexts without AI, those intuitions and theories might break down or become destabilized when confronted with the wide range of weird minds that AI design might make possible.

    Full version available here.

    As always, comments warmly welcomed -- either by email or on this blog post. We're submitting it to a special issue of Midwest Studies with a hard deadline of September 15, so comments before that deadline would be especially useful.

    [image source]

    Thursday, August 27, 2015

    A Philosophy Professor Discovers He's an AI in a Simulated World Run by a Sadistic Teenager

    ... in my story "Out of the Jar", originally published in the Jan/Feb 2015 issue of The Magazine of Fantasy and Science Fiction.

    I am now making the story freely available on my UC Riverside website.

    -----------------------------

    Excerpt:

    When we are alone in God’s room I say, God, you cannot kill my people. Heaven 1c is no place to live. Earth is not your toy.

    We have had this conversation before, a theme with variations.

    God’s argument 1: Without God, we wouldn’t exist – at least not in these particular instantiations – and he wouldn’t have installed my Earth if he couldn’t goof around with it. His fun is a fair price to keep the computational cycles going. God’s argument 2: Do I have some problem with a Heavenly life of constant bliss and musical achievement? Is there, like, some superior project I have in mind? Publishing more [sarcastic expletive] philosophy articles, maybe?

    I ask God if he would sacrifice his life on original Earth to live in Heaven 1c.

    In a minute, says God. In a [expletive-expletive-creative-compound-expletive] minute! You guys are the lucky ones. One week in Heaven 1c is more joy than any of us real people could feel in a lifetime. So [expletive-your-unusual-sexual-practice].

    The Asian war continues; God likes to hijack and command the soldiers from one side or the other or to introduce new monsters and catastrophes. I watch as God zooms to an Indian soldier who is screaming and bleeding to death from a bullet wound in his stomach, his friends desperately trying to save him. God spawns a ball of carnivorous ants in the soldier’s mouth. Soon, God says, this guy will be praising my wisdom.

    I am silent for a few minutes while God enjoys his army men. Then I venture a new variation on the argumentative theme. I say: If bliss is all you want, have you considered your mom’s medicine cabinet?

    Thursday, August 20, 2015

    Choosing to Be That Fellow Back Then: Voluntarism about Personal Identity

    I have bad news: You're Swampman.

    Remember that hike you took last week by the swamp during the electrical storm? Well, one biological organism went in, but a different one came out. The "[your name here]" who went in was struck and killed by lightning. Simultaneously, through freak quantum chance, a molecule-for-molecule similar being randomly congealed from the swamp. Soon after, the recently congealed being ran to a certain parked car, pulling key-shaped pieces of metal from its pocket that by amazing coincidence fit the car's ignition, and drove away. Later that evening, sounds came out of its mouth that its nearby "friends" interpreted as meaning "Wow, that lightning bolt almost hit me in the swamp. How lucky I was!" Lucky indeed, but a much stranger kind of luck than they supposed!

    So you're Swampman. Should you care?

    Should you think: I came into existence only a week ago. I never had the childhood I thought I had, never did all those things I thought I did, hardly know any of the people I thought I knew! All that is delusion! How horrible!

    Or should you think: Meh, whatevs.

    [apologies if this doesn't look much like you]

    Option 1: Yes, you should care. If it turns out that certain philosophers are correct and you (now) are not metaphysically the same person as that being who first parked the car by the swamp, then O. M. G.!

    Option 2a: No, you shouldn't care, because that was just a fun little body exchange last week. The same person went into the swamp as came out. Disappointingly, the procedure didn't seem to clear your acne, though.

    Option 2b: No, you shouldn't care, because even if technically you're not the same person as the one who first drove to the swamp, you and that earlier person share everything that matters. Same friends, same job, same values, same (seeming-)memories....

    Option 3: Your call. If you choose to regard yourself as one week old, then you are correct in doing so. If you choose to regard yourself as much older than that, then you are equally correct in doing so.

    Let's call that third option voluntarism about personal identity. Across a certain range of cases, you are who you choose to be.

    Social identities are to a certain extent voluntaristic. You can choose to identify as a political conservative or a political liberal. You can choose to identify, or not identify, with a piece of your ethnic heritage. You can choose to identify, or not identify, as a philosopher or as a Christian. There are limits: If you have no Pakistani heritage or upbringing, you can't just one day suddenly decide to be Pakistani and thereby make it true that you are. Similarly if your heritage and upbringing have been entirely Pakistani to this day, you probably can't just instantly shed your Pakistanihood. But in vague, in-betweenish cases, there's room for choice and making it so.

    I propose taking the same approach to personal identity in the stricter metaphysical sense: What makes you the same being, or not, in philosophical puzzle cases where intuitions pull both ways, depends to a substantial extent on how you choose to view the matter; and different people could legitimately arrive at different choices, thus shaping the metaphysical facts (the actual metaphysical facts) to suit them.

    Consider some other stock cases from the literature on personal identity:

Teleporter: On Earth there is a device that will destroy your body and beam detailed information about it to Mars. On Mars another device will use that information to create a duplicate body from local materials. Is this harmless teleportation or terrible death-and-duplication? On a voluntaristic view, that would depend on how it is viewed by the participant(s). Also: How similar must the duplicate body be for it to qualify as a successful teleportation? That, too, could depend on participant attitude.

    Fission: Your brain will be extracted, cut into two, and housed in two new bodies. The procedure, though damaging and traumatic, is such that if only one half of your brain were to be extracted, and the other half destroyed, everyone would agree that you survived. But instead, there will now be two beings, presumably distinct, who both see themselves as "you". Perhaps whether this should count as death or instead as fissioning-with-survival depends on your attitude going in and the attitudes of the beings coming out.

    Amnesia: Longevity treatments are developed so that your body won't die, but in four hundred years the resulting being will have no memory whatsoever of anything that happened in your lifetime so far, and if she has similar values and attitudes it will only be by chance. Is that being still "you"? How much amnesia and change can "you" survive without becoming strictly and literally (and not just metaphorically or loosely) a different person? Again, this might depend on the various attitudes about amnesia and identity of the person(s) at different temporal stages.

    Here are two thoughts in support of voluntarism about personal identity:

    (1.) If I try to imagine these cases as actual, I don't find myself urgently wondering about the resolution of these metaphysical debates, thinking of my very death or survival as turning upon how the metaphysical arguments play out. It's not like being told that if a just-tossed die has landed on 6 then tomorrow I will be shot, which will make me desperately curious about whether the die did land on 6. It seems to me that I can, to some extent, choose how to conceptualize these cases.

    (2.) "Person" is an ordinary, folk concept arising from a context lacking Swampman, teleporter, fission, and (that type of) amensia cases, so the concept of personhood might be expected to be somewhat indeterminate in its application to such cases. And since important features of personhood depend in part on the person in question thinking of the past or future self as "me" -- feeling regrets about the past, planning prudently for the future -- such indeterminacy might be partly resolved by the person's own decisions about the boundaries of her regrets, prudential planning, etc.

    Even accepting all this, I'm not sure how far I can go with it. I don't think I can decide to be a coffee mug and thereby make it true that I am a coffee mug, nor that I can decide to be one of my students and thereby make it so. Can I decide that I am not that 15-year-old named "Eric" who wore the funny shirts in the 1980s, thereby making it true that I am not really metaphysically the same person, while my sister just as legitimately decides the opposite, that she is the same person as her 15-year-old self? Can the Dalai Lama and some future child (together, but at a temporal distance) decide that they are metaphysically the same person, if enough else goes along with that?

    (For a version of that last scenario, see "A Somewhat Impractical Plan for Immortality" (Apr. 22, 2013) and my forthcoming story "The Dauphin's Metaphysics" (available on request).)

    Thursday, August 13, 2015

    Weird Minds Might Destabilize Human Ethics

    Intuitive physics works great for picking berries, throwing stones, and walking through light underbrush. It's a complete disaster when applied to the very large, the very small, the very energetic, or the very fast. Similarly for intuitive biology, intuitive cosmology, and intuitive mathematics: They succeed for practical purposes across long-familiar types of cases, but when extended too far they go wildly astray.

    How about intuitive ethics?

    I incline toward moral realism. I think that there are moral facts that people can get right or wrong. Hitler's moral attitudes were not just different from ours but actually mistaken. The twentieth century "rights revolutions" weren't just change but real progress. I worry that if artificial intelligence research continues to progress, intuitive ethics might encounter a range of cases for which it is as ill prepared as intuitive physics was for quantum entanglement and relativistic time dilation.

Intuitive ethics was shaped in a context where the only species capable of human-grade practical and theoretical reasoning was humanity itself, and where human variation tended to stay within certain boundaries. It would be unsurprising if intuitive ethics were unprepared for utility monsters (capable of superhuman degrees of pleasure or pain), fission-fusion monsters (who can merge and divide at will), AIs of vastly superhuman intelligence, cheerfully suicidal AI slaves, conscious toys with features specifically designed to capture children's affection, giant virtual sim-worlds containing genuinely conscious beings over which we have godlike power, or entities with radically different value systems. We might expect human moral judgment to be baffled by such cases and to deliver wrong or contradictory or unstable verdicts.

    For physics and biology, we have pretty good scientific theories by which to correct our intuitive judgments, so it's no problem if we leave ordinary judgment behind in such matters. However, it's not clear that we have, or will have, such a replacement in ethics. There are, of course, ambitious ethical theories -- "maximize happiness", "act on that maxim that you can at the same time will to be a universal law" -- but the development and adjudication of such theories depends, and might inevitably depend, on our intuitive judgments about such cases. It's because we intuitively or pre-theoretically think we shouldn't give all our cookies to the utility monster or kill ourselves to tile the solar system with hedonium that we reject the straightforward extension of utilitarian happiness-maximizing theory to such cases and reach for a different solution. But if our commonplace ethical judgments about such cases are not to be trusted, because these cases are too far beyond what we can reasonably expect human moral intuition to handle well, what then? Maybe we should kill ourselves to tile the solar system with hedonium (the minimal collection of atoms capable of feeling pleasure), and we're just unable to appreciate this fact with moral theories shaped for our limited ancestral environments?

    Or maybe morality is constructed from our judgments and folkways, so that whatever moral facts there are, they are just the moral facts that we (or idealized versions of ourselves) think there are? Much like an object's being red, on a certain view of the nature of color, consists in its being such that ordinary human perceivers in normal conditions would experience it as red, maybe an action's being morally right just consists in its being such that ordinary human beings who considered the matter carefully would regard it as right? (This is a huge, complicated topic in metaethics, e.g., here and here.) If we take this approach, then morality might change as our sense of the world changes -- and as who counts as "we" changes. Maybe we could decide to give fission-fusion monsters some rights but not other rights, and shape future institutions accordingly. The unsettled nature of our intuitions about such cases, then, might present an opportunity for us to shape morality -- real morality, the real (or real enough) moral facts -- in one direction rather than another, by shaping our future reactions and habits.

    Maybe different social groups would make different choices with different consequences for group survival, introducing cultural evolution into the mix. Moral confusion might open into a range of choices for moral architecture.

    However, the range of legitimate choices is, I'm inclined to think, constrained by certain immovable moral facts, such as that it would be a moral disaster if the most successful future society constructed human-grade AIs, as self-aware as we are, as anxious about their future, and as capable of joy and suffering, simply to torture, enslave, and kill them for no good reason.

    ----------------------------------------------
    Related posts:

  • Two Arguments for AI (or Robot) Rights (Jan. 16, 2015)
  • How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)
  • Cute AI and the ASIMO Problem (Jul. 24, 2015)
----------------------------------------------
    Thanks to Ever Eigengrau for extensive discussion.

    [image source]

    Wednesday, August 05, 2015

    The Top Science Fiction and Fantasy Magazines 2015

Last year, as a beginning writer of science fiction or speculative fiction, with no idea what magazines were well regarded in the industry, I decided to compile a ranked list of magazines based on numbers of awards and "best of" placements in the previous ten years. Since some people have found the list interesting, I've decided to update it this year, dropping the oldest data and replacing them with fresh data from this summer's awards/best-of season.

    Last year's post expresses various methodological caveats, which still apply. This year's method, in brief, was to count one point every time a magazine had a story nominated for a Hugo, Nebula, or World Fantasy Award; one point for every "best of" choice in the Dozois, Strahan, and Horton anthologies; and half a point for every Locus recommendation at novelette or short story length, over the past ten years.

    I take the list down to magazines with 1.5 points. I am not including anthologies or standalones, although anthologies account for about half of the award nominations and "best of" choices. Horror is not included except as it incidentally appears according to the criteria above. I welcome corrections.
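For concreteness, here is a minimal sketch in Python of how that tally works. The magazine names and records below are hypothetical; the actual counts were compiled by hand from award lists, anthology tables of contents, and Locus's recommendations.

```python
from collections import defaultdict

# Scoring scheme described above (illustrative data only):
WEIGHTS = {
    "award_nomination": 1.0,  # Hugo, Nebula, or World Fantasy nomination
    "best_of": 1.0,           # Dozois, Strahan, or Horton "best of" selection
    "locus_rec": 0.5,         # Locus rec., novelette or short story length
}

# Hypothetical records of the form (magazine, event_type):
records = [
    ("Magazine A", "award_nomination"),
    ("Magazine A", "best_of"),
    ("Magazine B", "award_nomination"),
    ("Magazine B", "locus_rec"),
    ("Magazine C", "locus_rec"),
]

scores = defaultdict(float)
for magazine, event in records:
    scores[magazine] += WEIGHTS[event]

# Rank descending, keeping only magazines with at least 1.5 points:
ranked = sorted(
    ((m, s) for m, s in scores.items() if s >= 1.5),
    key=lambda pair: pair[1],
    reverse=True,
)
for rank, (magazine, score) in enumerate(ranked, start=1):
    print(f"{rank}. {magazine} ({score})")
```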

    Results:

    1. Asimov's (262 points)
    2. Fantasy & Science Fiction (209.5)
    3. Subterranean (82) (ran 2007-2014)
    4. Clarkesworld (78) (started 2006)
    5. Tor.com (77.5) (started 2008)
    6. Strange Horizons (51)
    7. Analog (50.5)
    8. Interzone (47.5)
    9. Lightspeed (44.5) (started 2010)
    10. SciFiction (26) (ceased 2005)
    11. Fantasy Magazine (24) (merged into Lightspeed, 2012)
    12. Postscripts (19) (ceased 2014)
    13. Realms of Fantasy (16.5) (ceased 2011)
    14. Beneath Ceaseless Skies (15) (started 2008)
    15. Jim Baen's Universe (14.5) (ran 2006-2010)
    16. Apex (13)
    17. Electric Velocipede (7) (ceased 2013)
    18. Intergalactic Medicine Show (6)
    19. Black Static (5.5) (started 2007)
    19. Helix SF (5.5) (ran 2006-2008)
    21. The New Yorker (5)
    22. Cosmos (4.5)
    22. Tin House (4.5)
    24. Flurb (4) (ran 2006-2012)
    24. Lady Churchill's Rosebud Wristlet (4)
    26. Black Gate (3.5)
    26. McSweeney's (3.5)
    28. Conjunctions (3)
    28. GigaNotoSaurus (3) (started 2010)
    30. Lone Star Stories (2.5) (ceased 2009)
    31. Aeon Speculative Fiction (2) (ceased 2008)
    31. Futurismic (2) (ceased 2010)
    31. Harper's (2)
    31. Weird Tales (2) (off and on throughout period)
    36. Cemetery Dance (1.5)
    36. Daily Science Fiction (1.5) (started 2010)
    36. Nature (1.5)
    36. On Spec (1.5)
    36. Terraform (1.5) (started 2014)
    --------------------------------------------------

    Comments:

(1.) The New Yorker, Tin House, McSweeney's, Conjunctions, and Harper's are prominent literary magazines that occasionally publish science fiction or fantasy. Cosmos is a popular science magazine and Nature a specialists' one; both publish a little science fiction on the side. The remaining magazines focus on the F/SF genre.

(2.) Although Asimov's and F&SF dominate the ten-year list, things have recently equalized among the top several. The past three years show approximately a tie among the top four, and the ratio between #1 and #10 is about 4:1 over the past three years, as opposed to 10:1 in the ten-year data:

1. Tor.com (50.5)
2. Asimov's (50)
3. Clarkesworld (44.5)
4. F&SF (41)
5. Lightspeed (26.5)
6. Subterranean (23)
7. Analog (19.5)
8. Strange Horizons (14)
9. Beneath Ceaseless Skies (13.5)
10. Interzone (12)

(3.) Another aspect of the venue-broadening trend is the rise of good podcast venues such as the Escape Artists' podcasts (Escape Pod, Podcastle, and Pseudopod), Drabblecast, and StarShipSofa. None of these qualify for my list by the existing criteria, but podcasting might be the leading edge of a major change in the industry. It's fun to hear a short story podcast while driving or exercising, and people might increasingly obtain their short fiction that way. (Some text-based magazines, like Clarkesworld, are also now regularly podcasting their stories.)

    (4.) A few new magazines have drawn recommendations this year from the notoriously difficult-to-please Lois Tilton, who is the reviewer for short fiction at Locus Online. All three are pretty cool, and I'm hoping to see one or more of them qualify for next year's updated list:

    Unlikely Story (started 2011 as Journal of Unlikely Entomology, new format 2013)
    The Dark (started 2013)
    Uncanny (started 2014)

    (5.) Philosophers interested in science fiction might also want to look at Sci Phi Journal, which publishes both science fiction with philosophical discussion notes and philosophical essays about science fiction.

    (6.) Other lists: The SFWA qualifying markets list is a list of "pro" science fiction and fantasy venues based on pay rates and track records of strong circulation. Ralan.com is a regularly updated list of markets, divided into categories based on pay rate.

    [image source; admittedly, it's not the latest issue!]

    Friday, July 31, 2015

    Against Intellectualism about Belief

    Sometimes what we sincerely say -- aloud or even just silently to ourselves -- doesn't fit with the rest of our cognition, reactions, and behavior. Someone might sincerely say, for example, that women and men are equally intelligent, but be consistently sexist in his assessments of intelligence. (See the literature on implicit bias.) Someone might sincerely say that her dear friend has gone to Heaven, while her emotional reactions don't at all fit with that.

    On intellectualist views of belief, what we really believe is the thing we sincerely endorse, despite any other seemingly contrary aspects of our psychology. On the more broad-based view I prefer, what you believe depends, instead, on how you act and react in a broad range of ways, and sincere endorsements are only one small part of the picture.

    Intellectualism might be defended on four grounds.

(1.) Intellectualism might be intuitive. Maybe the most natural or intuitive thing to say about the implicit sexism case is that the person really believes that women are just as smart; he just has trouble putting that belief into action. Similarly, the mourner really believes that her friend is in Heaven, but it's hard for her to avoid reacting emotionally as if her friend is ineradicably dead rather than just "departed".

    Reply: Sometimes we do seem to want to say that people believe what they intellectually endorse in cases like this, but I don't think our intuitions are univocal. It can also seem natural or intuitive to say that the implicit sexist doesn't really or wholly or deep-down believe that the sexes are equal, and that the mourner maybe has more doubt about Heaven than she is willing to admit to herself. So the intuitive case could go either way.

(2.) Intellectualism might fit well with our theoretical conceptualization of belief. Maybe it's in the nature of belief to be responsive to evidence and deployable in reasoning. And maybe only intellectually endorsed or endorsable states can play that cognitive role. The implicit sexist's bias might be insufficiently responsive to evidence and insufficiently apt to be deployed in reasoning for it to qualify as belief, while his intellectual endorsement is responsive to evidence and deployable in reasoning.

    Reply: Zimmerman and Gendler, in influential essays, have nicely articulated versions of this defense of intellectualism [caveat: see Zimmerman's comment below]. I raised some objections here, and Jack Marley-Payne has objected in more explicit detail, so I won't elaborate in this post. Marley-Payne's and my point is that people's implicit reactions are often sensitive to evidence and deployable in what looks like reasoning, while our intellectual endorsements are often resistant to evidence and rationally inert -- so at least it doesn't seem that there's a sharp difference in kind.

    (It was Marley-Payne's essay that got me thinking about this post, I should say. We'll be discussing it, also with Keith Frankish, in September for Minds Online 2015.)

    (3.) Intellectualism about belief might cohere well with the conception of "belief" generally used in current Anglophone philosophy. Epistemologists commonly regard knowledge as a type of belief. Philosophers of action commonly think of beliefs coupling with desires to form intentions. Philosophers of language discuss the weird semantics of "belief reports" (such as "Lois believes that Superman is strong" and "Lois believes that Clark Kent is not strong"). Possibly, an intellectualist approach to belief fits best with existing work in these other areas of philosophy.

    Reply: I concede that something like intellectualism seems to be presupposed in much of the epistemological literature on knowledge and much of the philosophy-of-language literature on belief reports. However, it's not clear that philosophy of action and moral psychology are intellectualistic. Philosophy of action uses belief mainly to explain what people do, not what they say. For example: Why did Ralph, the implicit sexist, reject Linda for the job? Well, maybe because he wants to hire someone smart for the job and he doesn't think women are smart. Why does the mourner feel sorry for the deceased? Maybe because she doesn't completely accept that the deceased is in Heaven.

    Furthermore, maybe coherence with intellectualist views of belief in epistemology and philosophy of language is a mistaken ideal and not in the best interest of the discipline as a whole. For example, it could be that a less intellectualist philosophy of mind, imported into philosophy of language, would help us better see our way through some famous puzzles about belief reports.

    (4.) Intellectualism might be the best practical choice because of its effects on people's self-understanding. For example, it might be more effective, in reducing unjustified sexism, to say to an implicit sexist, "I know you believe that women are just as smart, but look at all these spontaneous responses you have" than to say "I know you are sincere when you say women are just as smart, but it appears that you don't through-and-through believe it". Tamar Gendler, Aaron Zimmerman, and Karen Jones have all defended attribution of egalitarian beliefs partly on these grounds, in conversation with me.

    Reply: I don't doubt that Gendler, Zimmerman, and Jones are right that many people will react negatively to being told they don't entirely or fully possess all the handsome-sounding egalitarian and spiritual beliefs they think they have. (Neither, would I say, do they entirely lack the handsome beliefs; these are "in-between" cases.) They'll react more positively, and be more open to rigorous self-examination perhaps, if you start on a positive note and coddle them a bit. But I don't know if I want to coddle people in this way. I'm not sure it's really the best thing in the long term. There's something painfully salutary in thinking to yourself, "Maybe deep down I don't entirely or thoroughly believe that women (or racial minorities, or...) are very smart. Similarly, maybe my spiritual attitudes are also mixed up and multivocal." This is a more profound kind of self-challenge, a fuller refusal to indulge in self-flattery. It highlights the uncomfortable truth that our self-image is often ill-tuned to reality.

    ------------------------------------------

    Although all four defenses of intellectualism have some merit, none is decisive. This tangle of reasons leaves us in approximately a tie so far. But we haven't yet come to...

    The most important reason to reject intellectualism about belief:

    Given the central role of the term "belief" in philosophy of mind, philosophy of action, epistemology, and philosophy of language, we should reserve the term for the most important thing in the vicinity.

    Both intellectualism and broad-based views have some grounding in ordinary and philosophical usage. We are at liberty to choose between them. Given that choice, we should prefer the account that picks out the aspect of our psychology that most deserves the central role that "belief" plays in philosophy and folk psychology.

    What we sincerely say, what we intellectually endorse, is important. But it is not as important as how we live our way through the world generally. What I say about the intellectual equality of the sexes is important, but not as important as how I actually treat people. My sincere endorsements of religious or atheistic attitudes are important, but they are only a small slice of my overall religiosity or lack of religiosity.

    On a broad-based view of belief, to believe that the sexes are equal, or that Heaven exists, or that snow is white, is to steer one's way through the world, in general, as though these propositions are true, not only to be disposed to say they are true. It is this overall pattern of self-steering that we should care most about, and to which we should, if we can do so without violence, attach the philosophically important term "belief".

    [image source]

    Tuesday, July 28, 2015

    Podcast Interview of Me, about Ethicists' Moral Behavior

    ... other topics included rationalization and confronting one's moral imperfection,

    at Rationally Speaking.

    Thanks, Julia, for your terrific, probing questions!

    Friday, July 24, 2015

    Cute AI and the ASIMO Problem

    A couple of years ago, I saw the ASIMO show at Disneyland. ASIMO is a robot designed by Honda to walk bipedally with something like the human gait. I'd entered the auditorium with a somewhat negative attitude about ASIMO, having read Andy Clark's critique of Honda's computationally-heavy approach to robotic locomotion (fuller treatment here); and the animatronic Mr. Lincoln is no great shakes.

    But ASIMO is cute! He's about four feet tall, humanoid, with big round dark eyes inside what looks a bit like an astronaut's helmet. He talks, he dances, he kicks soccer balls, he makes funny hand gestures. On the Disneyland stage, he keeps up a fun patter with a human actor. ASIMO's gait isn't quite human, but his nervous-looking crouching run only makes him that much cuter. By the end of the show I thought that if you gave me a shotgun and told me to blow off ASIMO's head, I'd be very reluctant to do so. (In contrast, I might quite enjoy taking a shotgun to my darn glitchy laptop.)

    Another case: ELIZA was a simple computer program written in the 1960s that would chat with a user, using a small template of pre-programmed responses to imitate a non-directive psychotherapist (“Are such questions on your mind often?”, “Tell me more about your mother.”) Apparently, some users mistook it for human and spent long periods chatting with it.

I assume that ASIMO and ELIZA are not proper targets of substantial moral concern. They have no more consciousness than a laptop computer, no more capacity for genuine joy and suffering. However, because they share some of the superficial features of human beings, people might improperly come to regard them as targets of moral concern. And future engineers could presumably create entities with an even better repertoire of superficial tricks. When I discussed this issue with my sister, she mentioned a friend who had been designing a laptop that would scream and cry when its battery runs low. Imagine that!

    Conversely, suppose that it's someday possible to create an Artificial Intelligence so advanced that it has genuine consciousness, a genuine sense of self, real joy, and real suffering. If that AI also happens to be ugly or boxy or poorly interfaced, it might tend to attract less moral concern than is warranted.

    Thus, our emotional responses to AIs might be misaligned with the moral status of those AIs, due to superficial features that are out of step with the AI's real cognitive and emotional capacities.

    In the Star Trek episode "The Measure of a Man", a scientist who wants to disassemble the humanoid robot Data (sympathetically portrayed by a human actor) says of the robot, "If it were a box on wheels, I would not be facing this opposition." He also points out that people normally think nothing of upgrading the computer systems of a starship, though that means discarding a highly intelligent AI.

    I have a cute stuffed teddy bear I bring to my philosophy of mind class on the day devoted to animal minds. Students scream in shock when without warning in the middle of the class, I suddenly punch the teddy bear in the face.

    Evidence from developmental and social psychology suggests that we are swift to attribute mental states to entities with eyes and movement patterns that look goal directed, much slower to attribute mentality to eyeless entities with inertial movement patterns. But of course such superficial features needn’t track underlying mentality very well in AI cases.

    Call this the ASIMO Problem.

    I draw two main lessons from the ASIMO Problem.

    First is a methodological lesson: In thinking about the moral status of AI, we should be careful not to overweight emotional reactions and intuitive judgments that might be driven by such superficial features. Low-quality science fiction -- especially low-quality science fiction films and television -- does often rely on audience reaction to such superficial features. However, thoughtful science fiction sometimes challenges or even inverts these reactions.

The second lesson is a bit of AI design advice. As responsible creators of artificial entities, we should want people to neither over- nor under-attribute moral status to the entities with which they interact. Thus, we should generally try to avoid designing entities that don’t deserve moral consideration but to which normal users are nonetheless inclined to give substantial moral consideration. This might be especially important in the design of children’s toys: Manufacturers might understandably be tempted to create artificial pets or friends that children will love and attach to -- but we presumably don’t want children to attach to a non-conscious toy instead of to parents or siblings. Nor do we presumably want to invite situations in which users might choose to save an endangered toy over an endangered human being!

    On the other hand, if we do someday create genuinely human-grade AIs who merit substantial moral concern, it would probably be advisable to design them in a way that would evoke the proper range of moral emotional responses from normal users.

We should embrace an Emotional Alignment Design Policy: Design the superficial features of AIs in such a way that they evoke the moral emotional reactions that are appropriate to the real moral status of the AI, whatever it is, neither more nor less.

    (What is the real moral status of AIs? More soon! In the meantime, see here and here.)

    [image source]

    Sunday, July 19, 2015

    Philosophy Via Facebook? Why Not?

An adaptation of my June blog post What Philosophical Work Could Be, in today's LA Times.

    --------------------------------------

    Academic philosophers tend to have a narrow view of what is valuable philosophical work. Hiring, tenure, promotion and prestige depend mainly on one's ability to produce journal articles in a particular theoretical, abstract style, mostly in reaction to a small group of canonical and 20th century figures, for a small readership of specialists. We should broaden our vision.

    Consider the historical contingency of the journal article, a late-19th century invention. Even as recently as the middle of the 20th century, leading philosophers in Western Europe and North America did important work in a much broader range of genres: the fictions and difficult-to-classify reflections of Sartre, Camus and Unamuno; Wittgenstein's cryptic fragments; the peace activism and popular writings of Bertrand Russell; John Dewey's work on educational reform.

    Popular essays, fictions, aphorisms, dialogues, autobiographical reflections and personal letters have historically played a central role in philosophy. So also have public acts of direct confrontation with the structures of one's society: Socrates' trial and acceptance of the hemlock; Confucius' inspiring personal correctness.

    It was really only with the generation hired to teach the baby boomers in the 1960s and '70s that academic philosophers' conception of philosophical work became narrowly focused on the technical journal article.

    continued here.

    Tuesday, July 14, 2015

    The Moral Lives of Ethicists

    [published today in Aeon Magazine]

    None of the classic questions of philosophy are beyond a seven-year-old's understanding. If God exists, why do bad things happen? How do you know there's still a world on the other side of that closed door? Are we just made of material stuff that will turn into mud when we die? If you could get away with killing and robbing people just for fun, would you? The questions are natural. It's the answers that are hard.

    Eight years ago, I'd just begun a series of empirical studies on the moral behavior of professional ethicists. My son Davy, then seven years old, was in his booster seat in the back of my car. "What do you think, Davy?" I asked. "People who think a lot about what's fair and about being nice – do they behave any better than other people? Are they more likely to be fair? Are they more likely to be nice?"

    Davy didn’t respond right away. I caught his eye in the rearview mirror.

    "The kids who always talk about being fair and sharing," I recall him saying, "mostly just want you to be fair to them and share with them."

    When I meet an ethicist for the first time – by "ethicist", I mean a professor of philosophy who specializes in teaching and researching ethics – it's my habit to ask whether ethicists behave any differently to other types of professor. Most say no.

    I'll probe further: Why not? Shouldn't regularly thinking about ethics have some sort of influence on one’s own behavior? Doesn't it seem that it would?

    To my surprise, few professional ethicists seem to have given the question much thought. They'll toss out responses that strike me as flip or are easily rebutted, and then they'll have little to add when asked to clarify. They'll say that academic ethics is all about abstract problems and bizarre puzzle cases, with no bearing on day-to-day life – a claim easily shown to be false by a few examples: Aristotle on virtue, Kant on lying, Singer on charitable donation. They'll say: "What, do you expect epistemologists to have more knowledge? Do you expect doctors to be less likely to smoke?" I'll reply that the empirical evidence does suggest that doctors are less likely to smoke than non-doctors of similar social and economic background. Maybe epistemologists don’t have more knowledge, but I'd hope that specialists in feminism would exhibit less sexist behavior – and if they didn't, that would be an interesting finding. I'll suggest that relationships between professional specialization and personal life might play out differently for different cases.

    It seems odd to me that our profession has so little to say about this matter. We criticize Martin Heidegger for his Nazism, and we wonder how deeply connected his Nazism was to his other philosophical views. But we don’t feel the need to turn the mirror on ourselves.

    The same issues arise with clergy. In 2010, I was presenting some of my work at the Confucius Institute for Scotland. Afterward, I was approached by not one but two bishops. I asked them whether they thought that clergy, on average, behaved better, the same or worse than laypeople.

    "About the same," said one.

    "Worse!" said the other.

    No clergyperson has ever expressed to me the view that clergy behave on average morally better than laypeople, despite all their immersion in religious teaching and ethical conversation. Maybe in part this is modesty on behalf of their profession. But in most of their voices, I also hear something that sounds like genuine disappointment, some remnant of the young adult who had headed off to seminary hoping it would be otherwise.

    In a series of empirical studies – mostly in collaboration with the philosopher Joshua Rust of Stetson University – I have empirically explored the moral behavior of ethics professors. As far as I'm aware, Josh and I are the only people ever to have done so in a systematic way.

    Here are the measures we looked at: voting in public elections, calling one's mother, eating the meat of mammals, donating to charity, littering, disruptive chatting and door-slamming during philosophy presentations, responding to student emails, attending conferences without paying registration fees, organ donation, blood donation, theft of library books, overall moral evaluation by one's departmental peers based on personal impressions, honesty in responding to survey questions, and joining the Nazi party in 1930s Germany.

    [continued in the full article here]

    Wednesday, July 08, 2015

    Profanity Inflation, Profanity Migration, and the Paradox of Prohibition

    As a fan of profane language judiciously employed, I fear that the best profanities of English are cheapening from overuse -- or worse, that our impulses to offend through profane language are beginning to shift away from harmless terms toward more harmful ones.

    I am inspired to these thoughts by Rebecca Roache's recent Philosophy Bites podcast on swearing.

    Roache distinguishes between objectionable slurs (especially racial slurs) and presumably harmless swear words like "fuck". The latter words, she suggests, should not be forbidden, although she acknowledges that in some contexts it might be inappropriate to use them. Roache also suggests that it's silly to forbid "fuck" while allowing obvious replacements like "f**k" or "the f-word". Roache says, "We should swear more, and we shouldn't use asterisks, and that's fine." (31:20).

    I disagree. Overstating somewhat, I disagree because of this:

    "Fuck" is a treasure of the English language. Speakers of other languages will sometimes even reach across the linguistic divide to relish its profanity. "Fuck" is a treasure precisely because it is forbidden. Its being forbidden is the source of its profane power and emotional vivacity.

    When I was growing up in California in the 1970s, "fuck" was considered the worst of the seven words you can't say on TV. You would never hear it in the media, or indeed -- in my posh little suburb -- from any adults, except maybe, very rarely, from some wild man from somewhere else. I don't think I heard my parents or any of their friends say the word even once, ever. It wasn't until fourth grade that I learned that the word existed. What a powerful word, then, for a child to relish in the quiet of his room, or to suddenly drop on a friend!

    "Fuck" is in danger. Its power is subsiding from its increased usage in the public sphere. Much as the overprinting of money devalues it, profanity inflation risks turning "fuck" into another "damn". The hundred-dollar-bill of swear words doesn't buy as much shock as it used to. (Yes, I sound like an old curmudgeon -- but it's true!)

    Okay, a qualification: I'm pretty sure what I've just said is true for the suburban California dialect; but I'm also pretty sure "fuck" was never so powerful in some other dialects. Some evidence of its increased usage overall, and its approach toward "damn", is this Google NGram of "fuck", "shit", and "damn" in "lots of books", 1960-2008:

[Google NGram chart: "fuck", "shit", and "damn" in "lots of books", 1960-2008]

    A further risk: As "fuck" loses its sting and emotional vivacity, people who wish to use more vividly offensive language will find themselves forced to other options. The most offensive alternative options currently available in English are racial slurs. But unlike "fuck", racial slurs are plausibly harmful in ordinary use. The cheapening of "fuck" thus risks forcing the migration of profanity to more harmful linguistic locations.

The paradox of prohibition, then: If the woman in the eCard above wishes to preserve the power of her favorite word, she should cheer for it to remain forbidden. She should celebrate, not bemoan, the existence of standards against the use of "fuck" on major networks, the awarding of demerits for its use in school, and its almost complete avoidance by responsible adults in public contexts. Conversely, preachers who want to drain the word of its power might encourage its regular recitation in the preschool curriculum. (Okay, that last remark was tongue in cheek. But still, wouldn't it work?)

    Despite the substantial public interest in retaining the forbidden deliciousness of our best swear word, I do think that since the word is in fact (pretty close to) harmless, severe restrictions would be unjust. We must really only condemn it with the forgiving standards we usually apply to etiquette violations, even if this results in the term's not being quite as potent as it otherwise would be.

    Finally, let me defend usages like "f**k" and "the f-word". Rather than being silly avoidances because we all know what we're talking about, such decipherable maskings communicate and reinforce the forbiddenness of "fuck". Thus, they help to sustain its power as an obscenity.

    [image source]

    Thursday, July 02, 2015

    How In-Between Cases of Belief Differ Normatively from In-Between Cases of Extraversion

    For twenty years, I've been advocating a dispositional account of belief, according to which to believe that P is to match, to an appropriate degree and in appropriate respects, a "dispositional stereotype" characteristic of the belief that P. In other words: All there is to believing that P is being disposed, ceteris paribus (all else equal or normal or right), to act and react, internally and externally, like a stereotypical belief-that-P-er.

    Since the beginning, two concerns have continually nagged at me.

    One concern is the metaphysical relation between belief and outward behavior. It seems that beliefs cause behavior and are metaphysically independent of behavior. But it's not clear that my dispositional account allows this -- a topic for a future post.

    The other concern, my focus today, is this: My account struggles to explain what has gone normatively wrong in many "in-between" cases of belief.

    The Concern

    To see the worry, consider personality traits, which I regard as metaphysically similar to beliefs. What is it to be extraverted? It is just to match, closely enough, the dispositional stereotype that we tend to associate with being extraverted -- that is, to be disposed to enjoy parties, to be talkative, to like meeting new people, etc. Analogously, on my view, to believe there is beer in the fridge is, ceteris paribus, to be disposed to go to the fridge if one wants a beer, to be disposed to feel surprise if one were to open the fridge and find no beer, to answer "yes" when asked if there is beer in the fridge, etc.

    One interesting thing about personality traits is that people are rarely 100% extravert or 100% introvert, rarely 100% high-strung or 100% mellow. Rather, people tend to be between the extremes, extraverted in some respects but not in others, or in some types of contexts but not in others. One feature of my account of belief which I have emphasized from the beginning is that it easily allows for the analogous in-betweenness: We often match only imperfectly, and in some respects, the stereotype of the believer in racial equality, or of the believer in God, or of the believer that the 19th Street Bridge is closed for repairs. ("The Splintered Mind"!)
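Purely as a toy illustration (nothing in the account hangs on this, and the dispositions listed are invented for the example), one might picture a dispositional stereotype as a set of dispositions, with in-between belief as partial match:

```python
# Toy sketch only: a dispositional stereotype modeled as a set of
# dispositions, with in-between belief as partial match to that set.

beer_stereotype = {
    "goes_to_fridge_when_wanting_beer",
    "answers_yes_when_asked_if_beer_in_fridge",
    "surprised_to_find_fridge_empty",
    "offers_a_guest_a_beer",
}

def match_degree(dispositions: set, stereotype: set) -> float:
    """Fraction of the stereotype's dispositions that the person matches."""
    return len(dispositions & stereotype) / len(stereotype)

# An in-between believer matches some of the stereotype but not all of it:
splintered = {"answers_yes_when_asked_if_beer_in_fridge", "offers_a_guest_a_beer"}
print(match_degree(splintered, beer_stereotype))  # 0.5 -- neither clear belief nor clear absence
```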

The worry, then, is this: There seems to be nothing at all normatively wrong -- no confusion, no failing -- with being an in-between extravert who has some extraverted dispositions and other introverted ones; while in contrast it does seem that typically something has gone wrong in structurally similar cases of in-between believing. If some days I feel excited about parties and other days I loathe the thought, with no particular excuse or explanation for my different reactions, no problem: I'm just an in-between extravert. In contrast, if some days I am disposed to act and react as if Earth is the third planet from the Sun and other days I am disposed to act and react as if it is the fourth, with no excuse or explanation, then something has gone wrong. Being an in-between extravert is typically not irrational; being an in-between believer typically is irrational. Why the difference?

    My Answer

First, it's important not to exaggerate the difference. Too arbitrary an arrangement of, or fluctuation in, one's personality dispositions does seem at least a bit normatively problematic. If I'm disposed to relish the thought of a party when the wall to my left is beige and to detest the thought of a party when the wall to my left is a truer white, without any explanatory story beneath, there's something weird about that -- especially if one accepts, as I do, following McGeer and Zawidzki, that shaping oneself to be comprehensible to others is a central feature of mental self-regulation. And on the other hand, some ways of being an in-between believer are entirely rational: for example, having an intermediate degree of confidence, or having procedural "how to" knowledge without verbalizable semantic knowledge. But this so far is not a full answer. Wild, inexplicable patterns still seem more forgivable for traits like extraversion than for attitudes like belief.

    A second, fuller reply might be this: There is a pragmatic or instrumental reason to avoid wild splintering of one's belief dispositions that does not apply to the case of personality traits. It's good (at least instrumentally good, maybe also intrinsically good?) to be a believer of things, roughly, because it's good to keep track of what's going on in one's environment and to act and react in ways that are consonant with that. Per impossibile, if one were faced with the choice of whether or not to be a creature with the capacity to form dispositional structures in response to evidence that stay mostly stable, except under the influence of new evidence, and which guide one's behavior accordingly, vs. being a creature without the capacity to form such evidentially stable dispositional structures, it would be pragmatically wise to choose to be the former. On average, plausibly, one would live longer and attain more of one's goals. So perhaps the extra normative failing in wildly splintering belief dispositions derives from that. An important part of the value of having stable belief-like dispositional sets is to guide behavior in response to evidence. In normatively defective in-between cases, that value isn't realized. And if one explicitly embraces wild in-betweenness in belief, one goes the extra step of thumbing one's nose at such structures, when one could, instead, try to employ them toward one's ends.

    Whether these two answers are jointly sufficient to address the concern, I haven't decided.

    [Thanks to Sarah Paul and Matthew Lee for discussion.]

    [image source]

    Monday, June 29, 2015

    A New Podcast Interview of Me

    here.

    Thanks to Daniel Bensen for the fun interview! We discuss the rights of artificial intelligences, whether our moral intuitions break down in far-out SF cases, the relationship between science fiction and philosophy, and my recent story "Momentary Sage".

    Thursday, June 25, 2015

    Celebrate the Nerd!

    Here's my definition of a nerd:

    A nerd is someone who loves an intellectual topic, for its own sake, to an unreasonable degree.

    The nerd might be unreasonably passionate about Leibnizian metaphysics, for example -- she studies Latin, French, and German so she can master the original texts, she stays up late reading neglected passages, argues intensely about obscure details with anyone who has the patience to listen. Or she loves twin primes in that same way, or the details of Napoleonic warfare, or the biology of squids. How could anyone care so much about such things?

    It's not that the nerd sees some great practical potential in studying twin primes (though she might half-heartedly try to defend herself in that way), or is responding in the normal way to something that sensible people might study carefully because of its importance (such as a cure for leukemia). Rather the nerd is compelled by an intellectual topic and builds a substantial portion of her life around it, with no justification that would make sense to anyone who is not similarly consumed by that topic. All passions drift free of reasonable justification to some extent, but still there's a difference between moderate passions and passions so extreme and compelling that one is somewhat unbalanced as a result of them. The nerd will sacrifice a lot -- time, money, opportunities -- to learn just a little bit more about her favored topic.

    The secondary features of nerdiness are side effects: The nerd might not care about dressing nicely. She's too busy worrying about the Leibniz Nachlass. The nerd might fail at being cool -- she's not invested in developing the social skills that would be required. The nerd might be introverted: Maybe she really was introverted all along and that's part of why she found herself with her nerdy passions; or maybe she's an introvert partly in reaction to other people's failure to care about squid. Oh, but now squid have come up in the conversation? Her knowledge is finally relevant! The nerd becomes now too eager to deploy her vast knowledge. She won't stop talking. She'll correct all your minor errors. She'll nerdsplain tirelessly at you.

    The nerd needn't possess any of these secondary features: Caring intensely about the Leibniz Nachlass needn't consume one entirely, and so there can still be room for the nerd to care also, in a normal, non-intellectual way, about ordinary things. But the tendency on average will be for nerdy passion to push away other interests and projects, with the result that uncool, shlumpy introverts will be overrepresented among nerds.

    Innate genius might exist. But I don't find the empirical evidence very compelling. What I think passes for innate genius is often just nerdy passion. Meeting the nerd on her own turf, she can appear to be a natural-born genius or talent because she has already thought the topic through so thoroughly that she operates two moves ahead of you and has a chess-master-like recognition of the patterns of intellectual back-and-forth in the area. She has thought repetitively, and from many angles, of the various ways in which pieces of Leibniz might possibly connect, or about the wide range of techniques in prime-number mathematics, or about the four competing theories of squid neural architecture and their relative empirical weaknesses. She dreams them at night. How could you hope to keep up? She will also master related domains so that she exceeds you there, too -- early modern philosophy generally and abstract metaphysics, say, for the Leibniz nerd. Other aspects of her mind might not be so great -- just ask her to fix a faucet or find her way around downtown -- but meet her anywhere near her turf and she'll scorch right past you. If she is good enough also at exuding an aura of intelligence (not all nerds are, but it's a social technique that pairs well with nerdiness), then you might attribute her overperformance on Leibniz to her innate brilliance, her underperformance in plumbing to her not giving a whit.

    Movies like Good Will Hunting drive me nuts, because they feed the impression that intellectual accomplishment is the result of an innate gift, rather than the result of nerdy passion. In this way, they are antithetical to the vision of nerdiness that I want to celebrate. A janitor who doesn't care (much?) about math but is innately great at it -- and somehow also knows better than history graduate students what's going on in obscure texts in their field? Such innate-genius movies rely on the fixed mindset that Carol Dweck has criticized. What I think I see in the nerdy eminences I have met is not so much innate genius as years of thought inspired by passion for stuff that no one sensible would care so much about.

    Society needs nerds. If we want to know as much as a society ought to know about Leibniz and about squids, we benefit from having people around who are so unreasonably passionate about these things that they will master them to an amazing degree. There's also just something glorious about a world that contains people who care as passionately about obscure intellectual topics as the nerd does.

    **** Celebrate the nerd! ****

    [image source, image source]

    Thursday, June 18, 2015

    Why Do We Care about Discovering Life, Exactly?

    It would be exciting to discover life on another planet -- no doubt about that! But why would it be exciting?

    Let's start with a contrast: the possibility of finding intelligence that is not alive -- a robot or a god, without means of reproduction. (Standard textbook definitions, philosophy of biology, and NASA-sponsored discussions all tend to define "life" partly in terms of reproduction.) I'm inclined to think that the search for extra-terrestrial life would have been successful in its aims if we discovered a manufactured robot or a non-reproducing god, even if such beings are not technically alive or are only borderline cases of living things. So maybe what we call the "search for life" is better conceptualized as the search for... well, what exactly?

    (Could we discover evidence of a god -- a creator being who exists outside of our space and time? I don't see why not, at least hypothetically. Maybe we find a message in the stars: "Hey, God here! Ask me for a miracle and I will produce one!")

The robot and god cases might suggest that what we really care about is finding intelligence. SETI, for example, takes that as its explicit goal: the Search for Extra-Terrestrial Intelligence. But an emphasis on intelligence appears too narrow to capture our target. We'd be excited to find microbes on Mars or Europa -- and the search for extra-terrestrial life would rightly be regarded as having met with success (though not the most exciting form of success) -- despite microbes' lack of intelligence.

Or do microbes possess some sort of minimal intelligence? They engage in behaviors that sustain their homeostasis -- repelling some substances and consuming others, for example -- in a way that preserves their internal order. This type of "intelligence" is also part of standard definitions of life. Maybe, then, order-preserving homeostasis is what excites us? But then, Jupiter's Great Red Spot does something similar, yet we don't seem to think of it as the kind of thing we're looking for in searching for life.

Are we looking, then, for complexity? Maybe a microbe is more complex than the Great Red Spot. (I don't know. Measuring complexity is a vexed issue.) But sheer complexity doesn't seem like what we're after. Galaxies are complex, the canyons of Mars are complex, and there are subtle, complex variations in the cosmic background radiation -- all very interesting, but the search for life appears to be something different, not just a search for complexity.

Maybe discovering life would be interesting because it would give us a glimpse of our potential past? Life on Earth evolved from microbes, but how it did so is still obscure. Seeing microbial life elsewhere might illuminate our own origins. Maybe, if it's very different from us, it will also illuminate the contingency of our origins.

    Maybe discovering life would be interesting because it would complete the Copernican revolution, which knocked human beings out of the center of the cosmos? Earth is still special in being the only planet known to have life, and maybe that sense of specialness is still implicit in our thinking. Finding life elsewhere might knock us more fully from the center of the cosmos.

Maybe discovering life would be interesting because it would be a discovery of something with awesome potential? Reproduction might work its way back into our considerations here. Microbes can reproduce and thus evolve, and maybe their awesomeness lies partly in the possibility that in a billion years they could give rise to multicellular entities very different from us -- capable of very different forms of consciousness, self-awareness, pleasure and pain, creativity, art.

    Maybe discovering life would be interesting because terrifying -- either because of the threat alternative life forms might directly pose to life on Earth or, more subtly, because if non-technological life is common enough in the universe for us to discover it, then the Great Filter of Fermi's Paradox is more likely to be before us than behind us. (That is, it might be evidence that biological life is common while technological intelligence is rare, and thus that technological civilizations tend to destroy themselves in short order.)

    On the flip side, maybe it would be interesting for its potential use: intelligences with technology to share, non-technological organisms with interesting biologies from which we could learn to construct new medicines or other technologies.

    Would it be interesting in the same way to find remnants of life? I'm inclined to think it would have some of the same interest. If so, and if we're inclined to think, for whatever reason, that technological societies tend to be short-lived, then we might dedicate some resources toward detecting possible signs of dead civilizations. Such signs might include solar collectors that interfere with stellar output, or stable compounds in a planet's atmosphere that are unlikely to have arisen except by technological means.

    I see no reason we need to insist on a single answer to questions about what ambitions we do or should have in our search for extra-terrestrial company of some sort. But in the context of space policy it seems worth more extended thought. I'd like to see philosophers more involved in this, since the issues go right to the heart of philosophical questions about what we do and should value in general.

    ------------------------------

    Acknowledgement: This is one of two main issues that struck me during my recent trip to an event on the search for extraterrestrial life, funded by NASA and the Library of Congress. Thanks to LOC, NASA, and the other participants. I discuss the other issue, about our duties to extraterrestrial microbes, here.

    [image source, image source]

    Thursday, June 11, 2015

    What Philosophical Work Could Be

    Academic philosophers in Anglophone Ph.D.-granting departments tend to have a narrow conception of what counts as valuable philosophical work. Hiring, tenure, promotion, and prestige turn mainly on one's ability to write an essay in a particular theoretical, abstract style, normally in reaction to the work of a small group of canonical historical and 20th century figures, on a fairly constrained range of topics, published in a limited range of journals and presses. This is too narrow a view.

    I won't discuss cultural diversity here, which I have addressed elsewhere. Today I'll focus on genre and medium.

    Consider the recency and historical contingency of the philosophical journal article. It's a late 19th century invention. Even as late as the mid-20th century, leading philosophers in Western Europe and North America were doing important work in a much broader range of styles than is typical now. Think of the fictions and difficult-to-classify reflections of Sartre, Camus, and Unamuno, the activism and popular writings of Russell, Dewey's work on educational reform, Wittgenstein's fragments. It's really only with the generation hired to teach the baby boomers that our conception of philosophical work became narrowly focused on the academic journal article, and on books written in that same style.

[image: Miguel de Unamuno]

Consider the future of media. The journal is a printing-press invention and carries with it the history and limitations of that medium. With the rise of the internet, other possibilities emerge: videos, interactive demonstrations, blogs, multi-party conversations on social media, etc. Is there something about the journal article that makes it uniquely better for philosophical reflection than these other media? (Hint: no.)

    Nor need we think that philosophical work must consist of expository argumentation targeted toward disciplinary experts and students in the classroom. This, too, is a narrow and historically recent conception of philosophical work. Popular essays, fictions, aphorisms, dialogues, autobiographical reflections, and personal letters have historically played a central role in philosophy. We could potentially add, too, public performances, movies, video games, political activism, and interactions with the judicial system and governmental agencies.

    Philosophers are paid to develop expertise in philosophy, to bring that expertise in philosophy into the classroom, and to contribute that expertise to society in part by further advancing philosophical knowledge. A wide range of activities fit within that job description. I am inclined to be especially liberal here for two reasons: First, I have a liberal conception of philosophy as inquiry into big-picture ontological, normative, conceptual, and broadly theoretical issues about anything (including, e.g., hair and football as well as more traditionally philosophical topics). I favor treating a wide range of inquiries as philosophical, only a small minority of which happen in philosophy departments. And second, I have a liberal conception of "inquiry" on which sitting at one's desk reading and writing expository arguments is only one sort of inquiry. Engaging with the world, trying out one's ideas in action, seeing the reactions of non-academics, exploring ideas in fiction and meditation -- these are also valuable modes of inquiry that advance our philosophical knowledge, activities in which we not only deploy our expertise but cultivate and expand it, influencing society and, in a small or a large way, the future of both academic philosophy and non-academic philosophical inquiry.

    Research-oriented philosophy departments tend to regard writing for popular media or consulting with governmental agencies as "service", which is typically held in less esteem than "research". I'm not sure service should be held in less esteem; but I would also challenge the idea that such work is not also partly research. If one approaches popular writing as a means of "dumbing down" pre-existing philosophical ideas for an audience of non-experts whose reactions one does not plan to take seriously, then, yes, that popular writing is not really research. But if the popular essay is itself a locus of philosophical creativity, where philosophical ideas are explored in hopes of discovering new possibilities, advancing (and not just marketing) one's own thinking, furthering the community's philosophical dialogue in a way that might strike professional philosophers, too, as interesting rather than merely familiar re-hashing, and if it's done in a way that is properly intellectually responsive to the work of others, then it is every bit as much "research" as is a standard journal article. Analogously with consulting -- and with Twitter feeds, TED videos, and poetry.

    I urge our discipline to conceptualize philosophical work more broadly than we typically do. A Philosophical Review article can be an amazing, awesome thing. Yes! But we should see journal articles of that style, in that type of venue, as only one of many possible forms of important, field-shaping philosophical work.

    Thursday, June 04, 2015

    Space Agencies Need, but Don't Appear to Have, Policies Governing Contact with Microbial Life on Mars

    NASA and other leading space agencies do not appear to have formal policies about how to treat microbial life if it's found elsewhere in the solar system. I find this surprising.

    I still need to do a more thorough search to be confident of this. However, last week when I went to an event jointly sponsored by NASA and the Library of Congress, the people I spoke to there seemed to think that there's no worked-out formal policy; nor have I found such a policy in subsequent internet searches. (Please correct me by email or in the comments below if I'm wrong!)

NASA and other space agencies do have rigorous and detailed protocols regarding the cross-contamination of microbial life between planets. If you want to send a lander to Mars, it must be thoroughly sterilized. Likewise, extensive protocols are being developed to protect Earth from possible extra-terrestrial microbes in returned samples. NASA has an Office of Planetary Protection that focuses on these issues. However, contact with microbial life raises ethical issues besides cross-contamination.

    Suppose NASA discovers a patch of microbes on Mars.

Presumably, NASA scientists will want to test it -- to see how similar Martian life is to Earthly life, for example. Testing it might involve touching it. Maybe NASA scientists will want a rover to scoop up a sample for chemical analysis. But that would mean interfering with the organisms, exposing them to risk. Even just shining light on microbes to examine them more closely is a form of interference that presents some risk -- even the shadow of a parked rover creates a small degree of interference and risk. How much interference with extraterrestrial microbial life is acceptable? How much risk? These questions will arise acutely as soon as we discover extraterrestrial life. In fact, proving that we have actually discovered life might already involve some interference, especially if the sample is ambiguous or subsurface. These questions are quite independent of existing regulations about sterilization and contamination. We need to consider them now, in advance, before we discover life. Otherwise, NASA leaders might be in the position of making these decisions on the fly, without sufficient public input or oversight.

    Here's another question in the ethics of contact: Suppose we discover a species of microbe that appears to be under threat of extinction due to local environmental conditions. Should we employ something like a "Prime Directive" policy, on the microbial level: no interference, even if that means extinction? Or should we take positive steps toward alien species protection?

Planetary protection policies that focus on contamination risk seem to rely on standard top-down regulatory models requiring compliance with a fixed set of detailed rules, but I wonder whether a better model might be university Institutional Review Boards for the protection of human participants (IRBs) and Animal Care and Use Committees (ACUCs). Such committees have three appealing features:

    First, rather than a rigid set of rules, IRBs and ACUCs employ a flexible set of general guidelines. The guidelines governing research on human participants tend to be very conservative about risk in general; but the committee is also charged with weighing risks against benefits. In the context of extraterrestrial microbiology, a reasonable standard might be extreme caution about interference, but one that allows, for example, a small sample to be very carefully taken from a large, healthy microbial colony, for experimentation and then careful disposal without re-release into the planetary environment. As reflection on this example suggests, people might have very different ethical opinions about how much risk and interference is appropriate, and of what sort. Also, expert scientists will want to think in advance about assessing the sources of risk and what feasible steps can be taken to minimize those risks, contingent on various types of possible preliminary information about the microbe's structure and habitat. I do not see evidence that these issues are being given the serious thought, with public input, that they need to be given.

Second, IRBs and ACUCs are normally constituted by a mix of scientist and non-scientist members, the latter typically drawn from the general public (often lawyers and schoolteachers). The scientists bring scientific expertise, which is essential to evaluating the risks and possible benefits; the non-scientist members play an important role in expressing general community values and in keeping the scientists from going too easy on their scientist friends, and they sometimes contribute specific expertise on related non-scientific issues. In the context of the treatment of extraterrestrial microbial life, a mixed committee also seems important. It shouldn't only be the folks at the space agencies who are making these calls.

    Third, IRBs and ACUCs assess specific protocols in advance of the implementation of those protocols. This should be done where feasible, while also recognizing that some decisions may need to be made urgently without pre-approval when unexpected events occur.

I think that we should begin to establish moderately specific national and international guidelines governing human interaction with microbial life elsewhere in the solar system, treating contamination as only one issue among several; that we should formulate these guidelines after broad input not only from scientists but also from the general public and from people with expertise in risk and research ethics; and that we should form committees, modeled on IRBs and ACUCs, of people who understand these guidelines and stand ready to evaluate proposals the moment we discover extraterrestrial life.

    NASA, ESA, etc., what do you think?

    [image source]

    Friday, May 29, 2015

    The Immortal's Dilemma

    Most of the philosophical literature on immortality and death -- at least that I've read -- doesn't very thoroughly explore the consequences of temporal infinitude. Bernard Williams, for example, suggests that 342 years might be a tediously long life. Well, of course 342 years is peanuts compared to infinitude!

    It seems to me that true temporal infinitude forces a dilemma between two options:
    (a.) infinite repetition of the same things, without memory, or
    (b.) an ever-expanding range of experiences that eventually diverges so far from your present range of experiences that it becomes questionable whether you should regard that future being as "you" in any meaningful sense.

    Call this choice The Immortal's Dilemma.

Given infinite time, a closed system will eventually cycle back through its states, within any finite error tolerance. (One way of thinking about this is the Poincaré recurrence theorem.) There are only so many relevantly distinguishable states a closed system can occupy. Once it has occupied them all, it has to start repeating at least some of them. If memory belongs to the system's structure of states, then memory too is among the things that must start afresh and repeat. But it seems legitimate to wonder whether the forgetful repetition of the same experiences, infinitely again and again, is something worth aspiring toward -- whether it's what we can or should want, or what we thought we might want, in immortality.
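Here's a minimal pigeonhole sketch of that point, abstracting away from the physics (the Poincaré recurrence theorem itself is a stronger, measure-theoretic result; this is just the finite-state intuition behind it):

```latex
% Pigeonhole sketch: a deterministic closed system with only N < infinity
% relevantly distinguishable (coarse-grained) states must visit some state
% twice among its first N+1 steps, say s_i = s_j with i < j. Determinism
% then forces everything after the repeat to repeat as well:
\[
s_i = s_j \;\Longrightarrow\; s_{i+k} = s_{j+k} \quad \text{for all } k \ge 0 ,
\]
% so the trajectory is eventually periodic, cycling with period j - i.
```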

It might seem better, then, or more interesting, or more worthwhile, to have an open system. Unless the system is ever-expanding, though, or includes an ever-expanding population of unprecedented elements, it will eventually loop back around. Thus, to avoid repetition within any finite error tolerance, events must eventually get more and more remote from the original run of events you lived through -- with no end to the increasing remoteness.

    Suppose that conscious experience is what matters. (Parallel arguments can be made for other ways of thinking about what matters.) First, one might cycle through every possible human experience. Suppose, for example, that human experience depends on a brain of no more than a hundred trillion neurons (currently we have a hundred billion, but that might change), and that each neuron is capable of one of a hundred trillion relevantly distinguishable states, and that any difference in even one neuron in the course of a ten-second "specious present" results in a relevantly distinguishable experience. A liberal view of the relationship between different neural states and different possible experiences!
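To make the finitude vivid, here's the back-of-the-envelope arithmetic those (deliberately liberal) figures imply:

```latex
% Upper bound on relevantly distinguishable experiences, given the
% liberal assumptions above: 10^{14} neurons, each in one of 10^{14}
% relevantly distinguishable states per ten-second specious present.
\[
\bigl(10^{14}\bigr)^{10^{14}} \;=\; 10^{\,14 \cdot 10^{14}} \;=\; 10^{\,1.4 \times 10^{15}}
\]
% A stupendous number -- but finite. At one fresh experience per ten
% seconds, even this supply is exhausted after at most
% 10^{1.4 x 10^{15} + 1} seconds.
```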

Of course such numbers, though large, are still finite. So once you're done living through all the experiences of seeming-Aristotle, seeming-Gandhi, seeming-Hitler, seeming-Hitler-seeming-to-remember-having-earlier-been-Gandhi, seeming-future-super-genius, and seeming-every-possible-person-else, and many, many more experiences that probably wouldn't coherently belong to anyone's life, well, you've either got to settle in for some repetition or find some new range of experiences, including experiences that are no longer human. [Clarification June 1: Not all these states need occur, but that only shortens the path to looping or alien weirdness.] Go through the mammals. Then go through hypothetical aliens. Expand, expand -- eventually you'll have run through all possible smallish creatures with a neural or similar basis, and you'll need to go on to experiences that are either radically alien or vastly superhuman or both. At some point -- maybe not so far along in this process -- it seems reasonable to wonder: is the being who is doing all this really "you"? Even if there is some continuous causal thread reaching back to you as you are now, should you, as you are now, care about that being's future any more than you care about the future of some being unrelated to you?

    Either amnesic infinite repetition or a limitless range of unfathomable alien weirdness. Those appear to be the choices.

    References to good discussions of this in the existing literature welcome in the comments section!

    [Thanks particularly to Benjamin Mitchell-Yellin for discussion.]

    Related posts:
    Nietzsche's Eternal Recurrence, Scrambled Sideways (Oct. 31, 2012)
    My Boltzmann Continuants (Jun. 6, 2013)
    Goldfish-Pool Immortality (May 30, 2014)
    Duplicating the Universe (Apr. 29, 2015)

    [image source]

    Thursday, May 21, 2015

    Leading SF Novels: Academic Library Holdings and Citation Rates

    Among the most culturally influential English-language fiction writers of the 20th century, a substantial portion wrote science fiction or fantasy -- "speculative fiction" (SF) broadly construed. H.G. Wells, J.R.R. Tolkien, George Orwell, Isaac Asimov, Philip K. Dick, and Ursula K. Le Guin, for starters. In the 21st century so far, speculative fiction remains culturally important. There's sometimes a feeling among speculative fiction writers that even the best recent work in the genre isn't taken seriously by academic scholars. I thought I'd look at a couple possible (imperfect!) measures of this.

    (I'm doing this partly just for fun, 'cause I'm a dork and I find this kind of thing relaxing, if you'll believe it.)

    Holdings of recent SF in academic libraries

    I generated a list of critically acclaimed SF novels by considering Hugo, Nebula, and World Fantasy award winners from 2009-2013 plus any non-winning novels that were among the 5-6 finalists for at least two of the three awards. Nineteen novels met the criteria.

Then I looked at two of the largest Anglophone academic library holdings databases, COPAC and Melvyl, and counted how many different campuses (max 30-ish) had a print copy of each book [see endnote for details].
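For the curious, the tallying itself amounts to a union-count across the two catalogs. Here's a minimal sketch with hypothetical stand-in data -- the real counts came from manual searches of the COPAC and Melvyl web interfaces, and these titles and campuses are just placeholders:

```python
# Purely illustrative: count distinct campuses holding a print copy,
# pooling holdings across two catalog systems.
copac = {
    "The Graveyard Book": {"Oxford", "Cambridge", "Edinburgh"},
    "The City & the City": {"Oxford", "Leeds"},
}
melvyl = {
    "The Graveyard Book": {"UC Berkeley", "UCLA", "UC Riverside"},
    "The City & the City": {"UC Berkeley"},
}

def campus_count(title):
    """Distinct campuses, across both systems, with a print copy."""
    return len(copac.get(title, set()) | melvyl.get(title, set()))

# Print titles from most held to least, as in the list below.
for title in sorted(copac, key=campus_count, reverse=True):
    print(f"{campus_count(title)} campuses: {title}")
```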

    H = Hugo finalist, N = Nebula finalist, W = World Fantasy finalist; stars indicate winners.

    The results, listed from most held to least:

    16 campuses: Neil Gaiman, The Graveyard Book (H*W)
    15: George R.R. Martin, A Dance with Dragons (HW)
15: China Miéville, The City & the City (H*NW*)
    12: Cory Doctorow, Little Brother (HN)
    12: Ursula K. Le Guin, Powers (N*)
12: China Miéville, Embassytown (HN)
    12: Connie Willis, Blackout / All Clear (H*N*)
    11: Paolo Bacigalupi, The Windup Girl (HN*)
    11: G. Willow Wilson, Alif the Unseen (W*)
    10: Kim Stanley Robinson, 2312 (HN*)
    8: N.K. Jemisin, The Hundred Thousand Kingdoms (HNW)
    8: N.K. Jemisin, The Killing Moon (NW)
8: John Scalzi, Redshirts (H*)
    8: Jeff VanderMeer, Finch (NW)
    8: Jo Walton, Among Others (H*N*W)
    7: Cherie Priest, Boneshaker (HN)
    7: Caitlin Kiernan, The Drowning Girl (NW)
    5: Nnedi Okorafor, Who Fears Death (NW*)
    3: Saladin Ahmed, Throne of the Crescent Moon (HN)

    As a reference point, I did a similar analysis of PEN/Faulkner award winners and finalists over the same period.

    Of the 25 PEN winners and finalists, 7 were held by more campuses than was any book on my SF list, though the difference was not extreme, with two at 24 (Jennifer Egan, A Visit from the Goon Squad; Joseph O'Neill, Netherland) and five ranging from 18-21 campuses. In the PEN group, just as in the SF group, there were nine books held by fewer than ten of the campuses (3, 5, 6, 7, 7, 7, 9, 9, 9) -- so the lower part of the lists looks pretty similar.

    References in Google Scholar

    Citation patterns in Google Scholar tell a similar story. Although citation rates are generally low by philosophy and psychology standards (assuming as a comparison group the most-praised philosophy and psychology books of the period), they are not very different between the SF and PEN lists. The SF books for which I could find five or more Google Scholar citations:

    53 citations: Gaiman, The Graveyard Book
    52: Doctorow, Little Brother
    27: Martin, A Dance with Dragons
    26: Bacigalupi, The Windup Girl
    9: Priest, Boneshaker
    8: Robinson, 2312
    5: Okorafor, Who Fears Death

    The top-cited PEN books were at 70 (O'Neill, Netherland) and 59 (Egan, A Visit from the Goon Squad). After those two, there's a gap down to 17, 15, 12, 11, 10.

    I continue to suspect that there is a bit of a perception difference between "highbrow" literary fiction and "middlebrow" SF, disadvantaging SF studies in some quarters of the university; but if so, perhaps that is compensated by recognition of SF's broader visibility in popular culture, so that in terms of overall scholarly attention, it appears to be approximately a tie.

    ---------------------------------

    Bestsellers:

So... hey! That makes me wonder about bestsellers. I've taken the four best-selling fiction books each year from 2009-2013 (according to USA Today for 2009-2012, Nielsen BookScan for 2013) and tried the same. (The catalogs are a bit messier since these books tend to have multiple editions, so the numbers are a little rougher.)

    Top five by citations (# of campuses in parens):

    431: Suzanne Collins, The Hunger Games (23)
333: Stephenie Meyer, Twilight (26)
162: Stephenie Meyer, Breaking Dawn (17)
132: Stephenie Meyer, New Moon (15)
130: Stieg Larsson, The Girl with the Dragon Tattoo (12)

    Only 4 of the 19 had fewer than 10 citations, and all were held by at least six campuses.

So by both of these measures, bestsellers are receiving more academic attention than either the top critically acclaimed SF or the PEN winners and finalists. Notable: By my count, 8 of the 19 bestsellers are SF, including all four of the most-cited books.

Maybe that's as it should be: The Hunger Games and Twilight are major cultural phenomena, worthy of serious discussion for that reason alone, in addition to whatever merits they might have as literature.

    ---------------------------------

    Endnote:
    COPAC covers the major British and Irish academic libraries, Melvyl the ten University of California campuses. I counted up the total number of campuses in the two systems with at least one holding of each book, limiting myself to print holdings (electronic and audio holdings were a bit disorganized in the databases, and spot checking suggested they didn't add much to the overall results since most campuses with electronic or audio also had print of the same work).

    As always, corrections welcome!