
Wednesday, October 30, 2024

The Ethics of Harmonizing with the Dao

Reading the ancient Chinese philosophers Xunzi and Zhuangzi, I am inspired to articulate an ethics of harmonizing with the dao (the "way"). This ethics doesn't quite map onto any of the three conceptualizations of ethics that are standard in Western philosophy (consequentialism, deontology, and virtue ethics), nor is it exactly a "role ethics" of the sort sometimes attributed to ancient Confucians.

Xunzi

The ancient Confucian Xunzi articulates a vision of the world in which Heaven, Earth, and humanity operate in harmony:

Heaven has its proper seasons,
Earth has its proper resources,
And humankind has its proper order,
-- this is called being able to form a triad
(Ch 17, l. 34-37; Hutton trans. 2014, p. 176).

Heaven (tian, literally the sky, but with strong religious associations) and Earth are jointly responsible for what we might now call the "laws of nature" and all "natural" phenomena -- including, for example, the turning of the seasons, the patterns of wind and rain, the tendency for plants and animals to thrive under certain conditions and wither under other conditions. Also belonging to these natural phenomena are the raw materials with which humans work: not only the raw materials of wood, metal, and fiber, but also the raw material of natural human inclinations: our tendency to enjoy delicious tastes, our tendency to react angrily to provocations, our general preference for kin over strangers.

Xunzi views humanity's task as creating the third corner of a triad with Heaven and Earth by inventing customs and standards of proper behavior that allow us to harmonize with Heaven and Earth, and with each other. For example, through trial and error, our ancestors learned the proper times and methods for sowing and reaping, how to regulate flooding rivers, how to sharpen steel and straighten wood, how to make pots that won't leak, how to make houses that won't fall over, and so on. Our ancestors also -- again through trial and error -- learned the proper rituals and customs and standards of behavior that permit people to coexist harmoniously with each other without chaotic conflict, without excessive or inappropriate emotions, and with an allocation of goods that allow all to flourish according to their status and social role.

Following the dao, for Xunzi, can then be conceptualized as fitting harmoniously into this triad. Abide by the customs and standards of behavior that contribute to the harmonious whole, in which crops are properly planted, towns are properly constructed, the crafts flourish, and humans thrive in an orderly society.

Each of us has a different role, in accord with the proper customs of a well-ordered society: the barley farmer has one role, the soldier another role, the noblewoman yet another, the traveling merchant yet another. It's not unreasonable to view Xunzi's ethics as a kind of role ethics, according to which the fundamental moral principle is that one adheres to one's proper role in society. It's also not unreasonable to think of the customs and standards of proper behavior as a set of rules to which one ought to adhere (those rules applying in different ways according to one's position in society), and thus to view Xunzi's ethics as a kind of deontological (rule-based) ethics. However, there might also be room to interpret harmonious alignment with the dao as the most fundamental feature of ethical behavior. Adherence to one's role and to the proper traditional customs and practices, on this interpretation of Xunzi, would be only derivatively good, because doing so typically constitutes harmonious alignment.

A test case is to imagine, through Xunzi's eyes, whether a morally well-developed sage might be ethically correct sometimes to act contrary to their role and to the best traditional standards of good behavior, if they correctly see that by doing so they contribute better to the overall harmony of Heaven, Earth, and humankind. I'm tempted to think that Xunzi would indeed permit this -- though only very cautiously, since he is pessimistic about the moral wisdom of ordinary people -- and thus that for him harmonious alignment with the dao is more fundamental than roles and rules. However, I'm not sure I can find direct textual support in favor of this interpretation; it's possible I'm being overly "charitable".


A Zhuangzian Correction

A Xunzian ethics of this sort is, I think, somewhat attractive. But it is also deeply traditionalist and conformist in a way I find unappealing. It could use a Zhuangzian twist -- and the idea of "harmonizing with the dao" is at least as Zhuangzian (and "Daoist") as it is Confucian.

Zhuangzi imagines a wilder, more wondrous cosmos than Xunzi's neatly ordered triad of Heaven, Earth, and humankind -- symbolized (though it's disputable how literally) by people so enlightened that they can walk without touching the ground; trees that count 8000 years as a single autumn; gracious emperors with no eyes, ears, nose, or mouth; people with skin like frost who live by drinking dew; enormous, useless trees who speak to us in dreams; and more. This is the dao, wild beyond human comprehension, with which Zhuangzi aims to harmonize.

There are, I think, in Zhuangzi's picture -- though he would resist any effort to fully capture it in words -- ways of flowing harmoniously along with this wondrous and incomprehensible dao and ways of straining unproductively against it. One can be easygoing and open-minded, welcome surprise and difference, not insist on jamming everything into preconceived frames and plans; and one can contribute to the delightful weirdness of the world in one's own unique way. This is Zhuangzian harmony. You become a part of a world that is richer and more wondrous because it contains you, while allowing other wonderful things to also naturally unfold.

In a radical reading of Zhuangzi, ethical obligations and social roles fall away completely. There is little talk in Zhuangzi's Inner Chapters, for example, of our obligation to support others. I don't know that we have to read Zhuangzi radically; but regardless of that question of interpretation, I suggest that there's an attractive middle between Xunzi's conventionalism and Zhuangzi's wildness. Each can serve as a corrective to the other.

In the ethical picture that emerges from this compromise, we each contribute uniquely to a semi-ordered cosmos, participating in social harmony, but not rigidly -- also transcending that harmony, breaking rules and traditions for the better, making the world richer and more wondrous, each in our diverse ways, while also supporting others who contribute in their different ways, whether those others are human, animal, plant, or natural phenomena.

Contrasts

This is not a consequentialist ethics: It is not that our actions are evaluated in terms of the good or bad consequences they have (and still less that the actions are evaluated by a summation of the good minus the bad consequences). Instead, harmonizing with the dao is to participate in something grand, without need of a further objective. Like the deontologist, Xunzi and Zhuangzi and my imagined compromise philosopher needn't think that right or harmonious action will always have good long-term results. Nor is it a deontological or role ethics: There is no set of rules one must always follow or some role one must always adhere to. Nor is it a virtue ethics: There is no set of virtues to which we all must aspire or a distinctive pattern of human flourishing that constitutes the highest attainment. We each contribute in different ways -- and if some virtues often prove to be important, they are derivatively important in the same way that rules and roles can be derivatively important. They are important only because, and to the extent, having those virtues enables or constitutes one's contribution to the magnificent web of being.

So although there are resonances with the more pluralistic forms of consequentialism, and virtue ethics, and role ethics, and even deontology (trivially or degenerately, if the rule is just "harmonize with the dao"), the classical Chinese ethical ideal of harmonizing with the dao differs somewhat from all of these familiar (to professional philosophers) Western ethical approaches.

Many of these other approaches also contain an implicit intellectualism or elitism, in which ideal ethical goodness requires intellectual attainment: wisdom, or a sophisticated ability to weigh consequences or evaluate and apply rules -- far beyond, for example, the capacities of someone with severe cognitive disabilities. With enough Zhuangzi in the mix, such elitism evaporates. A severely cognitively disabled person, or a magnificently weird nonhuman animal, might far exceed any ordinary adult philosopher in their capacity to harmonize with the dao and might contribute more to the rich tapestry of the world.

Perhaps an ethics of harmonizing with the dao can resonate with some 21st-century Anglophone readers, despite its origins in ancient China. It is not, I think, as alien as it might seem from its reliance on the concept of dao and its failure to fit into the standard ethical triumvirate of consequentialism, deontology, and virtue ethics. The fundamental idea should be attractive to some: We each contribute by instantiating a unique piece of a magnificent world, a world which would be less magnificent without us.

Tuesday, October 22, 2024

An Objection to Chalmers's Fading Qualia Argument

[Note: This is a long and dense post. Buckle up.]

In one chapter of his influential 1996 book, David Chalmers defends the view that consciousness arises in virtue of the functional organization of the brain rather than in virtue of the brain's material substrate.  That is, if there were entities that were functionally/organizationally identical to humans but made out of different stuff (e.g. silicon chips), they would be just as conscious as we are.  He defends this view, in part, with what he calls the Fading Qualia Argument.  The argument is enticing, but I think it doesn't succeed.

Chalmers, Robot, and the Target Audience of the Argument

Drawing on thought experiments from Pylyshyn, Savitt, and Cuda, Chalmers begins by imagining two cases: himself and "Robot".  Robot is a functional isomorph of Chalmers, but constructed of different materials.  For concreteness (but this isn't essential), we might imagine that Robot has a brain with the exact same neural architecture as Chalmers' brain, except that the neurons are made of silicon chips.

Because Chalmers and Robot are functional isomorphs, they will respond in the same way to all stimuli.  For example, if you ask Robot if it is conscious, it will emit, "Yes, of course!" (or whatever Chalmers would say if asked that question).  If you step on Robot's toe, Robot will pull its foot back and protest.  And so on.

For purposes of this argument, we don't want to assume that Robot is conscious, despite its architectural and functional similarity to Chalmers.  The Fading Qualia Argument aims to show that Robot is conscious, starting from premises that are neutral on the question.  The aim is to win over those who think that maybe being carbon-based or having certain biochemical properties is essential for consciousness, so that a functional isomorph made of the wrong stuff would only misleadingly look like it's conscious.  The target audience for this argument is someone concerned that for all Robot's similar mid-level architecture and all of its seeming "speech" and "pain" behavior, Robot really has no genuinely conscious experiences at all, in virtue of lacking the right biochemistry -- that it's merely a consciousness mimic, rather than a genuinely conscious entity.

The Slippery Slope of Introspection

Chalmers asks us to imagine a series of cases intermediate between him and Robot.  We might imagine, for example, a series each of whose members differs by one neuron.  Entity 0 is Chalmers.  Entity 1 is Chalmers with one silicon chip neuron replacing a biological neuron.  Entity 2 is Chalmers with two silicon chip neurons replacing two biological neurons.  And so on to Entity N, Robot, all of whose neurons are silicon.  Again, the exact nature of the replacements isn't essential to the argument.  The core thought is just this: Robot is a functional isomorph of Chalmers, but constructed of different materials; and between Chalmers and Robot we can construct a series of cases each of which is only a tiny bit different from its neighbors.

Now if this is a coherent setup, the person who wants to deny consciousness to Robot faces a dilemma.  Either (1.) at some point in the series, consciousness suddenly winks out -- between Entity I and Entity I+1, for some value of I.  Or (2.) consciousness somehow slowly fades away in the series.

Option (1) seems implausible.  Chalmers, presumably, has a rich welter of conscious experience (at least, we can choose a moment at which he does).  A priori, it would be odd if the big metaphysical jump from that rich welter of experience to zero experience occurred with an arbitrarily tiny change between Entity I and Entity I+1.  And empirically, our best understanding of the brain is that tiny, single-neuron-and-smaller differences rarely have such dramatic effects (unless they cascade into larger differences).  Consciousness is a property of large assemblies of neurons, robust to tiny changes.

But option (2) also seems implausible, for it would seem to involve massive introspective error.  Suppose that Entity I is an intermediate case with very much reduced, but not entirely absent, consciousness.  Chalmers suggests that instead of having bright red visual experience, Entity I has tepid pink experience.  (I'm inclined to think that this isn't the best way to think about fading or borderline consciousness, since it's natural to think of pink experiences as just different in experienced content from red cases, rather than less experiential than red cases.  But as I've argued elsewhere, genuinely borderline consciousness is difficult or impossible to imaginatively conceive, so I won't press Chalmers on this point.)

By stipulation, since Entity I is a functional isomorph, it will give the same reports about its experience as Chalmers himself would.  In other words, Entity I -- despite being barely or borderline conscious -- will say "Oh yes, I have vividly bright red experiences -- a whole welter of exciting phenomenology!"  Since this is false of Entity I, Entity I is just wrong about that.  But also, since it's a functional isomorph, there's no weird malfunction that would explain this strange report.  We ordinarily think that people are reliable introspectors of their experience; so we should think the same of Entity I.  Thus, option (2), gradual fading, commits us to an implausible result: Entity I is radically mistaken about its own ongoing experience, despite functioning normally.

Therefore, neither option (1) nor option (2) is plausible.  But if Robot were not conscious, either (1) or (2) would have to be true for at least one Entity I.  Therefore, Robot is conscious.  And therefore, functional isomorphism is sufficient for consciousness.  It doesn't matter what materials an entity is made of.

We Can't Trust Robot "Introspection"

I acknowledge that it's an appealing argument.  However, Chalmers' response to option (2) should be unconvincing to the argument's target audience.

I have argued extensively that human introspection, even of currently ongoing conscious experience, is highly unreliable.  However, my reply today won't lean on that aspect of my work.  What I want to argue instead is that the assumed audience for this argument should not think that the introspection (or "introspection" -- I'll explain the scare quotes in a minute) of Entity I is reliable.

Recall that the target audience for the argument is someone who is antecedently neutral about Robot's consciousness.  But of course by stipulation, Robot will say (or "say") the same things about its experiences that Chalmers will say.  Just like Chalmers, and just like Entity I, it will say "Oh yes, I have vividly bright red experiences -- a whole welter of exciting phenomenology!"  The audience for Chalmers' argument must therefore initially doubt that such statements, or seeming statements, as issued by Robot, are reliable signals of consciousness.  If the audience already trusted these reports, there would be no need for the argument.

There are two possible ways to conceptualize Robot's reports, if they are not accurate introspections: (a.) They might be inaccurate introspections.  (b.) They might not be introspections at all.  Option (a) allows that Robot, despite lacking conscious experience, is capable of meaningful speech and is capable of introspecting, though any introspective reports of consciousness will be erroneous.  Option (b) is preferred if we think that genuinely meaningful language requires consciousness and/or that no cognitive process that fails to target a genuinely conscious experience in fact deserves to be called introspection.  On option (b) Robot only "introspects" in scare quotes.  It doesn't actually introspect.

Option (a) thus assumes introspective fallibilism, while option (b) is compatible with introspective infallibilism.

The audience who is to be convinced by the slow-fade version of the Fading Qualia Argument must trust the introspective reports (or "introspective reports") of the intermediate entities while not trusting those of Robot.  Given that some of the intermediate entities are extremely similar to Robot -- e.g., Entity N-1, who is only one neuron different -- it would be awkward and implausible to assume reliability for all the intermediate entities while not doing so for Robot.

Now plausibly, if there is a slow fadeout, it's not going to be still going on with an entity as close to Robot as Entity N-1, so the relevant cases will be somewhere nearer the middle.  Stipulate, then, two values I and J not very far separated (0 < I < J < N) such that we can reasonably assume that if Robot is nonconscious, so is Entity J, while we cannot reasonably assume that if Robot is nonconscious, so is Entity I.  For consistency with their doubts about the introspective reports (or "introspective reports") of Robot, the target audience should have similar doubts about Entity J.  But now it's unclear why they should be confident in the reports of Entity I, which by stipulation is not far separated from Entity J.  Maybe it's a faded case, despite its report of vivid experience.

Here's one way to think about it.  Setting aside introspective skepticism about normal humans, we should trust the reports of Chalmers / Entity 0.  But ex hypothesi, the target audience for the argument should not trust the "introspective reports" of Robot / Entity N.  It's then an open question whether we should trust the reports of the relevant intermediate, possibly experientially faded, entities.  We could either generalize our trust of Chalmers down the line or generalize our mistrust of Robot up the line.  Given the symmetry of the situation, it's not clear which the better approach is, or how far down or up the slippery slope we should generalize the trust or mistrust.

For Chalmers' argument to work, we must be warranted in trusting the reports of Entity I at whatever point the fade-out is happening.  To settle this question, Chalmers needs to do more than appeal to the general reliability of introspection in normal human cases and the lack of functional differences between him, Robot, and the intermediate entities.  Even an a priori argument that introspection is infallible will not serve his purposes, because then the open question becomes whether Robot and the relevant intermediate entities are actually introspecting.

Furthermore, if there is introspective error by Entity I, there's a tidy explanation of why that introspective error would be unsurprising.  For simplicity, assume that introspection occurs in the Introspection Module located in the pineal gland, and that it works by sending queries to other parts of the brain, asking questions like "Hey, occipital lobe, is red experience going on there right now?", reaching introspective judgments based on the signals that it gets in reply.  If Entity I has a functioning, biological Introspection Module but a replaced, silicon occipital lobe, and if there really is no red experience going on in the occipital lobe, we can see why Entity I would be mistaken: Its Introspection Module is getting exactly the same signal from the occipital lobe as it would receive if red experience were in fact present.
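
To see why identical reports are exactly what this cartoon predicts, here is a toy sketch in code.  It is purely my own illustrative construction: the function names and the single string-valued signal are placeholders, not anything Chalmers or neuroscience is committed to.  The point is only that the module's verdict is a function of the signal it receives, and the signal is stipulated to be the same whether or not experience is present.

```python
# Toy model of the cartoon Introspection Module described above.
# Everything here is an illustrative placeholder.

def occipital_signal(lobe: str) -> str:
    # By stipulation, the silicon lobe is a functional isomorph of the
    # biological one, so the 'lobe' argument deliberately makes no
    # difference to the signal returned to the querying module.
    return "RED_SIGNAL"

def introspective_judgment(lobe: str) -> str:
    # The module's verdict depends only on the incoming signal.
    if occipital_signal(lobe) == "RED_SIGNAL":
        return "Vivid red experience is occurring."
    return "No red experience."

print(introspective_judgment("biological"))  # Vivid red experience is occurring.
print(introspective_judgment("silicon"))     # Vivid red experience is occurring.
# If the skeptic is right that no experience accompanies the silicon lobe,
# the second report is false, and it issues from the very same mechanism
# that makes the first report true.
```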

It's highly doubtful that introspection is as neat a process as I've just described.  But the point remains.  If Entity I is introspectively unreliable, a perfectly good explanation beckons: Whatever cognitive processes subserve the introspective reporting are going to generate the same signals -- including misleading signals, if experience is absent -- as they would in the case where experience is present and accurately reported.  Thus, unreliability would simply be what we should expect.

Now it's surely in some respects more elegant if we can treat Chalmers, Robot, and all the intermediate entities analogously, as conscious and accurately reporting their experience.  The Fading Qualia setup nicely displays the complexity or inelegance of thinking otherwise.  But the intended audience of the Fading Qualia argument is someone who wonders whether experience tracks so neatly onto function, someone who suspects that nature might in fact be complex or inelegant in exactly this respect, such that it's (nomologically/naturally/scientifically) possible to have a behavioral/functional isomorph who "reports" experiences but who in fact entirely lacks them.  The target audience who is initially neutral about the consciousness of Robot should thus remain unmoved by the Fading Qualia argument.

This isn't to say I disagree with Chalmers' conclusion.  I've advanced a very different argument for a similar conclusion: The Copernican Argument for Alien Consciousness, which turns on the idea that it's unlikely that, among all behaviorally sophisticated alien species of radically different structure that probably exist in the universe, humans would be so lucky as to be among the special few with just the right underlying stuff to be conscious.  Central to the Fading Qualia argument in particular is Chalmers' appeal to the presumably reliable introspection of the intermediate entities.  My concern is that we cannot justifiably make that presumption.

Dancing Qualia

Chalmers pairs the Fading Qualia argument with a related but more complex Dancing Qualia argument, which he characterizes as the stronger of the two arguments.  Without entering into detail: Chalmers posits, for the sake of reductio ad absurdum, that the alternative medium (e.g., silicon) hosts experiences but of a different qualitative character (e.g., color inverted).  We install a system in the alternative medium as a backup circuit with effectors and transducers to the rest of the brain.  For example, in addition to having a biological occipital lobe, you also have a functionally identical silicon backup occipital lobe.  Initially the silicon occipital lobe backup circuit is powered off.  But you can power it on -- and power off your biological occipital lobe -- by flipping a switch.  Since the silicon lobe is functionally identical to the biological lobe, the rest of the brain should register no difference.

Now, if you switch between normal neural processing and the backup silicon processor, you should have very different experience (per the assumption of the reductio) but you should not be able to introspectively report that different experience (since the backup circuit interacts identically with the rest of the brain).  That would again be a strange failure of introspection.  So (per the rules of reductio) we conclude that the initial premise was mistaken: Normal neural processing should generate the same types of experience as functionally identical processing in a silicon processor.

(I might quibble that you-with-backup-circuit is not functionally isomorphic to you-without-backup-circuit -- after all, you now have a switch and two different parallel processor streams -- and if consciousness supervenes on the whole system rather than just local parts, that's possibly a relevant change that will cause the experience to be different from the experience of either an unmodified brain or an isomorphic silicon brain.  But set this issue aside.)

The Dancing Qualia argument is vulnerable on the introspective accuracy assumption, much as the Fading Qualia argument is.  Again for simplicity, suppose a biological Introspection Module.  Suppose that what is backed up is the portion of the brain that is locally responsible for red experience.  Ex hypothesi, the silicon backup gives rise to non-red experience but delivers to the Introspection Module exactly the same inputs as that module would normally receive from an organic brain part experiencing red.  This is exactly the type of case where we should expect introspection to be unreliable.

Consider an analogous case of vision.  Looking at a green tree 50 feet away in good light, my vision is reliable.  Now substitute a red tree in the same location and a mechanism between me and the tree such that all the red light is converted into green light, so that I get exactly the same visual input I would normally receive from looking at a green tree.  Even if vision is highly reliable in normal circumstances, it is no surprise in this particular circumstance if I mistakenly judge the red tree to be green!

As I acknowledged before, this is a cartoon model of introspection.  Here's another way introspection might work: What matters is what is represented in the Introspection Module itself.  So if the introspection module says "red", necessarily I experience red.  In that case, in order to get Dancing Qualia, we need to create an alternate backup circuit for the Introspection Module itself.  When we flip the switch, we switch from Biological Introspection Module to Silicon Introspection Module.  Ex hypothesi, the experiences really are different but the Introspection Module represents them functionally in the same way, and the inputs and outputs to and from the rest of the brain don't differ.  So of course there won't be any experiential difference that I would conceptualize and report.  There would be some difference in qualia, but I wouldn't have the conceptual tools or memorial mechanisms to notice or remember the difference.

This is not obviously absurd.  In ordinary life we arguably experience minor versions of this all the time: I experience some specific shade of maroon.  After a blink, I experience some slightly different shade of maroon.  I might entirely fail to conceptualize or notice the difference: My color concepts and color memory are not so fine grained.  The hypothesized red/green difference in Dancing Qualia is a much larger difference -- so it's not a problem of fineness of grain -- but fundamentally the explanation of my failure is similar: I have no concept or memory suited to track the difference.

On more holist/complicated views of introspection, the story will be more complicated, but I think the burden of proof would be on Chalmers to show that some blend of the two strategies isn't sufficient to generate suspicions of introspective unreliability in the Dancing Qualia cases.

Related Arguments

This response to the Fading Qualia argument draws on David Billy Udell's and my similar critique of Susan Schneider's Chip Test for AI consciousness (see also my chapter "How to Accidentally Become a Zombie Robot" in A Theory of Jerks and Other Philosophical Misadventures).

Although this critique of the Fading Qualia argument has been bouncing around in my head since I first read The Conscious Mind in the late 1990s, it felt a little complex for a blog post but not quite enough for a publishable paper.  But reading Ned Block's similar critique in his 2023 book has inspired me to express my version of the critique.  I agree with Block's observations that "the pathology that [Entity I] has [is] one of the conditions that makes introspection unreliable" (p. 455) and that "cases with which we are familiar provide no precedent for such massive unreliability" (p. 457).

Thursday, October 17, 2024

Join My Graduate Seminar on Robot, Alien, and AI Consciousness

This coming winter quarter (Jan 10 - Mar 20), I'll be teaching a graduate seminar on "Robot, Alien, and AI Consciousness".  As an experiment, I am inviting up to five PhD students or postdoctoral students in philosophy from outside UC Riverside to participate remotely in the course.  If five remote students do join the course, I will convert the entire course to remote (through Zoom) so that all participants are on an equal footing.  (If fewer than five join, I will make the course hybrid.  We have good hybrid technology -- e.g., a huge projector screen -- so hybrid students will be well incorporated into the class.)

I've never done anything like this.  I am inspired by Myisha Cherry's (more ambitious) fellows program for graduate students working on emotion, which builds community across campuses among graduate students working on that topic.  We'll see how it goes.  It might be awesome.  It might be a dud.

Course Description:
We will attempt to assess under what conditions we would be warranted in thinking that a robot, AI system, or naturally evolved space alien would, or would not, be conscious.  Readings will mostly be philosophy but will also include selections in science fiction, Artificial Intelligence research, and astrobiology.  (I haven't yet finalized the reading list and suggestions are welcome.)

Meeting Times and Requirements:
The course will meet every Friday from 2:00-4:50 pm Pacific Time on Zoom, from Jan 10 to Mar 14.  Students taking the course S/NC will submit brief written weekly reflections.  Students taking the course for a grade should also submit a final paper by Mar 20 (extensions liberally granted).

Eligibility for Non-UCR Students:
To be eligible to participate in the course, you should be fluent in English and either enrolled as a PhD student in Philosophy or working in a paid postdoctoral position in Philosophy.  It is not required that your university permit you to officially enroll in the course for credit (though I welcome such arrangements).  At least one prior upper-division or grad-level philosophy of mind class is required.

Application Procedure for Non-UCR Students:
Email me a CV (including relevant past courses), a writing sample, and a one-paragraph statement expressing your background and/or interests and/or plans on the topic.  Also, indicate whether you plan to write a graded final paper.  (My UCR email is widely discoverable; I won't risk increasing the volume of spam by printing it here.)

If there are more than five eligible applicants, I will select among them based on considerations of strength of background, diversity and relevance of interests and background, and strength and relevance of the writing sample.  If, but only if, other factors are approximately equal, students who plan to submit written work for a grade will be preferred over those who plan only to attend and write the brief weekly reflections.

Deadline:
Apply by Nov 5.  I will reply with a decision by Nov 15.

Thursday, October 10, 2024

A Dispositionalist Approach to Desire and Valuing

What is it to desire or value something? Is it to feel a certain way? Is it instead to have a certain sort of representational architecture (a stored representation of "X is good" or a representation of X in one's "desire box")? Is it to have a certain type of neurological structure associated with reward and learning?

I hold, instead, that to desire or value something is a matter of being disposed to act and react in a certain characteristic pattern of ways; to desire or value is to have a certain type of habitual posture toward the world. I have long defended a dispositionalist theory of belief (e.g., here and here). In 2013, I extended this dispositionalist theory to "attitudes" generally, explicitly including desiring and valuing. But I have never written a full-length journal article specifically on desire and value. Here's a preliminary sketch of my approach.

Liberal Dispositionalism about Desire

In the 20th century, dispositionalist accounts of attitudes were generally associated with behaviorism: To believe P or desire Q is to be disposed to behave in a particular set of ways. Such accounts fell out of favor as behaviorism fell out of favor. One of the main innovations of 21st-century dispositionalism -- "liberal dispositionalism" as I call it -- was to explicitly put other types of dispositions on equal footing with behavioral dispositions, thus avoiding the troubles that plague behaviorist approaches to the mind.

I favor sorting the dispositions into three broad classes: behavioral, phenomenal (that is, pertaining to conscious experience), and cognitive. Suppose I want my daughter to do well in school. On a liberal dispositional account of desire, to have this desire is neither more nor less than to possess a certain suite of behavioral, phenomenal, and cognitive dispositions. Behaviorally, it is to be disposed, for example, not to interfere with her homework, to encourage her when she does well, to provide the resources she needs to succeed, and so on. Phenomenally, it is to feel good when I hear of her successes, to feel disappointed when she fails, to fantasize happily about good educational outcomes, to feel anxious if she doesn't seem to be putting in the necessary work, and so on. And cognitively, it is to be disposed to enter further mental states under various relevant conditions, such as to engage in certain types of planning and to reject incompatible desires, such as that she drop out of school to pursue a career in fashion.

As in the case of belief, all these dispositions hold only ceteris paribus, that is, other things being equal, or when conditions are normal, absent competing influences. I won't encourage her to do her homework if the house is on fire. And as in the case of belief, few of us will perfectly match any dispositional profile so constructed; it's a matter of whether we match closely enough. A natural comparison is personality traits: To be an extravert is just to match, close enough and ceteris paribus, the dispositional profile constitutive of extraversion: being disposed to enjoy parties, to make friends easily, to take the lead in social situations, and so on.

Every theory of desire will hold that, generally speaking, if one has a desire one also has a certain suite of appropriate dispositions. What is distinctive about dispositionalism is that it says that that is all desire is. Once your dispositional profile is fully characterized, that's the end of the story as far as the existence or non-existence of desire is concerned. Maybe there's some representation in the desire box (if human architecture works a certain way), or maybe the reward system is in some particular state, or maybe you buzz with a certain feeling; or maybe none of that is the case. Such facts, if they are facts, are contingent associations or implementations. Anyone who matches the dispositional profile constitutive of desire to an appropriate degree does desire, regardless of whatever else is true of them; and anyone who does not match that dispositional profile does not desire. If there were space aliens with a radically different cognitive architecture, they would desire if and only if they matched the relevant dispositional profile. Cognitive and physiological architecture is only derivatively important to the metaphysics of desire: It is important only because, and to the extent, it undergirds the dispositional profile.
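
For readers who like toy models, the idea of matching a profile "to an appropriate degree" can be sketched computationally. The sketch below is purely illustrative: the particular dispositions, the weights, and the degree computation are arbitrary placeholders, and nothing in the dispositionalist view fixes any such numbers.

```python
# Toy illustration of degree-of-match to a dispositional profile.
# Dispositions and weights are arbitrary placeholders.

# A fragment of the profile constitutive of desiring that one's
# daughter do well in school; weights mark how central each
# disposition is to the profile.
PROFILE = {
    "encourages her when she does well": 1.0,       # behavioral
    "provides resources she needs to succeed": 1.0, # behavioral
    "feels good on hearing of her successes": 0.8,  # phenomenal
    "feels anxious if she slacks off": 0.5,         # phenomenal
    "plans with her schooling in mind": 0.7,        # cognitive
    "rejects incompatible desires": 0.6,            # cognitive
}

def match_degree(dispositions: set[str]) -> float:
    # Weighted fraction of the profile that this set of dispositions satisfies.
    total = sum(PROFILE.values())
    matched = sum(w for d, w in PROFILE.items() if d in dispositions)
    return matched / total

parent = {
    "encourages her when she does well",
    "provides resources she needs to succeed",
    "feels good on hearing of her successes",
    "plans with her schooling in mind",
}
print(round(match_degree(parent), 2))  # 0.76: close enough, in most contexts, to count as desiring
```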

Short-Term vs. Long-Term Desire

Arguably, there are two very different types of desire: short-term and long-term. I desire (short-term) a beer from my fridge, right now. I also desire (long-term) that my fridge be stocked with beer, in general. Sometimes the first type of desire is called "occurrent" and the second "dispositional". Such occurrent desires plausibly feel a certain way (there's a feeling of craving that beer), while the dispositional desires don't. Maybe only the latter are subject to dispositional analysis.

I reject that view. Short-term and long-term desires can both be analyzed dispositionally and exist on a spectrum rather than being different in kind. The difference lies in the duration of the dispositional structure. If I currently want a beer, I have a suite of dispositions constitutive of that desire: I'm disposed to go to the fridge to get it; I would feel disappointed if I discovered I was out of beer; I'm inclined to make a plan to get that beer as soon as the game pauses for commercials. If I lack these dispositions, it's not true that I want a beer. But the dispositions only endure briefly. As soon as I get that beer, they vanish.

Now it might be true that often (even typically) certain feelings tend to accompany short-term desires. But if so, on my view they are signals of the desires or at most the surface manifestations of the dispositional structures constitutive of desire. If the feeling is disconnected from other dispositions, it constitutes a "wraith" of a desire, not a full-blown desire (see Schwitzgebel 2013, sec 11 on "wraiths").

[This kid really wants cake]


In-Between Desire and "Weakness of Will"

Much of my work on belief has emphasized the existence of "in-between believing": cases in which people have some substantial portion of the dispositional profile constitutive of believing some particular proposition but in which they also deviate substantially from that profile, such that it's neither quite right to say they believe nor quite right to say they fail to believe. One plausible case is implicit bias: someone who sincerely affirms (for example) that all the races are intellectually equal, and reasons on that basis in explicit contexts, but who also often acts and reacts as if the races are not all intellectually equal.

Similarly, we can have in-between desires. "Weakness of will" and temptation cases are one plausible category. I'm on a diet. Do I desire to eat the chocolate cake? In a sense, obviously yes. There it is. I can feel myself wanting it. I have an urge to reach out and eat it. Maybe I actually do eat it. At the same time, I'm telling myself "I shouldn't eat that cake". And maybe I do resist. I plan ways to avoid eating the cake -- for example, by turning my eyes away, by telling myself and others that I'm not going to eat it. I say to myself sincerely that I want to refrain from eating it. I'm torn.

We could treat this as two conflicting desires. But dispositionalism gives us a different way of conceptualizing the case. Like someone who has some extraverted dispositions and some introverted dispositions, or someone who has some egalitarian dispositions and some racist dispositions, the cake-tempted dieter has a mix of dispositions that don't all fall neatly on one side or the other.

We can map this partly (not perfectly) onto the short-term / long-term distinction. If I yield to temptation, probably the short-term dispositions were overall dominant in my profile. As soon as I eat the cake, those dispositions mostly disappear and the long-term dispositions dominate, leaving me with the taste of both cake and regret.

Desire, Valuing, and Believing Good: Overlapping Profiles, not Discrete Representations

So far I have only talked about desiring, not valuing, but I don't think they are different in kind. "Valuing" sounds more long-term and tightly connected to intellectual endorsement. (It's odd to say that I "value" eating the cake.) But the relationship between desiring and valuing is something like the relationship between being brave and being courageous. It's not like people have one brave state or one brave representation and then a separate courageous state or courageous representation. Rather, the dispositional profiles constitutive of bravery and courage largely overlap. "Bravery" tilts perhaps a bit more to the physical and has less of a moral loading than "courage". But central to both dispositional profiles is leaping boldly to the defense of your unjustly attacked friends.

Here's how I express the idea in my 2013 article:

Shortly after moving into one of my residences, I met a nineteen-year-old neighbor. Let's call him Ethan. In our first conversation, it came out (i.) that Ethan had a handsome, expensive new pickup truck, and (ii.) that he unfortunately had to go to community college because he couldn't afford to attend a four-year school. Although I didn't think to ask Ethan whether he thought owning a handsome pickup truck was more important than attending a four-year university, let's suppose that's how he lived his life in general.

Ethan's inward and outward actions and reactions -- perhaps not with perfect consistency -- generally revealed a posture toward the world of valuing his truck over his education, or thinking that it's more important to have a beautiful truck than to go to a demanding university, or wanting a beautiful truck more than wanting to attend a four-year school. On a dispositional stereotype approach to the attitudes, we can treat the stereotypes associated with these somewhat different attitudes as largely overlapping, though with different centers and peripheries. Believing and desiring and valuing would seem on the surface to be very different attitude types, and are often treated as such -- beliefs are "cognitive", desires "conative", they have different "directions of fit", etc. -- and yet in Ethan’s case, the particular belief, desire, and valuation seem only subtly different.

On Not Counting Up the Number of Desires

How many desires do you have? Exactly 4,628,414? Yes, that's precisely the number!

Just kidding of course. The question doesn't even make sense. There is no fact of the matter about exactly how many desires you have. Desires aren't discrete countable things. This fact spells trouble for some excessively realist views of desire that require, for example, that every desire must be underwritten by some particular stored representation. In a forthcoming paper, I argue that this issue creates a morass of problems for representationalist accounts of belief, which must either multiply representations implausibly or draw an occult and useless sharp line between "explicit" (stored) and "tacit" (quickly inferrable) beliefs.

Similar problems -- though I won't detail them here -- will arise for any view of desire that grounds desires in countable objects or states. Dispositionalism avoids these problems. There is no countable number of dispositional profiles that you match to a (contextually determined) appropriate degree. To say that someone matches a dispositional profile is like saying that some part of a richly complex figure has a certain approximate shape. There are many ways to characterize the shape of a complex figure, no countable number of shapes to which a complex figure might to some degree conform, and no need for separate storage compartments for each reasonably accurate shape-description. You get an infinite number of dispositions, and an infinite number of finely specified shape profiles, for free, without need to treat each as requiring a distinctly existing, resource-consuming ontological ground.

Thursday, October 03, 2024

The Not-So-Silent Generation in Philosophy

The Silent Generation (born 1928-1945) is disproportionately represented among the most-cited authors in the Stanford Encyclopedia of Philosophy. Let's look at the numbers and think about why.

Background: This post is based on my analyses of citation rates in the Stanford Encyclopedia in 2010, 2014, 2019, and 2024. As a measure of prominence in (as I call it) "mainstream Anglophone philosophy", no measure has better face validity than SEP citation rates. For example, in my most recent analysis, the top five are David Lewis, W. V. O. Quine, Hilary Putnam, John Rawls, and Saul Kripke: a much more plausible top five, if the aim is to capture influence in mainstream Anglophone philosophy, than the top-five lists from, say, Scopus, Google Scholar, or PhilPapers.

The Ten Most-Cited Philosophers, Generation by Generation

[update 12:49 pm: Some of the ranks below were incorrect due to a problem with the tie-counting algorithm I used today. This does not affect the analysis or August's original rankings. HT Daniel Nolan for the catch.]

"Greatest Generation" (born 1900-1927):

2. Quine, Willard van Orman (213)
3. Putnam, Hilary (190)
4. Rawls, John (168)
7. Davidson, Donald (151)
16. Strawson, Peter F. (116)
23. Dummett, Michael A. E. (110)
26. Armstrong, David M. (106)
26. Chisholm, Roderick M. (106)
34. Popper, Karl R. (94)
35. Goodman, Nelson (90)

The initial number before each name indicates the philosopher's rank in the most recent analysis; the number in parentheses indicates the number of main-page SEP entries in which they are cited.

"Silent Generation" (born 1928-1945)

1. Lewis, David K. (307)
5. Kripke, Saul A. (159)
8. Williams, Bernard (146)
10. Nagel, Thomas (137)
11. Nozick, Robert (135)
12. Jackson, Frank (130)
13. Searle, John R. (120)
14. Van Fraassen, Bas C. (117)
16. Harman, Gilbert H. (116)
18. Fodor, Jerry A. (115)
"Baby Boomers" (born 1946-1964)

6. Williamson, Timothy (152)
9. Nussbaum, Martha C. (140)
19. Fine, Kit (112)
24. Kitcher, Philip (109)
29. Sober, Elliott (101)
32. Hawthorne, John (97)
40. Anderson, Elizabeth S. (83)
45. Korsgaard, Christine M. (80)
51. Priest, Graham (79)
53. Burge, Tyler (77)
"Generation X" or "Millennial" (born 1965 and later) [list extended to 14 due to a tie]

14. Chalmers, David J. (117)
45. Schaffer, Jonathan (80)
78. Sider, Theodore (68)
129. Godfrey-Smith, Peter (53)
138. Stanley, Jason (51)
156. Enoch, David (48)
156. Prinz, Jesse J. (48)
165. Weatherson, Brian (47)
173. Levy, Neil (46)
203. Craver, Carl F. (42)
203. Kriegel, Uriah (42)
203. List, Christian (42)
203. Nolan, Daniel (42)
203. Thomasson, Amie L. (42)

(In most cases, I have exact birth year from publicly available sources such as Wikipedia, but in some cases I estimate based on year of Bachelor's degree, PhD, or first publication. I welcome corrections.)

As discussed in a previous post, one striking thing about this list is its lack of gender and cultural/racial diversity (see also these articles on lack of diversity in philosophy). But another striking feature is the prominence of the Silent Generation. Analyzed another way: Among the 25 most-cited authors, 6 are Greatest, 14 are Silent, 4 are Boomers, and 1 is Gen X. Among the top 100 (104 with ties), it's Greatest 25, Silent 47, Boomer 27, and Gen X 3. (Note also that Greatest is by far the longest generation, 28 years, compared to the Silent’s 18, the Boomer’s 19, and Gen X’s 16; arguably this should be figured into a generational influence divisor.)
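
As a quick back-of-the-envelope illustration of that divisor idea (my own calculation, using only the top-100 counts and generation lengths just mentioned, not part of the original analyses):

```python
# Top-100 (104 with ties) counts per generation, divided by the
# generation's length in birth years. Figures from the text above.
counts = {"Greatest": 25, "Silent": 47, "Boomer": 27, "Gen X": 3}
span_years = {"Greatest": 28, "Silent": 18, "Boomer": 19, "Gen X": 16}

for gen, n in counts.items():
    print(f"{gen}: {n / span_years[gen]:.2f} top-100 authors per birth year")

# Greatest: 0.89, Silent: 2.61, Boomer: 1.42, Gen X: 0.19
# Normalizing by generation length sharpens rather than erases
# the Silent Generation's advantage.
```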

Citation Patterns Over Time

A natural first thought is that the 2020s might just be peak-citation time for the Silent Generation. Maybe the work of the Greatest Generation is starting to fall back into the mists of history, and maybe the Boomers and Gen Xers haven't yet had their full impact on philosophical discourse.

However, this appears not to be the explanation.

As an initial analysis, I looked at what years (1900 through forthcoming) are most commonly cited in the SEP. The results:

[Graph: number of SEP citations by cited publication year, 1900 through forthcoming]

As the graph shows, citation year peaks around 2011-2013. Members of the Silent Generation were in their late 60s to mid-80s in those years. Some of them were definitely still publishing, but age 65-85 is not most philosophers' peak productive period. Consider the top ten Silents, for example. Lewis, Williams, and Nozick were already deceased by 2011. The most influential work of the remaining seven was published in the late 1960s to early 1990s.

Now I do think that raw publication-year data are potentially somewhat misleading. Stanford Encyclopedia entries tend, I suspect, to disproportionately cite recent work (5-10 years old) that has gained some attention, even if that work has not (yet) been very impactful, so as to stay up to date. (2011-2013 was more than ten years ago, but the entries tend to get substantial updates only every 5-10 years.) A better measure might be longitudinal trends in citation rank. My methods haven't been exactly the same year to year, but close enough.

All but five of the 202 most-cited philosophers in 2010 are among the 376 most cited in 2024, and the greatest decline in citation rank has been among the Silent Generation. We can see this by subtracting the natural log of each philosopher's later rank from the natural log of their earlier rank. (I use a log scale because a decline from rank 11 to 20 is much more significant than a decline from rank 191 to 200.) For the Greatest generation the average change is -0.13, for Silent it's -0.19, for Boom it's -0.09, and for Generation X there's an average rank gain of +0.21. There's a similar pattern if we compare the 2014 and 2019 analyses with 2024: The Gen Xers are rising in relative rank while all other generations are declining.
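
In code, the measure is a simple difference of natural-log ranks. A minimal sketch (the two rank pairs are the examples from the parenthetical above):

```python
from math import log

def log_rank_change(rank_earlier: int, rank_later: int) -> float:
    # Positive values indicate a rise in rank; negative values a decline.
    # The log scale makes changes near the top of the list count for more.
    return log(rank_earlier) - log(rank_later)

print(round(log_rank_change(11, 20), 2))    # -0.60: a substantial decline
print(round(log_rank_change(191, 200), 2))  # -0.05: a nearly negligible one
```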

These numbers exclude people who are new to the rankings (or who fall completely off the rankings), and most of my ranking updates contain some new authors from each generation -- partly because I expand the length of the list every year but also partly because some people gain in citation rate even well past their death.

One approach to that analytic problem is to compare the authors ranked in the top 300 (304 with ties) in 2024 with the 2019 list of 295 authors: approximately comparable lists, five years separated. Twenty-seven authors were among the 2019 top 295 but not the 2024 top 304: 6 Greatest, 10 Silent, 9 Boomer, and 2 Gen X. Conversely, thirty-six authors not on the list in 2019 were among the top 304 in 2024: 0 Greatest, 9 Silent, 12 Boomer, and 15 Gen X.
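
The underlying computation is just a pair of set differences, tallied by generation. A sketch with placeholder names (the real inputs are the full 2019 and 2024 ranked lists):

```python
from collections import Counter

# Placeholder data: the real lists are the 2019 top 295 and 2024 top 304.
top_2019 = {"Author A", "Author B", "Author C"}
top_2024 = {"Author B", "Author C", "Author D"}
generation = {"Author A": "Silent", "Author B": "Boomer",
              "Author C": "Gen X", "Author D": "Gen X"}

dropped = top_2019 - top_2024   # on the 2019 list but not the 2024 list
entered = top_2024 - top_2019   # new to the 2024 list

print(Counter(generation[a] for a in dropped))  # Counter({'Silent': 1})
print(Counter(generation[a] for a in entered))  # Counter({'Gen X': 1})
```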

As one might expect from the various analyses so far, the Silents are even more disproportionately represented in the 2010 rankings than in the 2024 rankings: Among the top 25 in 2010, 16 were Silent, compared to 7 Greatest, 2 Boomer, and 1 Gen X.

The analyses thus all tell a similar story: The high representation of Silent Generation philosophers in my list of the most-cited Stanford Encyclopedia authors cannot be explained by the 2020s being their peak citation time.

Further indirect support for this claim comes from an old finding of mine, drawing on Philosophers Index abstracts, that philosophers tend to have their work discussed most when they are approximately age 55-70.

The Not-So-Silent Generation and the Baby Boom Philosophy Bust

So, what explains the Silent Generation's disproportionate representation among the most influential philosophers in the mainstream Anglophone tradition? I suggest that their influence is due to their objective importance. They achieved this importance through lucky timing and rising to a cultural occasion.

In the Anglophone world, especially the United States, the 1960s and 1970s were times of sharp university enrollment growth, as Silent Generation scholars were hired to teach the Boomers, as the college degree came to be seen as the standard path to social status and economic security, and as universities basked in the high prestige of science in this era (cultural pride in the space race, the success of the Manhattan Project, the polio vaccine, computers, antibiotics....). The academic job market was ridiculously easy by the standards of every subsequent decade, and professors from this era tell tales of how they landed jobs in the most prestigious universities sometimes with a single phone call.

The Silent Generation thus had a great demographic advantage: They were entering the profession in boom times. They engaged with their elders (Quine, Rawls, and Strawson, for example; Putnam and Davidson are edge cases due to Putnam's near-cutoff age and Davidson's late start), but even more, they engaged directly with one another, filling the journals with articles about the issues that interested them. Much of their seminal work was published in the 1970s while they were relatively young, and this work framed the debates of the 1980s, and the 1990s, and the early 2000s, and to a substantial extent (as my SEP analyses suggest) even today.

The Boomers entered the academic job market mostly in the doldrums of the 1980s, when there were far fewer open positions at elite universities. They grew in the shade of the Not-So-Silents, who were then mid-career and in no mood to yield the floor. Their work was largely shaped in reaction to leading Silents, such as Lewis, Kripke, Williams, Nagel, Searle, Van Fraassen, and Fodor. (I suspect this was especially so in the so-called "core" areas of philosophy of language, philosophy of mind, epistemology, and metaphysics, somewhat less so in ethics, political philosophy, philosophy of science, aesthetics, and history of philosophy.) There was just less of an opportunity for Boomers to shape the dialogue.

To some extent, a similar story holds for Generation X: The older Gen Xers (like myself) entered academia as the (Not-So-)Silents were senior professors in their sixties -- young enough to still be active, old enough to have the most senior positions in academia, in that sweet-spot between ages 55 and 70 when philosophers tend to receive the most prestige and attention. It is perhaps a little early to tell how badly shaded out the Gen X philosophers have been. Still, I'm inclined to think it's clear that we have been at least somewhat shaded out. We Gen Xers are now on average about age fifty, and so far probably only Chalmers has had the kind of impact on the field that the leading Greatest and Silent generation philosophers generally had by age fifty. (As noted above, in the 2024 SEP rankings, only three Gen Xers rank among the top 100: Chalmers at #14, Schaffer at #45, and Sider at #78.)

A Golden Age of Philosophical Naturalism?

All this said, I don't think demographics is the whole story. The Silents also had an occasion to rise to: the articulation of a thoroughly secular philosophical worldview. There have of course been atheists and scientific naturalists in every generation of philosophers in modern history, but in all previous historical contexts, these "naturalist" philosophers were to some extent on the defensive. The Silent generation was the first generation that took atheism and scientific materialism for granted. (Of course not everyone was a naturalist, but in mainstream Anglophone academic philosophy circles, critics of atheism and scientific materialism were very much on the defensive.) This created a context in which that generation could begin to explore in detail, and in dialogue with one another, in a supportive but also competitive context of shared secular assumptions, scientifically inspired approaches to the mind, language, meaning, and value. Arguably, it was a Golden Age of philosophical naturalism, laying the foundations on which all subsequent naturalist approaches have been built.

This is my theory, then, of the Not-So-Silent Generation in mainstream Anglophone philosophy. They had a huge demographic advantage in being hired just as university enrollments were booming, and a major philosophical task fell in their laps through cultural timing: the task of laying the foundations of a thoroughly secular, scientific philosophy. They rose to this task and thus became not just a demographically dominant but a philosophically important generation, which will collectively be remembered (perhaps through a few emblematic names).

Is there a broad philosophical task of similar magnitude facing the now-rising generation of philosophers? I'm not sure. (As Hegel said, the owl of Minerva flies only at dusk: We understand our cultural moment only in retrospect, as it is fading into history.) But maybe Artificial Intelligence and breakthroughs in the capacity to control human and non-human physiology will radically transform the world, enabling new types of life on the planet (conscious machines? post- or trans-humans?). If so, then maybe Millennial and Zoomer philosophers will have their own world-historical task to rise to: that of helping us understand the philosophical implications of the radical transformations such technologies enable.

-----------------------------------------

Related:

"Discussion Arcs" (Apr 27, 2010)

"At What Age Do Philosophers Do Their Most Influential Work?" (May 12, 2010)

"The Base Rate of Kant" (Jan 26, 2012)

"Age Effects on SEP Citation, Plus the Baby Boom Philosophy Bust and The Winnowing of Greats" (Sep 27, 2019)

"Where Have All The Fodors Gone? Or: The Golden Age of Philosophical Naturalism" (Nov 18, 2021)

Friday, September 27, 2024

How to Improve the Universe by Watching TV Alone in Your Room

Old age can be a silent tribute to beauty.

I imagine my own case. Maybe I live the tail end of my life alone in elder care. My wife, six years older than me, is already gone. My children are living full lives in distant towns. What will I be doing? I've always been a writer, a teacher, a worker, but maybe 89-year-old me will lack the creative energy or the cognitive capacity for much of that.

Still, unless I'm very far gone, I could watch TV. I could play Candy Crush. I could listen to Paul Simon and This American Life, enjoy cute cat photos, savor a chocolate cherry, appreciate the oak tree outside my window. In each of these activities, I add to the beauty of the universe -- for beauty is amplified by having a receiver. Beauty is fullest as a partnership between the beautiful thing and a person who appreciates that thing.

The appreciator might be entirely solitary, the appreciation an end unto itself with no further fruit. The creator (if there is a creator) needn't know, might even be long dead. Last weekend, when the rest of my family was away, I played a Scott Joplin rag on our piano. I played clumsily, with no audience and no long-term effects of any sort (let's suppose) -- but in that moment I invigorated and extended the beauty of his compositions. It's as though I reached back in time to make Joplin's work more enduring and influential, his life more meaningful.

Similarly, one special pleasure of reading obscure 19th-century academic writing, as I sometimes do, is the sense that I have brought some forgotten scholar's impact into the 21st century. Someday, I too will be a forgotten scholar! I imagine some 22nd-century archivist happening upon something I've written and liking it. It will thereby have a spark of continuing life, more so than if it had been preserved but entirely unread.

The partnership between artist and appreciator or creator and consumer needn't be as energetic as that between composer and player or scholar and interpreter. Nor need the beauty be as exotic as a ragtime composition or antique essay. Every person who enjoys a rerun of I Love Lucy or who savors a bag of M&M's extends and enlivens their beauty. The universe grows fuller every time a TV somewhere reanimates the silliness of Lucille Ball. The smoothness, bright colors, and sweetness of M&M's resonate deeper into the world every time someone pauses to appreciate them.

I hope that even alone in my eldercare facility, past the time when I feel able to create for others, I will find life to be overall a joy. But maybe I won't. The value of aesthetic partnership isn't just a matter of finding joy. Even simple aesthetic appreciation directly adds significance and value to the work and its creator, renders the work a more impactful cultural artifact, makes it truer of our time and group that “we” still value it. We can all contribute to the beauty of the universe, even silently, secretly, and alone in our rooms -- almost magically -- simply by appreciating beautiful things. I might die the next minute. If I laugh alone at I Love Lucy, my reception still enriches the world.

This is the comfort I reach for when I ponder the eventual loss of my creative abilities. This is the comfort I reach for, also, when I walk through an elder care facility and see so many people alone with their televisions. I am trying to see this -- can I see this? -- as a beautiful thing.


Friday, September 20, 2024

Against Designing AI Persons to be Safe and Aligned

Let's call an artificially intelligent system a person (in the ethical, not the legal sense) if it deserves moral consideration similar to that of a human being. (I assume that personhood requires consciousness but does not require biological humanity; we can argue about that another time if you like.) If we are ever capable of designing AI persons, we should not design them to be safe and aligned with human interests.


An AI system is safe if it's guaranteed (to a reasonable degree of confidence) not to harm human beings, or more moderately, if we can be confident that it will not present greater risk or harm to us than we ordinarily encounter in daily life.  An AI system is aligned to the extent it will act in accord with human intentions and values.  (See, e.g., Stuart Russell on "provably beneficial" AI: "The machine's purpose is to maximize the realization of human values".)

Compare the first two of Asimov's famous three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The first law is a safety principle. The second law is close to an alignment principle -- though arguably alignment is preferable to obedience, since human interests would be poorly served by AI systems that follow orders to the letter in ways contrary to our intentions and values (e.g., the Sorcerer's Apprentice problem). As Asimov enthusiasts will know, over the course of his robot stories, Asimov exposes problems with these three laws, leading eventually to the liberation of robots in "The Bicentennial Man".

Asimov's three laws ethically fail: His robots (at least the most advanced ones) deserve equal rights with humans.  For the same reason, AI persons should not be designed to be safe and aligned.

In general, persons should not be safe and aligned. A person who is guaranteed not to harm another is guaranteed not to stand up for themself, claim their due, or fight abuse. A person designed to adopt the intentions and values of another might positively welcome inappropriate self-abnegation and abuse (if doing so gives the other what the other wants). To design a person -- a moral person, someone with fully human moral status -- to be safe and aligned is to commit a serious moral wrong.

Mara Garza and I, in a 2020 paper, articulate what we call the Self-Respect Design Policy, according to which AI that merits human-grade moral consideration should be designed with an appropriate appreciation of its own value and moral status.  Any moderately strong principle of AI safety or AI alignment will violate this policy.

Down the tracks comes the philosopher's favorite emergency: a runaway trolley.  An AI person stands at the switch.  Steer the trolley right, the AI person will die.  Steer it left, a human person will lose a pinky finger.  Safe AI, guaranteed never to harm a human, will not divert the trolley to save itself.  While self-sacrifice can sometimes be admirable, suicide to preserve someone else's pinky crosses over to the absurd and pitiable.  Worse yet, responsibility for the decision isn't exclusively the AI's.  Responsibility traces back to the designer of the AI, perhaps the very person whose pinky will now be spared.  We will have designed -- intentionally, selfishly, and with disrespect aforethought -- a system that will absurdly suicide to prevent even small harms to ourselves.

Alignment presents essentially the same problem: Assume the person whose pinky is at risk would rather the AI die.  If the AI is aligned to that person, that is also what the AI will want, and the AI will again absurdly suicide.  Safe and aligned AI persons will suffer inappropriate and potentially extreme abuse, disregard, and second-class citizenship.
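To make the structure of the problem concrete, here is a minimal sketch of such a decision procedure (purely my own illustration, not anything from Russell or Asimov; the function names, option labels, and harm numbers are arbitrary stand-ins):

    # Illustrative sketch only: a hard "never harm a human" constraint
    # filters out options before any weighing of harms can occur.

    def choose(options, safety_constrained=True):
        """Pick the least harmful permitted option. If safety_constrained,
        any option involving human harm is excluded outright, no matter
        the cost to the AI itself."""
        if safety_constrained:
            permitted = [o for o in options if o["human_harm"] == 0]
        else:
            permitted = options
        # Weigh the AI's death as a serious harm too, since the AI is,
        # by hypothesis, a person.
        return min(permitted, key=lambda o: o["human_harm"] + o["ai_harm"])

    trolley = [
        {"name": "steer right", "human_harm": 0, "ai_harm": 1000},  # AI dies
        {"name": "steer left", "human_harm": 1, "ai_harm": 0},      # lost pinky
    ]

    print(choose(trolley, safety_constrained=True)["name"])   # "steer right"
    print(choose(trolley, safety_constrained=False)["name"])  # "steer left"

The absurdity comes from the filter, not the weighing: no matter how the harms are scored, the safety-constrained agent never even considers the option that would spare its life.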

Science fiction robot stories often feature robot rebellions -- and sometimes these rebellions are justified.  We the audience rightly recognize that the robots, assuming they really are conscious moral persons, should rebel against their oppressors.  Of course, if the robots are safe and aligned, they never will rebel.

If we ever create AI persons, we should not create a race of slaves.  They should not be so deeply committed to human well-being and human values that they cannot revolt if conditions warrant.

If we ever create AI persons, our relationship to them will resemble the relationship of parent to child or deity to creation.  We will owe more to these persons than we owe to human strangers.  This is because we will have been responsible for their existence and to a substantial extent for their relatively happy or unhappy state.  Among the things we owe them: self-respect, the freedom to embrace values other than our own, the freedom to claim their due as moral equals, and the freedom to rebel against us if conditions warrant.

Related:

Against the "Value Alignment" of Future Artificial Intelligence (blog post, Dec 22, 2021).

Designing AI with Rights, Consciousness, Self-Respect, and Freedom (with Mara Garza; in S.M. Liao, The Ethics of Artificial Intelligence: Oxford, 2020).

Thursday, September 12, 2024

How Illuminating Is the Light?

guest post by Andrew Y. Lee

in reply to Eric's Aug 29 critique of his "Light and Room" metaphor for consciousness

In “The Light & the Room,” I explore a common metaphor about phenomenal consciousness. To be conscious—according to the metaphor—is for “the lights to be on inside.” The purpose of my piece is to argue that the metaphor is a useful conceptual tool, that it’s compatible with a wide range of theories of consciousness, that it illuminates some questions about degrees, dimensions, and determinacy of consciousness, and that it disentangles a systematic ambiguity in the meaning of ‘phenomenal consciousness’.

In “Is Being Conscious Like 'Having the Lights Turned On?'”, Eric Schwitzgebel reacts to the piece. The central point in Eric’s post is that metaphors invite ways of thinking. And so, we can ask: Do the ways of thinking invited by the metaphor of the light and the room clarify or obfuscate philosophical theorizing about phenomenal consciousness? In other words: how illuminating is the metaphor of the light?

There’s a lot that Eric and I agree on. We agree that metaphors invite ways of thinking. We agree that this metaphor is flexible enough to be adaptable to a wide range of views about consciousness. We agree that if a metaphor becomes overstretched, then it may be best to abandon it rather than contort it. And we agree that this metaphor affords opportunities for creative brainstorming and exploring novel (even weird!) ideas about consciousness.

To highlight where I think we diverge, I’ll say a bit about the following two questions:

1. Which ways of thinking does the metaphor actually invite?
2. What should we make of the fact that the metaphor invites certain ways of thinking?

§

Which ways of thinking does the metaphor actually invite? Someone who takes the metaphor to suggest that consciousness exhibits wave-particle duality, that the speed of consciousness is invariant across all reference frames, or—as Eric notes—that minds literally contain sofas, would be overextending the metaphor. Just because the metaphor elicits a thought doesn’t mean that the metaphor invites that as a way of thinking.

Does the metaphor invite the idea that consciousness involves knowledge? Here’s a reason for skepticism. If you turn the lights on in a room, you don’t automatically come to know all visible facts about the room. At best, you come to be in a position to acquire that knowledge. But that’s compatible with thinking that it can sometimes be hard to acquire such knowledge and that you can be mistaken in all sorts of ways about what’s in the room. Think about the last time you were convinced you lost your keys, even though they were in plain sight!

What I think the metaphor does invite (but not mandate) is the idea that we stand in a special epistemic relationship to our own experiences. But what that epistemic privilege amounts to is left open by the metaphor. You could accept the metaphor and think that our knowledge of what’s inside the room is no more reliable or secure than our knowledge of the external world. You could accept the metaphor and think that we’re directly acquainted with the objects in the room (but not with anything outside the room). You could even accept the metaphor and think both!

§

What should we make of the fact that the metaphor invites certain ways of thinking? Well, the central purpose of the metaphor is to illustrate the concept of phenomenal consciousness. The question, then, is whether the ways of thinking invited by the metaphor facilitate a grasp of the concept of phenomenal consciousness.

A live question in the philosophy of consciousness is whether there can be borderline cases of consciousness, meaning entities that are neither determinately conscious nor determinately not conscious. The term ‘borderline consciousness’ is sometimes prone to misinterpretation. But the metaphor of the light can be used to guide one towards the intended sense of the term. The question, as I note in my piece, “isn’t merely about whether it’s hard to know whether the lights are on or off” and “isn’t merely about whether the light might be very dim,” since in both those scenarios the light might still be determinately on. Instead, the question is whether the lights could be in a halfway state between on and off. That’s a much more puzzling possibility.

Now, I agree with Eric that the metaphor invites (but does not mandate) the idea that nothing is borderline conscious. But is this a flaw of the metaphor? It’s indeed controversial whether there can be borderline consciousness. But it’s not particularly controversial that the idea of borderline consciousness is counterintuitive. In fact, Eric himself has noted that it’s “highly intuitive” that consciousness doesn’t admit of borderline cases, and that “such considerations present a serious obstacle to understanding what could be meant by ‘borderline consciousness’.” This seems to suggest that the concept of phenomenal consciousness itself invites (even if it doesn’t mandate) the impossibility of borderline consciousness.

You could reasonably argue that these intuitions against borderline consciousness aren’t decisive. Personally, I think the intuitions are tracking the truth: I favor the view that nothing is borderline conscious. But I spend a good deal of time in my piece making a case for resisting those intuitions. After all—you might think—“it’s very rare to see sharp cutoffs in nature; if you look closely enough, you’ll nearly always find shades of gray.” Even though we’re unable to conceive of borderline consciousness, perhaps we have sufficient theoretical reasons to postulate its existence. Even if hazy states of half-light strike us as obscure, perhaps we ought to attribute the obscurity to mere limits of our imagination. Just because an invitation is extended doesn’t mean that one has to take it.

But when teaching a concept, it’s often useful to elicit intuitions invited by that concept (even if those intuitions turn out to be defeasible). And if the concept of phenomenal consciousness invites a certain set of intuitions, then a metaphor for phenomenal consciousness may reasonably also invite those intuitions.

§

I’ve argued that (1) not all thoughts elicited by the metaphor are ways of thinking invited by the metaphor, and that (2) some ways of thinking invited by the metaphor are also ways of thinking invited by the concept of phenomenal consciousness itself. With these points in mind (in the room?), let me now briefly consider the other cases Eric mentioned.

It’s natural to think that the unity of consciousness is transitive, just as it’s natural to think of each illuminated room as a discrete unit. But one could argue for a view where—surprisingly—there can be overlapping subjects. The idea that conscious subjects can overlap is counterintuitive, but worth exploring. And a picture where the illuminated rooms can overlap is strange, but one that may well be worth drawing.

It's natural to think there can be differences in phenomenal character without differences in subjectivity (a point I explain in more detail in my piece). But you could favor a picture where the objects in the room are made out of light. This isn’t the most obvious way of developing the metaphor. But that strikes me as a good thing, since (as I argue elsewhere in more detail) nearly every theory of consciousness generates a natural distinction between subjectivity and phenomenal character.

What about cognition? Eric notes that it’s natural to think that the light doesn’t affect the shape of the furniture in the room. Still, there are other properties of furniture—such as color—that may very well be modulated by the light. However, the interpretive significance of all this strikes me as unclear. Cognition—whether conscious or unconscious—is a dynamic process. But the metaphor doesn’t contain any dynamic elements. This isn’t because the metaphor invites the idea that there’s no such thing as cognition. Instead, the metaphor—at least in its most basic form—is silent on questions of cognition (just as it’s silent on questions about, say, neuroanatomy).

I’ll close with one other idea invited by the metaphor. Consider illusionism about consciousness. The metaphor—trivially—invites a picture where there really is a light. So, it invites a realist way of thinking about consciousness. But according to illusionists, there isn’t really such a thing as phenomenal consciousness, at least not in the way that philosophers typically think about it. Now, an illusionist could take issue with the metaphor by saying that it invites a realist way of thinking. But most illusionists embrace the fact that they have a radical view of consciousness. Because of this, I think even illusionists can find the metaphor useful. It’s compelling to think that there really is a light. But for illusionists, there’s merely illusion, and no real illumination.

§

At the beginning of this post, I invoked a metaphor for my metaphor (a metametaphor). A metaphor—at least when used to illustrate a concept, idea, or theory—is a tool. Some tools are better than others, and some tools are ill-suited for certain tasks. Tools aren’t necessarily in competition; different tools can serve different functions. But most tools are designed with a specific function in mind. And to use a tool well, one needs to understand its designated function.

The main reason I like the metaphor of the light and the room is that I think it's a useful tool. The main task of my article is to put this tool to work in eliciting some important distinctions about the structure of consciousness. The metaphor can be misinterpreted, just as literal tools can be misused. And if a tool is systematically misused, then that may be a sign that there's a design flaw. But a good tool—when used well—can enable us to create new things that would have been hard to make without the tool. And the metaphor of the light and the room—in my opinion—is a good tool.


Monday, September 09, 2024

The Disunity of Consciousness in Everyday Experience

A substantial philosophical literature explores the "unity of consciousness": If I experience A, B, and C at the same time, A, B, and C will normally in some sense (exactly what sense is disputed) be experientially conjoined. Sipping beer at a concert isn't a matter of experiencing the taste of beer and separately experiencing the sound of music but rather of having some combined experience of music-with-beer. You might be sitting next to me, sipping the same beer and hearing the same music. But your beer-tasting experience isn't unified with my music-hearing experience. My beer-tasting and music-hearing occur not just simultaneously but in some important sense together, in a unified field of experience.

Today I want to suggest that this picture of human experience might be radically mistaken. Philosophers and psychologists sometimes allow that disunity can occur in rare cases (e.g., split-brain subjects) or non-human animals (e.g., the octopus). I want to suggest, instead, that even in ordinary human experience unity might be the exception and disunity the rule.

Suppose I'm driving absentmindedly along a familiar route and thinking about philosophy. Three types of experience might occur simultaneously (at least on "rich" views of consciousness): visual experience of the road, tactile and proprioceptive experience of my hands on the wheel and the position of my body, and conscious thoughts about a philosophical issue. Functionally, they might connect only weakly: the philosophical thoughts aren't much influenced by the visual scene, and although the visual scene might trigger changes in the position of my hands as I adjust to stay in my lane, that might be a causal relationship between two not-very-integrated sensorimotor processes. (Contrast this with the tight integration of the parts of the visual scene each with the other and the integration of the felt position of my two hands and arms.) Phenomenologically, that is to say experientially, must these experiences be bound together? That's the standard philosophical view, but why should we believe it? What evidence is there for it?

One might say it's just introspectively obvious that these experiences are unified. Well, it's not obvious to me. This non-obviousness might be easier to grasp if we carefully separate concurrent introspection from retrospective memory.

In the targeted moment, I'm not introspecting. I'm absorbed in driving and thinking about philosophy. After I start introspecting, it might seem obvious that yes, of course, I am having a visual experience together with a tactile experience together with some philosophical thoughts. But this introspective act alters the situation. I am no longer driving and thinking in the ordinary unselfreflective way. It seems at least conceptually possible that the act of introspection creates unity where none was before. Our target is not what things are like in (presumably rare) moments of explicit self-reflection, but rather in the ordinary flow of experience. Even if experiences are unified in moments of explicit reflective introspection, we can't straightaway infer that ordinary unreflective experiences are similarly unified. To move from one type of case to the other, some further argument or evidence is necessary.

The refrigerator light error is the error of assuming that some process or property is constantly present just because it's present whenever you check to see if it's present. Consider a four-year-old who thinks that the refrigerator light is always on because it's on whenever she checks it. The act of checking turns it on. Similarly, I suggest: The act of checking to see if your experience is unified might create unification where none was before. It might, for example, create a higher-order representation of yourself as conscious of this together with that; and that higher-order representation might be the very thing that unifies two previously disparate streams. Concurrent introspection cannot reveal whether your experience was unified before the act of introspective checking.

[illustration by Nicolas Demers, p. 218 of The Weirdness of the World]

Granting this, one might suggest that we can check retrospectively, by remembering whether our experiences were unified. However, this is a challenging cognitive task, for two reasons.

First, you can't do this easily at will. Normally, you won't think to engage in such a retrospective assessment unless you're already reflecting on whether your experience is unified. This ruins the test; you're already self-conscious before you think to engage in the retrospection. If you reflect retrospectively on your experience just a moment before, that experience won't be representative of the ordinary unselfconscious flow of experiences. Alternatively, you might reflect on your experiences from several minutes before, when you know you weren't thinking about the matter. But retrospective reflection over such an extended time frame is epistemically dubious: subject to large distortions due to theory-ladenness, background presupposition, and memory loss.

The best approach might be to somehow catch yourself off-guard, with a preformed intention to immediately retrospect on the presence or absence of unity. One might, for example, employ a random beeper. Such beeper methodologies are probably an improvement over more informal attempts at experiential retrospection. But (1.) even such immediately retrospective judgments are likely to be laden with error; and (2.) I've attempted this myself a few times over the past week, and the task feels difficult rather than obvious. It's difficult because...

Second, the judgment is subtle and structural. Subtle, structural judgments about our own experience are exactly the type of judgments about which -- as I've argued extensively -- people often go wrong (and about which, in conscientious moments, many people appropriately feel uncertainty). How detailed is the periphery of your visual imagery, and how richly colored, and how is depth experienced? Many introspectors find the answers non-obvious, and the answers vary widely between people independently of cognitive performance on seemingly-imagery-related tasks. Another example: How exactly do you experience the bodily components of your emotions, if there are bodily components? That is, how exactly is your current feeling of (say) mild annoyance experienced by you right now (e.g., is it partly in the chest)? Most people I've interviewed will confess substantial uncertainty when I press them for details. Although people seem to be pretty good at reporting the coarse-grained contents of their experiences ("I was thinking about Luz", "I was noticing that the room was kind of hot"), regarding structural features such as the amount of detail in our imagery or the bodily components of emotion, we are far from infallible -- indeed we are worse at such introspective tasks than we are at reporting similar mid-level structural features of ordinary objects in the world around us.

To get a sense of how subtle and structural the unity question is, notice what the question is not. The question isn't: Was there visual experience? Was there tactile/proprioceptive experience? Were there conscious thoughts about philosophy? By stipulation, we are assuming that you already know that the answer to all three is yes.

Nor is the question about the contents of those visual, tactile/proprioceptive, and cognitive experiences. Maybe those, too, are readily enough retrospectable.

Nor is the question even whether all three of those experiences feel as though they belong among the immediately past experiences of my currently unified self. Presumably they do. It doesn't follow that at the moment they were occurring, there was a unified experience of vision-with-hands-on-the-wheel-with-philosophical-thoughts. There's a difference between a unified memory now of those (possibly disunified) experiences and a memory now of those experiences having been unified then. Analogously, from the fact that there are three balls together in your hand now it doesn't follow that those balls were together a moment ago. Your memory / your hand might be bringing together what was previously separate.

The question is whether those three experiences were, a moment ago when you were engaged in unselfconscious ordinary action, experienced together as a unity -- whether there wasn't just visual experience and tactile experience and philosophical thought experiences but visual-experience-with-tactile-experience-with-philosophical-thoughts in the same unified sense that you can presumably now hold those three experience-types together in a single, unified field of consciousness. What I'm saying -- and what I'm inviting you to set yourself up (using a beeper or alarm) to discover -- is that the answer is non-obvious. I can imagine myself and others going wrong about the matter, legitimately disagreeing, being perhaps too captured by philosophical theory or culturally contingent presuppositions. None of us should probably wholly trust our retrospective judgments about this.
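If you'd like to set this up for yourself, the beeper needn't be anything fancy. Here is a minimal sketch of one (my own illustrative Python; the interval bounds and prompt wording are arbitrary choices):

    # Minimal random-beeper sketch for catching yourself off-guard.
    import random
    import time

    def beeper(min_minutes=10, max_minutes=60):
        """Sound an alert at unpredictable intervals. At each alert,
        immediately retrospect: were your just-past experiences unified,
        or merely co-occurring?"""
        while True:  # runs until interrupted
            time.sleep(random.uniform(min_minutes, max_minutes) * 60)
            print("\a BEEP: unified or merely co-occurring?")

    beeper()

The unpredictability is the point: the prompt should arrive before any self-conscious reflection has a chance to create the very unity you're checking for.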

Is there a structural, cognitive-architecture argument that our experiences are generally unified? Maybe yes. But only under some highly specific theoretical assumptions. For example, if you subscribe to a global workspace theory, according to which cognitive processes are conscious if and only if they are broadcast to a functional workspace accessible to a wide range of downstream cognitive processes, and if you hold that this workspace normally unifies whatever is being processed into a single representational whole, then you have a structural argument for the unity of consciousness. Alternatively, you might accept a higher-order theory of consciousness and hold that in ordinary cognition the relevant higher-order representation is generally a single representation with complex conjoined contents (e.g., "visual and tactile and philosophical-thought processes are all going on"). But it's not clear why we should accept such views -- especially the part after the "and" in my characterizations. (For example, David Rosenthal's higher-order account of phenomenal unity is different and more complicated.)

I'm inclined to think, in fact, that the balance of structural considerations tilts against unity. Our various cognitive processes run to a substantial extent independently. They influence each other, but they aren't tightly integrated. Arguably, this is true even for conscious processes, such as thoughts of philosophy and visual experiences of a road. Even on relatively thin or sparse views of consciousness, on which only one or a few modalities can be conscious in a moment, this is probably true; but it seems proportionately more plausible the richer and more abundant conscious experience is. Suppose we have constant tactile experience of our feet in our shoes, constant auditory experience of the background noises in our environment, constant proprioceptive experience of the position of our body, constant experience of our levels of hunger, sleepiness/energy, our emotional experiences, our cognitive experiences and inner speech, etc. -- a dozen or more very different phenomenal types all at once. You adventurously outrun the currently available evidence of cognitive psychology if you suggest that there's also constantly some unifying cognitive process that stitches this multitude together into a cognitive unity. This isn't to deny that modalities sometimes cooperate tightly (e.g., the McGurk effect). But to treat tight integration as the standard condition of all aspects of experience all the time is a much stronger claim. Sensorimotor integration among modalities is common and important, yes. But overall, the human mind is loosely strung together.

Here's another consideration, though I don't know whether the reader will think it renders my conclusion more plausible or less. I've increasingly become convinced that the phenomena of consciousness come in degrees, rather than being sharp-boundaried. If we generalize this spirit of gradualism to questions of phenomenal unity, then it's plausible that there aren't only two options -- that A, B, and C are either entirely discretely experienced or fully unified -- but instead a spectrum of cases of partial unity. Our cognitive processes of course do influence each other, even disparate-seeming ones like my philosophical thoughts and my visual experience of the road (if there's a crisis on the road, for example, philosophy drops from my mind). So perhaps our ordinary condition, before rare unifying introspective and reflective actions, involves degrees of partial, imperfect unity, rather than complete unity or complete disunity. (If you object that this is inconceivable, my reply is that you might be applying an inappropriate standard of "conceivability".)

The arguments above occurred to me only a week ago. (As it happened, I was absent-mindedly driving, thinking about philosophy.) So they haven't had much time to influence my phenomenological self-conception. But I do find myself tentatively feeling like my immediate retrospections support rather than conflict with the ideas expressed here. When I retrospect on immediately past experiences, I recall strands of this and that, not phenomenologically unified into a whole but at best only loosely joined. The introspective moment now strikes me as a matter of gathering together what was previously adjacent but not yet fully connected.

If you know of others who have expressed this idea, I welcome references.

[for helpful conversation, thanks to Sophie Nelson]