20 Doubt, ignorance and trust
On the unwarranted fears raised by the doubt-mongers
Albert Ogien

Introduction

About ignorance in itself, there is little to say, as the word only states a matter of fact: knowing something or not knowing something. But ignorance is a concept each use of which implies a particular value judgment. One form of such a judgment derives from the mere existence of rationalized and literate modern societies. In such a context, ignorance can be contrasted with scientific knowledge to elicit a discrediting effect. This is what happens in the current debate about “doubt-mongers” who challenge the reality of the facts established by science concerning, for example, climate change, GMOs or the health effects caused by tobacco or bisphenol A (BPA). This defiance has been dubbed “agnotology,” that is, the social construction of ignorance through the strategic and systematic expression of suspicion about scientific truths. This debate raises two questions: (1) does doubt in science foster ignorance? (2) does dispelling doubt in science require a proper command of scientific knowledge? These are the questions I will deal with in this chapter, first by analysing the relationship between doubt and ignorance as understood in relation to science, then by considering whether trust may serve, as often professed, as an antidote to doubt in science and with respect to scientists.

Does doubt foster ignorance?

Let us start from a somewhat rough distinction between the practice of doubt and the strategy of doubt. One can easily assume that the former is a constitutive component of the scientist’s ethos. Common sense usually attaches an attitude of rigor and humility to the concept of science. This attitude rests on two principles: (1) results must be constantly subjected to experimental verification; (2) any explanation depends on the present state of scientific knowledge and is always temporary (Chalmers 1987; Granger 1993).
In short, epistemological scepticism is held to be an essential feature of scientific activity.[1] Moreover, every scientist knows that the professional environment in which he or she works strives to continuously track and deal with issues of error, fraud, plagiarism or conflicts of interest – sometimes resorting to legal avenues to resolve the most difficult cases. In brief, the practice of doubt is an institutional fact, since it operates as a golden rule that scientists can decide to infringe upon only in the knowledge that doing so carries risk.

The strategy of doubt is something else. It is an enterprise led by people who intend to contest, out of sheer interest or, sometimes, true despair, the results of scientific research that jeopardizes economic or ideological interests or disproves an entrenched element of knowledge. The scientists associated with such a strategy are relatively few – a fact worth emphasizing, since the failure to gather a large number of scientists to deny the validity of scientific data is, in itself, sufficient proof that doubt is being exploited to serve a cause alien to the work of science. Another proof is semantic in nature. When a strategy of doubt is a real strategy – that is, when it does not amount to a pathological resistance to new data invalidating old habits of thought – it makes use of unfamiliar rhetorical means: inconsistent arguments, all-out disputes centered on ancillary or peripheral matters, plain denigration, focus on any avowed uncertainty, or ad hominem accusations (Wilholt 2013). An easy way to discriminate between a strategy of doubt and a scientific controversy is that the former regularly displays a deliberate intention to spurn any potential agreement on controversial issues. Scientists who engage in counteracting such a strategy by using the means of epistemological critique are wasting their time.
Other tools need to be used – in particular those devised to nip in the bud the innuendo spread by rumor (Kapferer 2010). This is what some scientists are already doing when they refuse to debate in the media with those colleagues who object to the results produced by, for example, the Intergovernmental Panel on Climate Change (IPCC). This attitude has even led the more fatigued among them to disregard the charges of dogmatism and censorship that their opponents formulate on behalf of the necessary practice of doubt in science. Another effective method of defusing such rumors is to openly endorse the supposedly discrediting information they circulate. Thus some scientists loudly and ostensibly proclaim the value of doubt, stating that true science never produces 100 percent accurate results – implying that those who ask for such a degree of certainty are incompetent scientists. A third way to combat a rumor is to sue those who disseminate false news for defamation or intellectual fraud. Taking such steps would certainly invalidate one of the tenets of the strategy of doubt: exploiting the scientific ethos to join the scientific debate and pervert it from within by systematically refusing to abide by its common rules.

Everyone will agree, I hope, that the practice of doubt differs from the strategy of doubt. Yet some scientists continue to sustain a confusion between them. There are three ways to do so. The first is to fight tooth and nail against claims which are deemed to be absurd. The second is adopted by those scientists engaged in the defence of rationalist thought who are prone to struggle against what they anticipate as a threat: turning citizens into easy prey for obscurantism. The third way of confusing the practice with the strategy of doubt is to assert that only science offers the solutions able to secure a viable future for the human species.
This stance is adopted by those scientists who deny a lay person’s right to criticize science – even when the disastrous technical developments science has brought about are pointed out. These three attitudes rest on the same fundamental mistake: presenting doubt as a feature external to scientific activity – though most scientists know that it is one of its key pillars.

Is trust the antidote to doubt?

One cannot tell whether doubt breeds ignorance or not unless ignorance has been clearly defined. But definitions are context-dependent (Pettit 1991). Hence, in the debate on the validity of scientific knowledge, ignorance mainly refers to the candid adherence of ordinary people to the false truths promoted by the detractors of scientific explanations. Such a definition is confusing, for no one would admit that blind assent to scientific facts is the opposite of ignorance. We unreflectively tend to believe that knowledge is the only remedy for ignorance. But this is not necessarily true. A common conviction asserts that to counteract a strategy of doubt, and win over the public to science, restoring trust in it might suffice. This is dubious, however, since trust is not something one can secure at will.[2] To understand why this is so, let us turn to the ordinary grammar of trust. I will restrict my analysis to two major kinds of usage: trust as a state and trust as an action (Ogien 2006).

Trust as a state

The key property of a state is that it has a certain degree of stability. Accordingly, one can produce a description of its components, explain the conditions of its emergence and predict its ensuing course. Such knowledge enables one to devise methods that purport to elicit trust or restore it when it is held to be missing or failing, or to sell opinion polls that measure the level of citizens’ trust in political and social institutions or of consumers’ trust in firms, products or brands.
Though trust is often presented as a statistical reality, its definition remains notably wanting, since trust can be thought of either as an independent variable (trust is explained by external factors such as beliefs or legitimacy) or as a dependent variable (beliefs or legitimacy are explained by trust). For some researchers (Carlisle et al. 2010), both approaches are erroneous because trust must be seen as an endogenous variable, since beliefs and opinions vary together according to the level of “expressed” trust (Blackburn 1998).

Statistical data on trust in science can be dismissed on another ground. One has only to recall that commissioning and publishing a survey on this topic occurs chiefly when a scandal arises (contaminated blood, HIV, asbestos, avian flu, the Chernobyl or Fukushima accidents, large-scale pollution, etc.) or when a question of general interest in which scientific knowledge is involved hits the headlines and gives rise to a political dispute. There is every reason to believe that the emotions raised and staged on these occasions significantly affect the answers given to pollsters. Hence, noticing that the level of trust in science is low when an affair, a fraud or a conflict of interest is disclosed and high in periods when nothing of the kind occurs cannot be considered surprising. But in no case do these results measure the relation between ignorance and trust.

Trust as action

Let us turn now to the uses of the word “trust” as action. It generally refers to a commitment, which can be personal or impersonal. Let us start with the former. For Annette Baier (1986), trust is a social relationship in which one party agrees to rely on the willingness (or unwillingness) of the person he or she entrusts to achieve something he or she cares for. Personal trust can therefore be summed up by this formula: A (entrusts) B to perform a subset of X, and B receives a discretionary power to do so (or not).
Engaging in a dynamic relationship of this sort implies solving a number of practical problems: how can one allow certain individuals to take care of things one finds important, and why does one avoid doing so with certain others; how can one concede to someone else the power to encroach on one’s life and possibly cause harm (while being confident that this will not be the case); and how can one exert control over what the entrusted will do to fulfil his or her commitment in one’s absence? As verification procedures concerning the moral attributes of the person to whom a discretionary power is given cannot last too long in real life, one has to rapidly give up any further inquiry (Williamson 1993; Hardin 2000; Quéré 2006). Hence the idea that entrusting is an action which requires taking two rapid and simultaneous decisions: (1) granting the entrusted person a limited discretionary power; and (2) suspending the search for any additional information that would reduce uncertainty about the successful completion of the transaction. In this way, trust entails a positive practice of ignorance, as one has to voluntarily accept being vulnerable to others while renouncing control over the person to whom delegation has been given. In other words, to trust is to deliberately accept epistemic asymmetry and its possible consequences (Harré 1999).

It is this kind of asymmetry that Russell Hardin explores – but from a different perspective than Baier’s. For Hardin, trust is a rational decision resulting from a calculation that combines two elements: the advantage obtained by an individual relying on someone else and the knowledge that the individual has about the interest the entrusted one has in taking his or her interests to heart. Hence Hardin’s instruction: “Trust should not be conceived of from the standpoint of ego’s interests but from the standpoint of alter’s interests.
It is encapsulated into ego’s judgment about the interests the entrusted might have to honor that trust” (Hardin 1993: 505). This is the meaning of his “encapsulated interest theory of trust.”

How should one apply this theory to science and scientists? In Hardin’s thesis, trust necessarily involves direct contact between two human beings. So if a relationship of trust is to be established, two conditions must be met: the person to whom the task is delegated must be reliable, and one must be certain of that reliability. However, these two conditions can never be met in the case of such a remote relationship as the one laymen may entertain with scientists, let alone with science. According to Hardin, there is no reason to think that an ordinary citizen should know whether socially distant people like civil servants or members of any institution really have his or her interests at heart. Quite the opposite, for practical as well as personal reasons. First, the tasks these agents are ordered to enact are defined by the institution which employs them and are implemented only in the service of its goals; second, these tasks are performed in order to gain an advantage in terms of advancement or promotion. Hardin concludes that the interests these agents abide by are first and foremost their own.

Hardin goes even further and claims that the many “arrangements of trust” (Karpik 1996) set up to control the work of an institution’s agents (evaluations, publications, regulations, controversies, etc.) fail to ensure the impartiality and honesty of these professionals or to establish the rationality of their decisions. Ordinary citizens are at best able to assume that these arrangements – when they are aware of them, which is seldom the case – actually do enforce the control function assigned to them.
Hardin’s conclusion is quite straightforward: those who assert they “trust” institutions – as they do in surveys – pronounce a judgment which only reports a conviction that everything is done to guarantee that science and scientists fulfil their missions appropriately. Thus, social distance ensures that ignorance does not generate doubt.

As Hardin contends, applying the word trust to the impersonal type of relationship that links an individual to a social institution is controversial. This is so because a necessary condition for the establishment of a trust relationship is missing: the party to whom delegation is made must have full freedom to do what it is committed to do, or not to do it. This condition is not met by institutions or artifacts. Any man-made object is created to achieve an end. Only writers of children’s stories, science fiction and horror novels are able to make us believe that these creations can suddenly come to life and decide to act the way they fancy. In the ordinary world, we generally do not attribute such freedom to a car, a computer, an equation, software, a train or a plane. We simply expect institutions and artifacts to perform the task for which they have been designed, and when they fail to do so, we think they are broken or no longer function properly. It is only in a metaphorical way that we allow ourselves to attribute to artifacts and institutions a property that we usually reserve for humans: intentionality. And where intentionality is absent, it is simply impossible to speak of trust. Is it reasonable to ascribe intentionality to science? This is only possible when science is viewed as an institution pursuing a single ambition served by agents who systematically behave according to its prescriptive rules. I wonder whether anyone would defend such a totalitarian view of institutions today.
Another difference between personal and impersonal trust is that people are now caught in a society which is by and large structured by scientific activity. They live in a legal and material world defined in many ways by what science and technique impose as useful or mandatory. In short, science is an institution and a power (Barber 1987), and one cannot speak of trust between two parties when one of them stands in a subordinate position to the other. Hence I surmise that confusing epistemic asymmetry with ignorance is another fundamental mistake. To clarify this point, let us turn once again to ordinary language.

The grammatical limits of the criticism of science

The concept of science, as it is generally used in ordinary language, is directly associated with the notion of truth. Therefore, “scientific facts” are antithetical to superstition and obscurantism and oppose the dogmas of religion. This ordinary use is sometimes reinforced by invoking quarrels dating back to the Enlightenment: reason against ignorance, as ignorance breeds subservience and alienation. In short, the concept of science contains within itself the permanent reminder of an ideal of rationality and emancipation. Ordinary uses show that these attributes of science are firmly established in the common lexicon. Few – apart from the militants of organizations that have turned the thwarting of science’s authority into a useful rhetorical weapon – contest science’s worth and usefulness. On the one hand, the progress associated with science is entirely obvious to the majority of the inhabitants of the planet; on the other hand, scientific data are essential to the workings of a multitude of realms of activity in modern societies. And yet, it is also true that science is intricately entangled in the implementation of bio-political decisions or in the pursuit of major economic interests (Habermas 1978). This interweaving may, at times, lead to social protest.
But one can contend that these disputes are not an indictment of the ability to use scientific methods to discover and present “scientific truths.” The criticism levelled against certain forms of scientific activity that are clearly serving questionable purposes is far from an obscurantist or irrational endeavor. A primary aim of such criticism is to question the ways in which appeals to scientific authority are used to legitimate often hazardous decisions taken by commercial entities. In this sense, rebuffing these criticisms in the name of science is an overreaction. One should admit instead that the rightful objections people raise against what they see as scientific misuses do not question science or the usefulness of the advances it offers to humanity. In other words, depicting occasional and justified doubts as an attack on truth and reason amounts to committing a category mistake (Wynne 1992). Wouldn’t it be simpler to accept that these criticisms only challenge the appropriateness of a political decision which many informed citizens see as contemptuous of ethical principles or of the commonly accepted obligations and expectations of democratic life (Wagner 2007)?

Another element of agnotology consists in contending that the dismissal of science is supported by public opinion because people have a poor level of education and training.[3] Many surveys have clearly demonstrated, though, that the only factor explaining denigration and rejection of science is commitment to a conservative party (Gauchat 2012; Hmielowski et al. 2013). This can hardly come as a surprise. We are aware of what these conservative groups fiercely advocate (creationism, free exploitation of natural resources). Sometimes a third type of criticism is also heard: it condemns the ravages of progress and champions a halt to economic growth.
But even if this third voice denounces the trappings of science, it seldom questions the rigor and utility of science itself since, most of the time, its criticisms are based on scientific data. Thus, claiming that public opinion is largely shaped by reactionary movements’ propaganda amounts to committing another fundamental mistake, for ordinary citizens in modern societies do not necessarily share all the prejudices frightened scientists suppose they have about science. Common sense tends to associate scientific activity with the obscure, repetitive and constantly peer-reviewed work in which self-sacrifice, selflessness and a spirit of discovery determine the conduct of those who perform it. And “science” still refers to an institution which brings together people (who usually wear white coats) working to improve knowledge and technology for the benefit of humankind. The institution and its agents may experience slippages (the atomic bomb, pollution, chemical waste, genetic mutations, climate change), but it is accustomed to regularly digesting and correcting them without its vocation being truly affected. I contend that no one can seriously believe that doubts distilled by some scientists or industry groups could be a real threat to science, or that questioning partial results produced in a specific area of research could lead people to think that science and scientists are useless or dangerous. A last fundamental mistake consists in publicly contending that such opportunistic challenges to scientific activity are attacks which may potentially jeopardize the principles of rationality attached to science, the benefits of knowledge and the ideal of emancipation and progress it bears. Trust in science is not just a matter of opinion that can be cleared up by showing that 63 percent of the population is confident in what scientists do.
When conceived of in terms of a relationship between ordinary citizens and the institutions that govern them, trust sums up under a single word a situation in which an expedient balance of power is actually observed. As far as science is concerned, preserving this relationship would require enhancing citizens’ control over the technical implementation of scientific research or developing the legal means available to voice their criticisms. It would also depend on their capability to negotiate the limits of the discretionary power granted to scientists to set the pace of progress or to decide the level of impunity to be afforded to those of them who knowingly disseminate false information.

Conclusion

Doubt should not be mistaken for the outcome of propaganda, as it is a constitutive feature of scientific activity. Ignorance should not be seen as necessarily inducing irrational outcomes, since one may act rationally without knowing that one does so. Trust should not be taken as an antidote to doubt in science, simply because our ordinary conception of science cannot be jeopardized by casual (and legitimate) criticisms of some of its spurious or endangering developments. There is an entrenched belief that people’s ignorance cannot but bear fatal consequences – in political, moral or scientific terms. I have given some reasons why these fears are mostly unwarranted. Such a claim should not be brushed aside as naïve. It is founded on an analysis demonstrating that it is useless to think that citizens’ opposition to powers and institutions can be fought simply by campaigns designed to restore trust (in either the government or science). In fact, in a democratic regime the best way to take people’s defiance into consideration would be to renounce the temptation to keep citizens away from decisions that concern them.
An efficient antidote to the putative effects attributed to people’s ignorance would then consist of reducing as much as possible “epistemic injustice.”[4] This is a task that could easily be dealt with by handing over exhaustive information to citizens and organizing as much public debate as possible to allow a larger sharing of responsibility for any decision taken in the name of society’s common good.

Notes

[1] This epistemological scepticism must be distinguished from Peirce’s conception of doubt as the starting point for inquiry. See Chauviré (1995) and Meyers (1967).
[2] O’Neill (2002) recalls that the more one endeavors to be trusted, the more one elicits defiance.
[3] A historical analysis is offered by Bensaude-Vincent (2013). See also Roberts & Reid (2011).
[4] To use the notion introduced by Fricker (2006).

References

Baier, A. (1986) “Trust and Anti-trust”, Ethics, 96(2): 231–260.
Barber, B. (1987) “Trust in Science”, Minerva, 25(1/2): 123–134.
Bensaude-Vincent, B. (2013) L’opinion publique et la science, Paris, La Découverte.
Blackburn, S. (1998) “Trust, Cooperation and Human Psychology”, in V. Braithwaite & M. Levi (eds.), Trust and Governance, New York, Russell Sage Foundation.
Carlisle, J. E., Feezell, J. T., Michaud, K., Smith, E. R. & Smith, L. (2010) “The Public’s Trust in Scientific Claims Regarding Offshore Oil Drilling”, Public Understanding of Science, 19(5): 514–527.
Chalmers, A. (1987) Qu’est-ce que la Science? Paris, La Découverte.
Chauviré, C. (1995) Peirce et la signification, Paris, PUF.
Fricker, M. (2006) Epistemic Injustice, Oxford, Oxford University Press.
Gauchat, G. (2012) “Politicization of Science: A Study of Public Trust in the United States, 1974 to 2010”, American Sociological Review, 77(2): 167–187.
Granger, G. G. (1993) La Science et les sciences, Paris, PUF (Que sais-je?).
Habermas, J. (1978) La technique et la science comme idéologie, Paris, Denoël-Gonthier.
Hardin, R. (1993) “The Street-Level Epistemology of Trust”, Politics and Society, 21(4): 505–529.
Hardin, R. (2000) “Conceptions and Explanations of Trust”, in D. Gambetta (ed.), Trust in Society, New York, Russell Sage Foundation.
Harré, R. (1999) “Trust and Its Surrogates: Psychological Foundations of Political Process”, in M. E. Warren (ed.), Democracy and Trust, Cambridge, Cambridge University Press.
Hmielowski, J., Feldman, L., Myers, T., Leiserowitz, A. & Maibach, E. (2013) “An Attack on Science? Media Use, Trust in Scientists and Perceptions of Global Warming”, Public Understanding of Science [DOI: 10.1177/0963662513480091].
Kapferer, J. N. (2010) Rumeurs: le plus vieux média du monde, Paris, Le Seuil (Points).
Karpik, L. (1996) “Dispositifs de confiance et engagements crédibles”, Sociologie du travail, 38(4): 527–550.
Meyers, R. G. (1967) “Peirce on Cartesian Doubt”, Transactions of the Charles S. Peirce Society, 3: 13–23.
Ogien, A. (2006) “Eléments pour une grammaire de la confiance”, in A. Ogien & L. Quéré (eds.), La confiance et ses moments, Paris, Economica.
O’Neill, O. (2002) A Question of Trust, Cambridge, Cambridge University Press.
Pettit, P. (1991) “Realism and Response-dependence”, Mind, 100: 587–626.
Quéré, L. (2006) “La confiance comme engagement”, in L. Quéré & A. Ogien (eds.), La confiance et ses moments, Paris, Economica.
Roberts, M. R. & Reid, G. (2011) “Causal or Spurious? The Relationships of Knowledge and Attitudes to Trust in Science and Technology”, Public Understanding of Science, 22(5): 624–641.
Wagner, W. (2007) “Vernacular Science Knowledge: Its Role in Everyday Life Communication”, Public Understanding of Science, 16: 7–22.
Wilholt, T. (2013) “Epistemic Trust in Science”, British Journal for the Philosophy of Science, 64(2): 233–253.
Williamson, O. E. (1993) “Calculativeness, Trust and Economic Organization”, Journal of Law and Economics, 36: 453–486.
Wittgenstein, L. (1969) On Certainty, Oxford, Blackwell.
Wynne, B. (1992) “Misunderstood Misunderstanding: Social Identities and Public Uptake of Science”, Public Understanding of Science, 1(3): 281–304.