Billy Wheeler

Read Online: https://rdcu.be/cZGc1

Informational structural realism (ISR) offers a new way to understand the nature of the “structure” that structural realists claim our best scientific theories get right about the world. According to Luciano Floridi, who has given the most detailed formulation of ISR so far, this structure is composed of information representing binary differences. In this paper I assess whether ISR offers a good way to resolve the tension between the no-miracles argument (often taken to support scientific realism) and the pessimistic meta-induction (often taken to support antirealism). With regard to this important motivation for structural realism, I argue that ISR faces insurmountable difficulties. Nevertheless, I agree that interpreting “structure” in terms of information can be profitable for the realist, and I offer a new version of ISR that borrows from algorithmic information theory. The result is a more robustly realist version of ISR.

Read the article for free at Springer: https://rdcu.be/cQ8Eo
Objects appear to causally interact with one another in virtual worlds, such as video games, virtual reality, and training simulations. Is this causation real or is it illusory? In this paper I argue that virtual causation is as real as physical causation. I achieve this in two steps: firstly, I show how virtual causation has all the important hallmarks of relations that are causal, as opposed to merely accidental, and secondly, I show how virtual causation is genuine according to one influential metaphysical theory of causation: the mechanistic approach.
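To make the kind of interaction at issue concrete, here is a minimal sketch, with all names and values my own illustrative assumptions rather than anything from the paper, of an in-game mechanism in which one object's motion regularly and counterfactually depends on another's striking it:

```python
# A minimal sketch (not from the paper) of an in-game mechanism: the
# target ball moves because the cue ball strikes it, a regular,
# counterfactual-supporting interaction of the kind the mechanistic
# approach can count as causal. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Ball:
    position: float
    velocity: float

def step(a: Ball, b: Ball, dt: float = 1.0) -> None:
    """Advance the world one tick; on contact, an equal-mass elastic
    collision swaps the two balls' velocities."""
    a.position += a.velocity * dt
    b.position += b.velocity * dt
    if a.velocity > b.velocity and a.position >= b.position:
        a.velocity, b.velocity = b.velocity, a.velocity

cue, target = Ball(0.0, 1.0), Ball(3.0, 0.0)
for _ in range(5):
    step(cue, target)
print(target.velocity)  # 1.0: had the cue not struck it, it would still be at rest
```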

Read online: https://rdcu.be/cGoe5
Machines, instruments and tools offer many benefits, but according to the classical Daoists, Laozi and Zhuangzi, they can also interfere with living a life in harmony with nature. Despite this, the Zhuangzi offers numerous stories of individuals who use technologies whilst exemplifying the virtues of a sage, although how this is achieved is not well understood. I examine two recent interpretations and argue that they are problematic on both philosophical and interpretative grounds. In their place I offer a new solution based on comparing Zhuangzi with recent studies of the effects of the internet on the way we think.
That many of our most successful scientific theories involve one or more idealizations poses a challenge to traditional accounts of theory confirmation. One popular response amongst scientific realists is the "Improvement Model of Confirmation": if tightening up one or more of the idealizations leads to greater predictive accuracy, then this supports the belief that the theory's inaccuracy is a result of its idealizations and not because it is wrong. In this article I argue that the improvement model is deeply flawed and that therefore idealizations continue to undermine "success-to-truth" arguments for scientific realism.
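A toy worked example (my illustration, not one from the article) of the improvement model: for a simple pendulum, partially relaxing the small-angle idealization adds an amplitude correction, and the corrected law tracks the period more closely, which the realist reads as evidence that the residual error stems from the idealization rather than from the theory being false.

```python
# Tightening an idealization improves predictive accuracy: a toy
# illustration with the simple pendulum (all numbers are assumptions).
import math

L, g = 1.0, 9.81                     # pendulum length (m), gravity (m/s^2)
theta = math.radians(30)             # release amplitude: not actually "small"
T0 = 2 * math.pi * math.sqrt(L / g)  # fully idealized, small-angle period

# Stand-in for the measured period: the standard amplitude expansion
# carried to fourth order in theta.
T_obs = T0 * (1 + theta**2 / 16 + 11 * theta**4 / 3072)

T_improved = T0 * (1 + theta**2 / 16)  # idealization partially relaxed

print(abs(T_obs - T0))          # error of the fully idealized law (~0.035 s)
print(abs(T_obs - T_improved))  # much smaller error after de-idealizing (~0.0005 s)
```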
Humans are becoming increasingly dependent on the 'say-so' of machines, such as computers, smartphones, and robots. In epistemology, knowledge based on what you have been told is called 'testimony', and being able to give and receive testimony is a prerequisite for engaging in many social roles. Should robots and other autonomous intelligent machines be considered epistemic testifiers, akin to humans? This chapter attempts to answer this question and to explore the implications of robot testimony for the criminal justice system. There is little agreement about the types of agents that can provide testimony. The chapter surveys three well-known approaches and shows that on two of them the ability to provide testimony is bound up with the possession of intentional mental states. Through a discussion of computational and folk-psychological approaches to intentionality, it is argued that a good case can be made for robots fulfilling all three definitions.
We are becoming increasingly dependent on robots and other forms of artificial intelligence for our beliefs. But how should the knowledge gained from the "say-so" of a robot be classified? Should it be understood as testimonial knowledge, similar to knowledge gained in conversation with another person? Or should it be understood as a form of instrument-based knowledge, such as that gained from a calculator or a sundial? There is more at stake here than terminology, for how we treat objects as sources of knowledge often has important social and legal consequences. In this paper, I argue that at least some robots are capable of testimony. I make my argument by exploring the differences between instruments and testifiers on a well-known account of knowledge: reliabilism. On this approach, I claim that the difference between instruments and testifiers as sources of knowledge is that only the latter are capable of deception. As some robots can be designed to deceive, so they too should be recognized as testimonial sources of knowledge.
Is it possible to gain knowledge about the real world based solely on experiences in virtual reality? According to one influential theory of knowledge, you cannot. Robert Nozick's truth-tracking theory requires that, in addition to a belief being true, it must also be sensitive to the truth. Yet beliefs formed in virtual reality are not sensitive: in the nearest possible world where P is false, you would have continued to believe that P. This is problematic because there is increasing awareness from philosophers and technologists that virtual reality is an important way in which we can arrive at beliefs and knowledge about the world. Here I argue that a suitably modified version of Nozick's sensitivity condition is able to account for knowledge from virtual reality.
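For reference, Nozick's truth-tracking analysis can be stated as follows (a standard formalization; the closing gloss on virtual reality is mine):

```latex
% B_S p abbreviates "S believes that p"; the box-arrow is the
% subjunctive conditional "if it were the case that ...".
\[
S \text{ knows that } p \iff
\begin{cases}
  p \text{ is true}\\
  B_S\, p\\
  \neg p \mathrel{\Box\!\!\rightarrow} \neg B_S\, p & \text{(sensitivity)}\\
  p \mathrel{\Box\!\!\rightarrow} B_S\, p & \text{(adherence)}
\end{cases}
\]
% Beliefs formed in virtual reality fail sensitivity: in the nearest
% worlds where p is false, the simulation can present the same
% appearances, so S still believes that p.
```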
The algorithmic theory of laws claims that the laws of nature are the algorithms in the best possible compression of all empirical data. This position assumes that the universe is compressible and that the data received from observing it can be reproduced using a simple set of rules. However, there are three sources of evidence that suggest the universe as a whole is incompressible. The first comes from the practice of science; the other two come from the nature of the universe itself, namely the presence of chaotic behavior and the nature of quantum systems. This paper evaluates these sources and argues that none provides a convincing case for rejecting the algorithmic theory of laws.
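The compressibility assumption is easy to illustrate (a sketch of mine, not the paper's): rule-governed data admits a description far shorter than itself, while algorithmically random data does not.

```python
# Lawlike vs. random "observations": only the former compress, i.e. only
# the former are reproducible from a short rule.
import os
import zlib

regular = bytes(i % 7 for i in range(10_000))  # rule-governed data
random_ = os.urandom(10_000)                   # algorithmically random data

print(len(zlib.compress(regular, 9)))  # tiny: a short "law" regenerates the data
print(len(zlib.compress(random_, 9)))  # ~10,000 or more: no compression, no law
```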
Computer models and simulations have provided enormous benefits to researchers in the natural and social sciences, as well as many areas of philosophy. However, to date, there has been little attempt to use computer models in the development and evaluation of metaphysical theories. This is a shame, as there are good reasons for believing that metaphysics could benefit just as much from this practice as other disciplines. In this paper I assess the possibilities and limitations of using computer models in metaphysics. I outline the way in which different kinds of model could be useful for different areas of metaphysics, and I illustrate in more detail how agent-based models specifically could be used to model two well-known theories of laws: David Lewis's "Best System Account" and David Armstrong's "Nomic Necessitation" view. Some logically possible processes cannot be simulated on a standard computing device. I finish by assessing how much of a threat this is to the prospect of metaphysical modeling in general.
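As a deliberately tiny sketch of what such a model might look like, with every detail (the ring of agents, the copying rule) an assumption of mine rather than the paper's: agents follow one fixed local rule, and we then test a candidate generalization against the simulated Humean mosaic of events.

```python
# A toy agent-based model for theories of laws: simulate a world from one
# local rule, then check a candidate regularity against the whole history.
def run_world(steps: int = 50) -> list[list[int]]:
    """Agents on a ring; each tick every agent copies its left neighbour."""
    state = [0, 1, 0, 0, 1, 1, 0, 1]
    history = [state]
    for _ in range(steps):
        state = [state[i - 1] for i in range(len(state))]
        history.append(state)
    return history

history = run_world()

# Candidate regularity: x_i(t+1) = x_{i-1}(t), for every agent i and time t.
holds = all(
    history[t + 1][i] == history[t][i - 1]
    for t in range(len(history) - 1)
    for i in range(len(history[0]))
)
print(holds)  # True: the generalization is exceptionless across the mosaic
```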
It has been argued that the use of big data in scientific research obviates the need for causal knowledge in making sound predictions and interventions. Whilst few accept that this claim is true, there is an ongoing discussion about what effect, if any, big data has on scientific methodology, and in particular, the search for causes. One response has been to show that the automated analysis of big data by a computer program can be used to find causes in addition to mere correlations. However, up until now it has only been demonstrated how this can be achieved with respect to difference-making causes. Yet it is widely acknowledged that scientists need evidence of both “difference-making” and “production” in order to infer a genuine causal link. This paper fills in the gap by outlining how computer-assisted discovery in big data can find productive causes. This is achieved by developing an inference rule based on a little-known causal process theory: the information transmission account.
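One rough, hypothetical way to convey the flavour of such a rule (the paper's actual inference rule is not reproduced here) is to treat a link as productive when information about the candidate cause's earlier state is recoverable from the effect's later state, measured as positive mutual information at a time lag:

```python
# Hypothetical sketch: infer "production" from information transmitted
# between a candidate cause and a lagged effect. All setup is illustrative.
import math
import random
from collections import Counter

def mutual_information(xs: list[int], ys: list[int]) -> float:
    """Mutual information (in bits) between two equal-length discrete series."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

random.seed(0)
cause = [random.randint(0, 1) for _ in range(5000)]
noise = [random.randint(0, 1) for _ in range(5000)]
effect = [0] + cause[:-1]  # the effect carries the cause's state one tick later

print(round(mutual_information(cause[:-1], effect[1:]), 3))  # ~1.0 bit: transmission
print(round(mutual_information(cause[:-1], noise[1:]), 3))   # ~0.0 bits: none
```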
It has been argued that the fundamental laws of physics do not face a 'problem of provisos' equivalent to that found in other scientific disciplines (Earman, Roberts and Smith 2002) and there is only the appearance of exceptions to physical laws if they are confused with differential equations of evolution type (Smith 2002). In this paper I argue that even if this is true, fundamental laws in physics still pose a major challenge to standard Humean approaches to lawhood, as they are not in any obvious sense about regularities in behaviour. A Humean approach to physical laws with exceptions is possible, however, if we adopt a view of laws that takes them to be the algorithms in the algorithmic compressions of empirical data. When this is supplemented with a distinction between lossy and lossless compression, we can explain exceptions in terms of compression artefacts present in the application of the lossy laws.
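A toy example (mine, not the paper's) of an exception as a compression artefact: a lossy 'law' keeps only a simple rule and discards the residuals, and wherever a discarded residual is large the law appears to admit an exception, even though the underlying rule has not failed.

```python
# Lossy compression of "observations" by a one-line rule; the large
# residual shows up as an apparent exception to the law.
data = [1.02, 2.01, 2.97, 4.00, 5.04, 5.99, 7.10]  # observed values
reconstruct = list(range(1, 8))                    # lossy law: "the nth value is n"

# Residuals discarded by the compression; where one is large, the law
# seems to have an exception, though the deviation is an encoding artefact.
exceptions = [(n, x) for n, x in zip(reconstruct, data) if abs(x - n) > 0.05]
print(exceptions)  # [(7, 7.1)]
```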
It is often said that the best system account of laws (BSA) needs supplementing with a theory of perfectly natural properties. The 'strength' and 'simplicity' of a system are language-relative, and without a fixed vocabulary it is impossible to compare rival systems. Recently a number of philosophers have attempted to reformulate the BSA in an effort to avoid commitment to natural properties. I assess these proposals and argue that they are problematic as they stand. Nonetheless, I agree with their aim, and show that if simplicity is interpreted as 'compression', algorithmic information theory provides a framework for system comparison without the need for natural properties.

Keywords: laws of nature; best system account; natural properties; algorithmic information theory; invariance theorem.
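The invariance theorem listed in the keywords is what underwrites the language-independence claim. In its standard form, for any two universal machines U and V there is a constant that depends on the machines but not on the data:

```latex
% Invariance theorem of algorithmic information theory: K_U(x) is the
% Kolmogorov complexity of x relative to universal machine U.
\[
  \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V} \qquad \text{for all strings } x,
\]
% so rival systems' compressed lengths can be compared up to an additive
% constant, with no privileged (natural-property) vocabulary required.
```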
This new study provides a refreshing look at the issue of exceptions and shows that much of the problem stems from a failure to recognize at least two kinds of exception-ridden law: ceteris paribus laws and ideal laws. Billy Wheeler offers the first book-length discussion of ideal laws. The key difference between these two kinds of laws concerns the nature of the conditions that need to be satisfied and their epistemological role in the law's formulation and discovery. He presents a Humean-inspired approach that draws heavily on concepts from the information and computing sciences. Specifically, Wheeler argues that laws are best seen as algorithms for compressing empirical data and that ideal laws are needed as 'lossy compressors' for complex data.

Major figures in the metaphysics of science receive special attention such as Ronald Giere, Bas van Fraassen, Nancy Cartwright, David Lewis and Marc Lange. This book is essential reading for philosophers of science and will interest metaphysicians, epistemologists and others interested in applying concepts from computing to traditional philosophical problems.

Book available to purchase at: https://www.springer.com/gp/book/9783319995632