The Methods of Science
For many people today the challenge to religious belief arises not from any conflict of
content between science and religion but from the assumption that the scientific method is the
only road to knowledge. Thus the concern for methodological issues, found among both
scientists and theologians, has far-reaching implications for the outlook of modern man.
Examination of present-day interpretations of the methods of science will provide a basis for
comparison in subsequent chapters with the methods of religion, about which there has also
been significant recent thought. Illuminating similarities as well as striking differences in the
epistemological approaches of the two fields will be evident. But any such comparison must
rest on a clear understanding of the character of the scientific enterprise itself, to which we
devote this chapter. We will then be in a better position to assess the strengths and limitations
of science, and the roles of the subject (knower) and the object (known) in scientific
knowledge.
At the outset it should be stated that there is no “scientific method,” no formula with five
easy steps guaranteed to lead to discoveries. There are many methods, used at different stages of
inquiry, in widely varying circumstances. The clear, systematic schemes of the logicians or of
the science teacher’s lectures may be far removed from the ad hoc procedures and circuitous
adventures of the man on the frontier of research. But we can at least note certain broad
features characteristic of scientific thought. Since the author is a physicist, illustrations will
be drawn largely from his own field.
In the work of Galileo, Newton, and Darwin we have seen the distinctive combination of
experimental and interpretive elements. The experimental component is comprised of
observations and data, the products of the experimental side of science. The interpretive
component includes the concepts, laws, and theories that constitute its theoretical side. A
highly idealized procedure would start with observations, from which tentative hypotheses
would be formulated, whose implications could be tested experimentally. These experiments
would lead to the construction of a more complete theory, which in turn would suggest new
experiments resulting in modifications and extensions of the theory. In practice, however, the
two components cannot be so clearly separated nor the logical steps so neatly distinguished.
For one thing, there are no uninterpreted facts. Even in the act of perception itself, the
irreducible “data” given are not, as Hume claimed, isolated patches of colour or other
fragmentary sensations, but total patterns in which interpretation has already entered. We
organize our experience in the light of particular interests, and we attend to selected features.
So, too, scientific activity never consists in simply “collecting all the facts”; significant
experimentation requires a selection of relevant variables and a purposeful experimental
design dependent on the questions that are considered fruitful and the problems that have
been formulated. “Observations” are always abstractions from our total experience, and they
are expressed in terms of conceptual structures. The processes of measurement, as well as the
language in which results are reported, are influenced by prior theories. Each stage of
investigation presupposes many principles that for the moment are taken for granted. Thus all
“data” are, as Hanson puts it, already “theory-laden.”1
1
Norwood R. Hanson, Patterns of Discovery (Cambridge: Cambridge University Press, 1958), Chap. 1.
Though the data of science are never “bare facts,” they are always data of the public
public world. In some cases they may be obtained from observation and description, and in
others from controlled experimentation and exact quantitative measurement. They are
“publicly verifiable,” not because “anyone” could verify them, but because they represent the
common experience of the scientific community at a given time. For there is always an
interpretive component present. A doctor sees an X-ray plate differently from someone
without medical training. Galileo saw a pendulum as an object with inertia which almost
repeats its oscillating motion, whereas his predecessors had seen it as a constrained falling
object which slowly attains its final state of rest. The line between “observation” and
“theory,” then, is not sharp; the distinction is pragmatic and shifts with the advance of science
and with differing immediate purposes in inquiry.
The theoretical component of science consists of laws and theories whose separate terms
we will call concepts. “Mass,” “acceleration,” and “pressure” are not directly observable, and
they are not given to us by nature. They are mental constructs used to interpret observations;
they are symbols that help us to organize experience. The links between theoretical concepts
and experimental observations have been termed “rules of correspondence,”2 “epistemic
correlations,”3 or “coordinating definitions.”4 For some concepts these rules of
correspondence may be very direct and simple, as for example the association of “length”
with the result of a particular measuring operation. For other concepts, such as “energy” or
“neutron,” rules of correspondence may be more complex. And some concepts, such as the
“wave-function” of quantum mechanics, are linked only indirectly, through other concepts that
in turn correspond to observable events.
Laws are correlations between two or more concepts that are closely related to
observables. They represent the systematic ordering of experience, the attempt to describe
observations in terms of regular patterns. These may be put in the form of graphs, equations,
or verbal expressions of interrelations between concepts, and they have varying degrees of
generality and abstraction. Kepler’s Laws of planetary motion and Galileo’s equations of
motion relating time, distance, and acceleration may be considered prototypes of such laws.
Another example is Boyle’s Law, which states that for a given quantity of gas (such as the
air trapped in a bicycle tire pump) the pressure is inversely proportional to the volume (for
example, if the volume is reduced by a factor of 2, the pressure doubles). Associated with
2
Henry Margenau, The Nature of Physical Reality (New York: McGraw-Hill Book Co., 1950; PB); Ernest
Nagel, The Structure of Science (New York: Harcourt, Brace and World, 1961).
3
Filmer S. C. Northrop, The Logic of the Sciences and Humanities (New York: The Macmillan Co., 1947;
Meridian PB).
4
Hans Reichenbach, The Rise of Scientific Philosophy (Berkeley: University of California Press, 1951; PB).
laws are statements about their limiting conditions and scope (in Boyle’s Law, the
temperature must be constant and the pressure not so great that the gas is near liquefaction).
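Boyle’s inverse relation can be put in a few lines of code. The following sketch is not part of the original text; the pump figures are illustrative numbers, and the function name is our own:

```python
# Boyle's Law: at constant temperature, the product P * V is constant
# for a fixed quantity of gas, so pressure varies inversely with volume.

def boyle_pressure(p1, v1, v2):
    """Pressure after an isothermal change of volume from v1 to v2."""
    return p1 * v1 / v2

# Air in a bicycle-tire pump: 100 kPa at 500 cm^3, compressed to 250 cm^3.
# Halving the volume doubles the pressure.
print(boyle_pressure(100.0, 500.0, 250.0))  # 200.0
```

Note that the code, like the law itself, holds only within the stated limiting conditions (constant temperature, gas far from liquefaction).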
Laws need not imply relationships which could be spoken of as causal. Many laws
(including the example above) express concurrent variation or functional dependence with no
implication that changes in one variable are the “cause” of changes in another. Some laws are
statistical in character. Because laws are correlations between concepts which are closely
associated with observables, they are often called “experimental laws”; but it should be
remembered that they always go beyond the experimental data. A law formulates a universal
relation, which allows the derivation of values not given in the original data, though it is
based on observation. As Nagel states:
None of the customary examples of experimental laws are in fact about sense data, since
they employ notions and involve assumptions that go far beyond anything directly given
to sense. …Reports of what are commonly regarded as experimental observations are
frequently couched in the language of what is admittedly some theory.5
Finally, theories are unified and generalized conceptual schemes from which laws can be
derived. Compared to laws, theories are further from direct observation and are more
comprehensive, connecting greater ranges of phenomena with higher generality. Because
coherent structures of concepts usually involve new ways of looking at phenomena, their
development reflects greater creativity and originality. A theory is constructed from laws, but it
is never simply a restatement of those laws, and often a theory leads to the discovery of new
laws. Thus from Newton’s theory of gravitation, Kepler’s Laws could be deduced, but the
former has much greater generality, since it applies also to the moon and to objects on the
earth. In order to account for Boyle’s Law and other laws relating the pressure, volume,
temperature, and combining ratios of gases, the kinetic theory was later developed, in which
gases were assumed to consist of colliding elastic particles (the so-called “billiard-ball
model”). But the kinetic theory also accounted for other laws and led to unanticipated
discoveries concerning viscosity, diffusion, heat conduction, and so forth. Among theories of
great generality which we shall describe later are the quantum theory and the theory of
evolution.
5
Nagel, The Structure of Science, pp. 81-82. Used by permission of Harcourt, Brace and World, Inc., and
Routledge & Kegan Paul, Ltd.
How are theories formed? The inductive ideal, for which Bacon, Hume, and Mill were
spokesmen, depicts science as generalizing from particular experimental sequences to
universal patterns. Recurrent uniformities in oft-repeated experiments, followed by “simple
enumeration” and comparison (for example, “concomitant variation”), are supposed to lead
directly to general laws. We will neglect for the moment the problem of how such
generalizations can be justified as a basis for predicting the future, and whether induction
depends on philosophical assumptions about “the uniformity of nature.” Viewed simply as a
description of what scientists do, this account seems unsatisfactory. The mere amassing of
data or cataloguing of facts does not produce a scientific theory. But new concepts and
abstract interpretive constructions do enable us to see coherent patterns of relationship among
the data. Often the introduction of new assumptions, idealizations (“frictionless planes”) or
concepts (Galileo’s “acceleration”) permit novel ways of representing phenomena.
Theoretical terms are mental constructs which may be suggested by the data but are never
given to us directly by nature. They have a status logically different from that of the data, and
hence offer a type of explanation that no mere summary of the data could achieve. The
empiricist tradition has never adequately represented the role of concepts and theories in
science.
The deductive ideal6 emphasizes the process of reasoning in the opposite direction,
namely the derivation of verifiable observation statements from general theories (taken with
rules of correspondence). This approach has the virtue of recognizing the difference in logical
status between theories and observations, which is overlooked in the inductive approach. The
deductive pattern is, as we shall see, a plausible portrayal of the way theories are tested, but it
throws little light on the process with which the inductive pattern at least tries to deal: the
initial formation of a theory. As Hanson says: “Physicists do not start from hypotheses; they
start from data. By the time a law has been fixed into a hypothetico-deductive system, really
original physical thinking is over.”7
Although inductive and deductive ideals accurately portray certain aspects of scientific
activity, they omit from their accounts the leap of creative imagination. There is a logic for
testing theories but no logic for creating them; there are no recipes for making original
discoveries. Even attempts to identify scientific creativity in terms of specific abilities or
6
See Karl R. Popper, The Logic of Scientific Discovery (New York: Basic Books, 1959; Science Editions PB);
Richard B. Braithwaite, Scientific Explanation (Cambridge: Cambridge University Press, 1953; Harper PB).
7
Hanson, Patterns of Discovery, p. 70.
character traits have had limited success.8 But one can at least look at important discoveries
in the past, though their circumstances were highly diverse. Many creative ideas have
occurred unexpectedly in an intuitive flash, as in the case when Archimedes shouted
“Eureka” in his bath. Darwin had read Malthus on human population pressure, but was
preoccupied with other things when it suddenly struck him that a similar concept would
provide the key to evolution; the idea of natural selection was born. “I can remember,” he
recalls, “the very spot in the road, whilst in my carriage, when to my joy the solution
occurred to me.” Poincaré’s classic essay describes how several crucial ideas came to him
“spontaneously” during periods of relaxation when he had temporarily abandoned a
problem.9 We must remember that for each of these men there had been long periods of
previous preparation, discipline, and reflection on the problem; and of course such sudden
inspirations must subsequently be tested, since many “flashes of insight” turn out to be
wrong. But the actual origin of the novel idea in these instances was sudden and unexpected,
and appears to have been the product of the subconscious mind, in which there is a
remarkable fluidity of image combinations and freedom to break from established schemes.
New theories have often arisen from novel combinations of ideas previously entertained
in isolation. Koestler and Ghiselin10 suggest that creative imagination in both science and
literature is frequently associated with the interplay between two conceptual frameworks. It
involves the synthesis of a new whole, the reordering of old elements into a fresh
configuration. Often it arises from the perception of an analogy between apparently unrelated
situations. Newton connected two very familiar facts: the fall of an apple and the revolution of
the moon. Darwin saw an analogy between population pressure and the survival of animal
species. We will analyse in the next section the systematic function of analogies and models
in science. Here we point out the parallel between scientific and artistic creativity. A
metaphor in poetry arises from a new connection between previously separated areas of
experience, a “transaction between two contexts” in which one element influences the way a
second is seen.11 In the work of both artist and scientist, Bronowski suggests, there is an
8
See, e.g., the papers and extensive bibliography in C. W. Taylor and F. Barron, eds., Scientific Creativity: Its
Recognition and Development (New York: John Wiley and Sons, 1963).
9
Henri Poincaré, “Mathematical Creation,” in his Foundations of Science, trans. G. Bruce Halsted (New York:
The Science Press, 1913). See also Jacques Hadamard, Essay on the Psychology of Invention in the
Mathematical Field (Princeton, N.J.: Princeton University Press, 1945; Dover PB); W.I.B. Beveridge, The Art
of Scientific Investigation (New York: W. W. Norton & Co., 1950; Modern Library PB), Chap. 6.
10
Arthur Koestler, The Act of Creation (New York: The Macmillan Co., 1964); Brewster Ghiselin, ed., The
Creative Process (Berkeley: University of California Press, 1952; Mentor PB).
11
Jerome Bruner, On Knowing: Essays for the Left Hand (Cambridge: Harvard University Press, 1963;
Athenaeum PB); Max Black, Models and Metaphors (Ithaca, N.Y.: Cornell University Press, 1962).
aesthetic delight in the coherence of form and structure in experience, and an enjoyment of
pattern in diversity.12 Campbell has written:
For it has been admitted that though discovery of laws depends ultimately not on fixed
rules but on the imagination of highly gifted individuals, this imaginative and personal
element is much more prominent in the development of theories; the neglect of theories
leads directly to the neglect of the imaginative and personal element in science. It leads to
an utterly false contrast between “materialistic” science and the “humanistic” studies of
literature, history, and art. … What I want to impress on the reader is how purely personal
was Newton’s idea. His theory of universal gravitation, suggested to him by the trivial fall
of an apple, was a product of his individual mind, just as much as the Fifth Symphony
(said to have been suggested by another trivial incident, the knocking at a door) was a
product of Beethoven’s.13
The diversity of mental operations in scientific inquiry thus cannot be reduced to any
single ideal type. In the derivation of simple empirical laws, induction predominates, but even
here the scientist does more than merely summarize the data. In the formation of new theories,
creative imagination transcends any process of strictly logical reasoning. In the testing of
theories, deduction is prominent; but in place of any simple “empirical verification” we will
defend the relevance of a variety of criteria.
There are three criteria by which a theory may be evaluated: its agreement with
observations, the internal relations among its concepts, and its comprehensiveness. The first
criterion concerns the relation to data that are reproducible within the scientific community.
Empirical agreement is a crucial property of any acceptable theory. Toulmin14 refers to a theory as an
“inference ticket,” a technique for inferring observable relationships, which can then be
tested. From a theory alone it is possible to deduce laws; from laws plus given initial
conditions (together with rules of correspondence) it is possible to deduce relations among
observables, which can be compared to data obtained in the past or expected in the future. For
example, from the laws of planetary motion plus data about the present positions of sun and
moon one can calculate the expected time of the next eclipse, and the prediction can then be
checked by observation.
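The deductive chain sketched here (theory to law, law plus initial conditions to a checkable observable) can be illustrated in code. The sketch below substitutes Kepler's third law for the eclipse calculation, which would be far longer; the planetary figures are standard textbook values, and the function name is our own:

```python
# Kepler's third law in convenient units: T^2 = a^3, with the period T
# in years and the semi-major axis a in astronomical units (AU).
# From the law plus an initial condition (a planet's mean distance from
# the sun) one deduces an observable (its orbital period), which can
# then be compared with astronomical records.

def orbital_period_years(a_in_au):
    """Orbital period predicted by Kepler's third law."""
    return a_in_au ** 1.5

# Mars: mean distance about 1.524 AU; the observed period is about
# 1.88 years, in agreement with the deduction.
print(round(orbital_period_years(1.524), 2))  # 1.88
```

The rule of correspondence is implicit in the units: the symbol T is tied to a particular observation, the time between successive oppositions of the planet.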
12
J. Bronowski, Science and Human Values (New York: Julian Messner, Inc., 1956; Harper PB), Chap. 1.
13
Norman Campbell, What is Science? (London: Methuen & Co., 1921; Dover PB), pp. 97, 102.
14
Stephen Toulmin, The Philosophy of Science (London: Hutchinson University Library, 1953; Harper PB).
The second criterion refers to the relations among theoretical concepts. Consistency and
coherence mean respectively the absence of logical contradictions and the presence of what
Margenau calls “multiple connections” among concepts within the internal structure of a
particular theory, or with those of other theories believed to be valid. Simplicity signifies the
smallest number of independent assumptions (for example, the Copernican theory was
simpler than the Ptolemaic in requiring fewer assumptions which were ad hoc—that is, not
derivable from the fundamental structure of the theory). But simplicity has other nuances
which are notoriously difficult to define; Cohen and Nagel say it includes “an incalculable
aesthetic element,”15 and many scientists speak of the “elegance” of a theory. Coherence,
order, symmetry, and simplicity of formal structure are sought. In the origins of Einstein’s
theory of relativity, new experiments (including those of Michelson and Morley) did not play
the determinative part most accounts have pictured; his quest was rather for the symmetry of
frames of reference in electromagnetism, and he used only experimental facts that had been
known for fifty years.16 Again, the dissatisfaction expressed by physicists concerning the
large number of apparently unrelated “elementary particles” discovered during the 1950s, and
the search for some systematic order among them, is testimony to the rationalistic ideal
among scientists, along with their empirical ideal. These “internal” criteria applied within a
theoretical system are not, of course, adequate alone, since a set of concepts may be self-
consistent but unrelated to the world.
A third group of criteria deals with the comprehensiveness of a theory. This includes its
initial generality, or ability to show underlying unity in apparently diverse phenomena.
Fruitfulness or fertility (the value of a theory for suggesting new hypotheses, laws, concepts,
or experiments) is close to Margenau’s “extensibility” and Toulmin’s “deployability.” Usually
such extension arises from the refinement or development of a theory. For example, the early
kinetic theory of gases assumed elastic particles of negligible size, and it was a simple
modification to make allowance for the finite size of the particles and to assume forces
between them; thereby the discrepancies between the behaviour of gases at high pressures
and the predictions obtained from Boyle's Law could be accounted for.
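The refinement described (finite particle size plus forces between particles) is embodied in the van der Waals equation. The sketch below is our illustration, not part of the original text; it uses commonly tabulated constants for carbon dioxide to show the corrected prediction agreeing with the ideal-gas (Boyle) form at low density but diverging from it at high pressure:

```python
# Van der Waals equation: (P + a*n^2/V^2) * (V - n*b) = n*R*T,
# where b corrects for the finite size of the molecules and a for the
# attractive forces between them.  At low density it reduces to the
# ideal-gas form; at high pressure the two predictions diverge.

R = 0.08314                      # gas constant, L·bar/(mol·K)
A_CO2, B_CO2 = 3.640, 0.04267    # van der Waals constants for CO2

def ideal_pressure(n, v, t):
    """Ideal-gas (Boyle-consistent) pressure for n moles in volume v (L)."""
    return n * R * t / v

def vdw_pressure(n, v, t, a=A_CO2, b=B_CO2):
    """Van der Waals pressure, correcting for size and attraction."""
    return n * R * t / (v - n * b) - a * n * n / (v * v)

# One mole of CO2 at 300 K: in 10 L the two predictions nearly agree;
# in 0.5 L (high pressure) the corrections become significant.
for v in (10.0, 0.5):
    print(v, round(ideal_pressure(1, v, 300), 2), round(vdw_pressure(1, v, 300), 2))
```

The extension required no new theory, only a modification of the original assumptions, which is just the kind of fertility the text describes.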
It should be emphasized that the comparison of theory with experiment is often very
indirect. A whole network of ideas is always tested at once. Margenau speaks of “circuits of
15
Morris Cohen and Ernest Nagel, An Introduction to Logic and Scientific Method (New York: Harcourt, Brace
& Co., 1934), p. 215.
16
P. Schilpp, ed., Albert Einstein: Philosopher-Scientist (Evanston, Ill.: Library of Living Philosophers, 1949),
p. 53.
verification” because it is often necessary to reason from a set of observations through a
matrix of interlocking concepts—some of them far removed from anything observable—
before one is able to draw any inferences tied in again to observations. Moreover, as Copi
points out,17 it is never possible to test an individual hypothesis in a “crucial experiment.”
Only a group of hypotheses and assumptions can serve as premises for a deduction; and if the
deduction is not confirmed experimentally, one can never be sure which of the hypotheses
and assumptions is in error. (One can defend a hypothesis in the face of any given
experimental result by rejecting some other assumption in the group, though beyond a certain
point one may have to introduce so many special ad hoc assumptions that simplicity suffers.)
In practice one usually works in the framework of “accepted” theories, and throws all the
doubt on one new hypothesis at a time. But even this can yield no “crucial experiment” in any
absolute sense, since well-accepted theories have been overthrown, and the hidden
assumptions may be just the ones that should have been questioned. “The structure of science
grows in an organic fashion; . . . the notion that scientific hypotheses, theories or laws are
wholly discrete and independent is a naïve and outdated view.” 18 The process of testing is
contextual and involves constellations of concepts and theories.
No theory can be proven to be true. The most that can be said for a theory is that
it is in better agreement with the known data and is more coherent and comprehensive than
alternative theories available at the moment. There may be other theories which will in the
future meet those criteria as well or better. All formulations are tentative and subject to
revision; certainty is never achieved. The chemist Arrhenius received the Nobel prize for his
electrolytic theory of dissociation; the same prize was given later to Debye for showing the
inadequacies of Arrhenius’s theory. The concept of parity (mirror symmetry), long accepted as a
fundamental principle of nuclear structure, was undermined in 1956. On logical grounds, one
can say that at least one hypothesis of a group is false if from the group one can deduce
conclusions that disagree with experiment; but one cannot say they are true if the deduced
conclusions agree with experiment, for another group of hypotheses might lead to the same
conclusions.19 Only rarely can one on mathematical or theoretical grounds limit the number
of possible rival theories (another indication of the importance of theoretical as well as
experimental considerations in science). Yet obviously in many cases
we can have considerable confidence that a theory is a reasonably good approximation. After
17
Irving M. Copi, “Crucial Experiments,” in E. H. Madden, ed., The Structure of Scientific Thought (Boston:
Houghton Mifflin Co., 1960).
18
Ibid., p. 33.
19
See Northrop, The Logic of the Sciences and Humanities, pp. 146f.
all, predictions from nuclear theory, that under certain conditions a rapid chain reaction
would occur, were confirmed; in the New Mexico desert, the bomb went off.
Some philosophers who recognize the impossibility of any final “empirical verification”
have developed modified forms of empiricism. Carnap and Reichenbach20 advocated
calculating the probability that a theory is valid, that is, the ratio of the confirmed deductions
from the theory to the total number of possible deductions from it. But in practice the latter
can never be specified, because a theory has an indefinite range of consequences. Popper21
proposed that even though theories are never verifiable, they must in principle be falsifiable.
choosing between two theories, he said that the scientist should use the one yielding the
greatest number of deductions that could conceivably be proved false experimentally; for if
such a theory survives empirical testing, he can have greater confidence in it. However, we
would reply that in practice an experimental discrepancy, even though it always “counts
against” a theory, does not have this absolute power to overthrow it, particularly if there are
no alternative theories available. Observations discordant with an accepted theory are more
likely to be dismissed as anomalies or unexplained deviations, or set aside for later study,
rather than taken to falsify the theory.22
Even such a modified empiricism, then, fails to include the variety of criteria that
influence the scientist’s outlook. We must simply acknowledge that, particularly in
comparing alternative theories of wide generality, the criteria we have listed may not yield
any clear-cut conclusion. Frank states:
We have learned by a great many examples that the general principles of science are not
unambiguously determined by the observed facts. If we add requirements of simplicity
and agreement with common sense, the determination becomes narrower, but it does not
become unique…. There is never only one theory that is in complete agreement with all
observed facts, but several theories that are in partial agreement. We have to select the
final theory by a compromise. The final theory has to be in fair agreement with observed
fact and must also be fairly simple. If we consider this point it is obvious that such a
“final” theory cannot be “The Truth”…. After application of all these criteria, there
remains often a choice among several theories.23
20
Rudolf Carnap, Logical Foundations of Probability (Chicago: University of Chicago Press, 1950); Reichenbach,
The Rise of Scientific Philosophy, Chap. 14.
21
Popper, The Logic of Scientific Discovery; also Karl R. Popper, Conjectures and Refutations (New York: Basic
Books, 1962).
22
Some examples are given in Polanyi, Personal Knowledge, pp. 148-58.
23
Philipp Frank, Philosophy of Science (Englewood Cliffs, N.J.: Prentice-Hall, Inc.; PB), pp. 355, 359.
The element of personal judgement enters in the evaluation of the data, the estimation of
simplicity and generality, and the relative importance ascribed to different criteria. Such
assessment occurs not explicitly in abstract discussion but implicitly in practice, especially in
the face of new and controversial hypotheses.
We will urge, finally, that the goal of science is to understand nature, and that the
empirical confirmation of predictions is only one element in the testing of theories. By
contrast, some empiricists assign a central role to prediction; coherence and
comprehensiveness are then justified only because they contribute to the attainment of
agreement with observations. If prediction is the goal, these other criteria are at best practical
maxims introduced for the sake of ease of manipulation or economy of thought. But if
understanding is the goal (intellectual control rather than practical control), then coherence
and comprehensiveness are integral to the aims of inquiry.
Let us consider the claim that explanation is logically equivalent to prediction. Hempel 24
says that the scientist's goal is to show that an event (whether past or future) is an instance of
a general law (that is, that the event can be deduced from the law plus information about
previous conditions). Explaining a past event, he writes, is always equivalent to showing that
it could have been predicted from its antecedents. This view has been challenged on a number
of grounds. For example, Scriven25 points out that the theory of natural selection is an
accepted scientific explanation, yet few people would claim that from it one could have
predicted the course of evolution. On the other hand, one might from past experience make a
reliable prediction (for example, that radio disturbances will follow a solar flare) which
would not count as an explanation, since no intelligible reasons for the occurrence of the
predicted events are offered.26 The law “red sky in the morning, rain by evening” would,
even if always valid, provide no explanation of rain. A scientist would have no greater
interest than other men in a crystal ball which predicts all events; such an inscrutable but
accurate prognosticator would have great practical but no scientific value.
24
Carl G. Hempel and Paul Oppenheim, “The Logic of Explanation,” in H. Feigl and M. Brodbeck, eds., Readings
in the Philosophy of Science (New York: Appleton-Century-Crofts, 1953).
25
Michael Scriven, “Explanation and Prediction In Evolutionary Theory,” Science, Vol. 130 (1959), 477.
26
See also Israel Scheffler, The Anatomy of Inquiry (New York: Alfred A. Knopf, 1963), pp. 43f. Several papers
representative of the current discussion, and a good bibliography, can be found in B. Baumrin, ed., Philosophy
of Science: The Delaware Seminar (New York: Interscience Publishers, 1963), Vol. 1 and 2.
Even though laws allow predictions to be made, it is theories that have explanatory force
because of the intelligibility they yield. They provide a type of explanation or understanding
that even the most complicated prediction-formula lacks. Theories display an extensibility to
new types of phenomena which is not found among laws. Moreover, the scientist is not
satisfied with predictive laws until he gains insight into theoretical structures that can
account for their success. The intellectual satisfaction that theories provide is a product of
rational as well as empirical components. Toulmin points out that the Babylonians could
make very precise predictions in astronomy from mathematical time-series tables, worked
out by trial and error with no theoretical basis; they “acquired great forecasting-power, but
they conspicuously lacked understanding,” for the explanatory power of a theory lies in the
ideas which make patterns of relationships intelligible:
The central aims of science lie in the field of intellectual creation; other activities
(diagnostic, classificatory, industrial, or predictive) are properly called “scientific” from
their connection with the explanatory ideas and ideals which are the heart of natural
science…. The central aims of science are, rather, concerned with a search for
understanding, a desire to make the course of Nature not just predictable but intelligible,
and this has meant looking for rational patterns of connections in terms of which we can
make sense of the flux of events.27
In similar vein, Hanson describes science as a search for pattern: “Physics is a search for
intelligibility. Only secondarily is it a search for new objects and facts.”28
In summary, scientific inquiry is a complex process with experimental and theoretical
components inextricably interwoven. The formation of theories depends on the logical
process of inductive generalization and on the creative originality of human imagination. In
evaluating theories, both the empirical criterion of agreement with observations and the
rational criteria of coherence and comprehensiveness are present. The primary goal of science
is intellectual understanding; control is a secondary consideration. This is the broad structure
of scientific methodology, whose distinctive characteristics we must now scrutinize.
II. THE SCIENTIFIC COMMUNITY AND ITS LANGUAGE
Some of these neglected features have been pointed out in recent studies in the history of
science and the sociology of science. Others are evident in the writings of scientists
themselves. It will be apparent below that science is a very human enterprise and shares
many characteristics with other activities in which men engage. We will consider the role of
the scientific community, the symbolic language it employs, and the models and analogies by
which it interprets the world. This will provide the basis for comparison in a later chapter
with the role of the religious community and its models and analogies.
1. The Role of the Scientific Community
The presence of this community has always been essential to the progress of science. The
Royal Society and the French Academy were important factors in the rise of science.
Communicability is one of the attributes of scientific knowledge, and the imposition of
secrecy, whether by government or industry, is antithetical to its growth. Communication is
today achieved primarily through journals and professional meetings which are the main
channels for the reporting of results and the stimulation of new work. The organization of
science is highly complex, enmeshed in the structures of government, industry, and
education. It has its own chains of other institutions.
The scientific community, like any group in society, has a set of attitudes which are
influenced by but not identical with those of the culture at large. Schilling gives a vivid
portrayal:
It has its own ideals and characteristic way of life; its own standards, mores, conventions,
signs and symbols, language and jargon, professional ethics, sanctions and controls,
authority, institutions and organizations, publications; its own creeds and beliefs,
orthodoxies and heresies-and effective ways of dealing with the latter. This community is
affected, as are other communities, by the usual vagaries, inadequacies, and shortcomings of
human beings. It has its politics, its pulling and hauling, its pressure groups; its differing
schools of thought, its divisions and schisms; its personal loyalties and animosities,
jealousies, hatreds, and rallying cries; its fads and fashions.29
“Unorthodox” views may be rejected by the scientific community (as hypnotism was for
many years) or ignored (as the question of extrasensory perception is today by most
psychologists) or tolerated with disapproval (as osteopathy has been by the medical
profession).
It is this set of attitudes and traditions that holds the scientific community together. “Its
members,” writes Polanyi, “recognize the same set of persons as their masters and derive
from this allegiance a common tradition, of which each carries on a particular strand.”30 Joint
acceptance of these beliefs, and the presence of common loyalties and commitments, make
self-government possible, so that the authority of the community’s consensus and of
particular prerogatives, such as those of a journal editor, is voluntarily acknowledged rather
than externally imposed. Conant31 points out that coordination of individual research
activities takes place for the most part informally through the interaction of the individual and
the community. Preparation for a career in science involves not just memorizing information
and acquiring skills, but coming to share attitudes by participating in the life of a particular
29
Harold K. Schilling, “A Human Enterprise,” Science, Vol. 127 (1958), 1324. See also his Science and Religion,
Chap. 4.
30
Polanyi, Personal Knowledge, p. 163.
31
James B. Conant, On Understanding Science (New Haven, Conn.: Yale University Press, 1947), Chap. 1.
community. This absorption of standards and presuppositions is one result of the research
apprenticeship every doctoral candidate undergoes.
In addition, the scientists in a given field share patterns of expectation and conceptions
of regularity and intelligibility that govern their work. We noted earlier that as “standard
cases” for discussing motion, Aristotle took familiar objects for which there is considerable
resistance (for example, a cart pulled by a horse). By contrast, Galileo and Newton used
idealized frictionless motion as the standard in terms of which to analyse actual situations;
they saw continued uniform motion, rather than coming to rest, as natural and self-
explanatory (needing no further explanation). Toulmin shows that such “explanatory
paradigms” determine what we take to be “problems,” what we see as “facts,” and what we
consider to be satisfactory explanations:
Science progresses, not by recognizing the truth of new observations alone, but by
making sense of them. To this task of interpretation we bring principles of regularity,
conceptions of natural order, paradigms, ideals, or what-you-will: intellectual patterns
which define the range of things we can accept (in Copernicus’ phrase) as “sufficiently
absolute and pleasing to the mind.”32
According to Toulmin, these changing explanatory ideals are empirical in only a very
broad way, since they cannot be directly confronted with the results of observations. They
prove their worth over a longer period of time, and serve rather as “preconceived notions” for
the individual scientist in most of his work.
T. S. Kuhn has given historical documentation to a similar thesis that the authority of a
scientific community supports a particular set of assumptions by means of its paradigms.33
Paradigms are “standard examples” of past scientific work which are accepted by a given
group of scientists at a given time. These are the prevailing examples used in textbooks, and
by learning them students acquire concurrently the theoretical concepts, experimental
methods, and norms of the field. Paradigms also guide the group’s research, for they
implicitly define what sorts of question may legitimately be asked, what techniques are
fruitful, and what types of solution are admissible. Most scientific endeavour is carried on
within the framework of such a “received tradition” which defines the kinds of explanation
to be sought (thus when Newton’s laws were a paradigm,
32
Toulmin, Foresight and Understanding, p. 81. Used by permission of Indiana University Press.
33
Thomas S. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962; PB); for a
reply to Kuhn’s thesis, see Dudley Shapere in Philosophical Review, Vol. 73 (1964), 383.
explanations were sought in terms of forces and corpuscular motions). The tradition
influences the concepts through which the scientist sees the world, the expectations by which
his work is governed, and the language he uses.
Kuhn suggests that the rare occurrence of a major change of paradigms produces such
far-reaching effects that it can be called a scientific revolution. (Among his examples are
Copernican astronomy, Newtonian physics, Lavoisier’s discovery of oxygen, and Einstein’s
relativity.) A new paradigm requires the overthrow of the old, not just an addition to previous
theories. The familiar data are seen in an entirely new way and the old terms acquire altered
meanings. Kuhn compares these changes with the shift in a visual gestalt (for example, when
a sketch of the outside of a box viewed from below is suddenly seen as the inside of a box
viewed from above). For a brief period, adherents of two different paradigms may be
competing for the allegiance of their colleagues; Kuhn claims that the choice between them
is not determined by the criteria of ordinary research:
Though each may hope to convert the other to his way of seeing his science and its
problems, neither may hope to prove his case. The competition between paradigms is not
the sort of battle that can be resolved by proofs.... Before they can hope to
communicate fully, one group or the other must experience the conversion that we have
been calling a paradigm shift. Just because it is a transition between incommensurables,
the transition between competing paradigms cannot be made a step at a time, forced by
logic and neutral experience. Like the gestalt switch, it must occur all at once (though not
necessarily in an instant) or not at all.34
There is no higher authority than the scientific community for making such decisions
between paradigms, Kuhn concludes. Of course the ordinary criteria (empirical fit,
intellectual beauty, and so forth) contribute to the choice; but they do not determine it
unequivocally, especially in the early stages when the new paradigm has not been developed
or applied extensively. Often the new conceptual structure entails an altered estimation as to
what kinds of problems are significant, and this cannot be settled by logic alone. Scientists
legitimately resist revolutions, since their previous commitments have permeated their
thinking. Sometimes: a new view is completely accepted only when the older generation has
died off or has been “converted” to it. Thus the choice between competing paradigms is
34
Kuhn, The Structure of Scientific Revolutions, pp. 147, 149. Copyright 1962 by the University of Chicago Press.
See also his essay, “The Function of Dogma in Scientific Research,” in A. C. Crombie, ed., Scientific Change
(New York: Basic Books, 1963).
neither totally arbitrary and subjective, on the one hand, nor completely determined by
systematic rules, on the other. It is a choice which ultimately only the scientific community
itself can make. Hence the corporate context of scientific inquiry is not just a fact of interest
to sociologists and historians, but a feature that should be taken into account in the analysis of
methodology.
It would be easy to dismiss Kuhn's thesis as applicable only to the past history of science;
are not theories today on firmer ground, and unlikely to be replaced by new ones? Kuhn
would reply that to each generation a set of paradigms seems securely established, and only
in retrospect are its limitations evident. Again, one may object, does not the new set of
concepts in a scientific revolution incorporate all that was valid in the old; does it not account
for all the previous evidence, and more besides? Does not the new and more inclusive theory
often treat the old as a special limiting case, as the equations of Einstein's relativity reduce to
Newton’s laws for objects moving at low velocities? But, says Kuhn, a revolution entails the
rejection of the old, not simply the addition of the new; the concepts used by Einstein and by
Newton (mass, velocity, and so forth) do not have the same meanings.
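The limiting-case relation invoked in this objection can be written out. As an illustrative sketch (supplied here, not part of the original argument), the relativistic expression for momentum reduces to the Newtonian one when the velocity v is small compared with the speed of light c:

```latex
p \;=\; \frac{mv}{\sqrt{1 - v^{2}/c^{2}}}
  \;=\; mv\left(1 + \tfrac{1}{2}\frac{v^{2}}{c^{2}} + \cdots\right)
  \;\approx\; mv \qquad (v \ll c).
```

Kuhn’s point survives the algebra: even where the formulas agree numerically at low velocities, the symbol m does not mean the same thing in the two theories, since in relativity mass is bound up with energy through E = mc².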
Now we would suggest that Kuhn has overemphasized the arbitrary character of
paradigm shifts. A paradigm may in practice function in a unitary way to guide a research
tradition; but in reflecting about it we must try to differentiate its various components, for
they are evaluated in varying ways. For example, Newtonian mechanics, as the paradigm of
classical physics, included a group of specific concepts and theories (which are subject to the
criteria previously discussed); it also expressed assumptions as to what constitutes a
satisfactory explanation or a promising research method (the fruitfulness of these
assumptions throughout physics-indeed, throughout the sciences-is relevant here); in addition
it indirectly transmitted certain very general presuppositions about nature (lawfulness,
regularity, intelligibility-which we will consider in the next chapter). Although Kuhn
legitimately attacks the view that science is strictly cumulative, he fails to indicate that even
in a revolution, many features of previous tradition are retained after the shift. Most of the
data obtained and many of the prevailing methods and assumptions are carried over, and
there is progress, though seldom in a straight line. Kuhn's writing (like that of Hanson,
Toulmin, and Polanyi) represents a salutary reaction against the positivism that formerly
dominated the philosophy of science, but he perhaps gives undue weight to the subjective,
relativistic, and communal features which earlier accounts ignored.
2. The Symbolic Character of Scientific Language
Every symbol aims to represent its referent, but no symbol is able to portray all of the
features of the referent; hence, it is obliged to omit one or more of them. Given any symbol,
therefore, one may infer the referent, since the symbol resembles it, but not all of the
referent since the symbol is an abstraction. … Since the human mind is incapable of
grasping any event in all of its configurations, certain of its relations are more or less
arbitrarily neglected and are not included in the resulting symbol. As a consequence, every
symbol is abstract in its representations of nature; it loses some of nature and hence is not
strictly adequate as a representative.35
Thus the language of every community of inquiry is abstractive and selective and replaces
complex experiences by symbolic constructs and diagrammatic sketches of those aspects in
which it is interested. In physics problems an elephant on a slippery river bank becomes a
mass with a coefficient of friction, and a Beethoven symphony becomes a set of molecular
vibrations. When a field of study can thus abstract single factors for investigation, it can be
more exact; but its schematic representation of limited aspects is further from the total
situation of life, and from the immediacy and variety of human experience with all its levels
of meaning. The purposes in inquiry determine the kind of symbolic scheme developed.
In the case of atomic physics the relation of the scientific symbolism to the reality
represented is extraordinarily indirect. Here abstract mathematical equations give only the
probability that particular experimental results will occur when given operations are
performed on an atom; no visualizable picture of what the atom might be like in itself is
provided. The abandonment of picturability is one of the striking features of modern physics.
Micro-nature seems to be a different kind of reality from the world of everyday experience;
our usual categories are apparently not applicable, so we must use a highly abstract
symbolism. The atomic world is not only inaccessible to direct observation, and inexpressible
in terms of the senses; we are unable to imagine it. There is a radical disjunction between the
way things behave and every way in which we try to visualize them, as we shall see in
Chapter 10. For example, in some experiments we may picture electrons as waves and in
others as particles, but there seems no consistent way of imagining what an electron is like in
itself.
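The indirectness just described can be made concrete with a standard formula of elementary quantum mechanics (an illustration added here, not drawn from the text): the wave function ψ of an electron is not itself observable; only its squared amplitude enters the predictions, as the probability of finding the particle between positions a and b:

```latex
P(a \le x \le b) \;=\; \int_{a}^{b} \lvert \psi(x) \rvert^{2} \, dx .
```

Since ψ is in general complex-valued, no picture drawn from everyday experience corresponds to it; only statistical statements of this form are compared with experimental results.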
In discussing creativity we indicated that scientific concepts have often arisen from the
exploitation of analogies. Let us define an analogy as an observed or postulated similarity
between two situations. (Two entities are defined as similar if some of their characteristics are
the same and others are different; the similarity may be one of form, function, or property.) As
an aid to inquiry, analogy is the extension of patterns of relationship drawn from one area of
experience to coordinate other types of experience.
Analogies and models have unquestionably been a fruitful source of scientific theories.36
The wave theory of light was developed largely by analogy with the wave properties of
sound. Mechanical models were common in nineteenth-century science; Lord Kelvin asserted
that a person does not really understand something until he has a mechanical model of it. But
the dangers in the use of models also became evident, particularly the tendency to
“overextend” them by assuming that all characters of the analogue will be present in the new
situation. Thus the analogy of light waves with sound waves, which was so useful at one
stage, led to the fruitless search for the “ether,” the assumed medium of propagation; two
systems manifesting a resemblance in many properties were erroneously believed to share
another property. Moreover, since theories were held to be literal descriptions of reality, it
was assumed that the object under study was just like the model. It was forgotten that (1)
analogies are only similarities in some but not all characters, (2) models only suggest possible
hypotheses, which must then be tested experimentally, and (3) theories are symbolic and
selective representations.
The dangers in the use of models led some authors to view them as only temporary
psychological aids in the formation of theories. Duhem37 urged that models should be used
with caution and discarded as soon as possible. The ideal theory, he said, would be a
36
E. Farber, “Chemical Discoveries by Means of Analogies,” Isis, Vol. 41 (1950), 20; M. B. Hesse, “Models in
Physics,” British Journal for the Philosophy of Science, Vol. 4 (1953), 198.
37
Pierre Duhem, The Aim and Structure of Physical Theory, trans. P. Wiener (Princeton, N.J.: Princeton
University Press, 1954), Pt. I, Chap. 4.
mathematical formalism without any interpretation by a model. This position was associated
with the positivist assertion that theories are summaries of data and not representations of
reality. When the abstract probability functions of quantum theory replaced the Bohr model
of the atom, there seemed added evidence that one should try to get along without
visualizable models. If sets of equations can correlate observations and permit predictions to
be made, why keep models from which misleading conclusions might be derived?
But there have also been vigorous defenders of the use of models. Campbell,38 replying to
Duhem, affirmed that a model goes beyond a formula both in providing an intellectually
satisfying interpretation and in suggesting new ways in which a theory might be extended.
Max Black39 points out that models use language drawn from a domain which is already
familiar; also a model is vivid and grasped as a whole, whereas a set of formulas is too
complex and abstract to provide this immediacy and unity of comprehension. Moreover,
models are often extensible; no one can say in advance when a model may still serve a useful
function or be developed further. We noticed that the kinetic theory of gases, based on the
billiard-ball model, could account for Boyle's Law. From the fact that Boyle's Law does not
hold at high pressures one might have argued that the model was limited and should be
discarded. Instead it was an extension of the model which was fruitful; consideration of the
finite size of the assumed particles, and the forces between them, allowed the derivation of
van der Waals’ equation for the behaviour of gases at high pressures. And it was the model,
not the formalism, which enabled additional phenomena (gas viscosity, heat conduction) to
be explained.
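The fruitful extension described in this paragraph can be stated in equations. As a sketch (the standard textbook forms, supplied here for illustration): Boyle’s law asserts that for a fixed quantity of gas at constant temperature the product of pressure and volume is constant, while van der Waals’ equation for one mole corrects the ideal-gas law PV = RT for the finite volume b of the molecules and their mutual attraction a:

```latex
PV = \text{const} \quad (T \text{ fixed}), \qquad
\left(P + \frac{a}{V^{2}}\right)\left(V - b\right) = RT .
```

At low pressures V is large, both correction terms become negligible, and the van der Waals form reduces to the ideal-gas law from which Boyle’s law follows; at high pressures the corrections account for the observed deviations.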
In a careful discussion of models,40 Hesse maintains that in general an analogue has some
features which at any given time are known to be similar to the phenomenon, some which are
dissimilar, and a third group of characteristics whose possible similarity is uncertain; the latter
often give clues for new hypotheses to be tested. Moreover, observed similarities may
indicate possible ways of interpreting previously uninterpreted terms in a formalism. When
the wave theory of light was being developed, it was not evident with which observable
characteristics the amplitude and the frequency of the assumed light wave should be
associated. But the analogy between brightness of light and loudness of sound (which was
already known to correspond to amplitude) and between colour and pitch (which corresponds
38
N. R. Campbell, Physics, the Elements (Campbell: Cambridge University Press, 1920; Dover PB entitled
Foundations of Science), Chap. 4.
39
Black, Models and Metaphors, Chap. 13.
40
Mary B. Hesse, Analogies and Models in Science (New York: Sheed and Ward, 1963), Chap. 2.
to frequency) suggested interpretations which further data supported. Toulmin writes: “It is in
fact a great virtue of a good model that it does suggest further questions, taking us beyond the
phenomena from which we began, and tempts us to formulate hypotheses which turn out to
be experimentally fertile.”41 Nagel defends not only the pragmatic value of models, but also the
contribution they make to the unity of science through emphasis on similarities between areas
of inquiry:
It would be a mistake to conclude, however, that once the new theory has been formulated
the model has played its part and has no further function in the use made of a theory... It
may lead to suggestions concerning directions to be followed in fresh areas of
experimental inquiry, and for clues as to how the formulations of experimental laws need
to be modified so as to enlarge the scope of their valid application.... From this perspective
an analogy between an old and a new theory is not simply an aid in exploiting the latter
but a desideratum many scientists seek to achieve in the construction of explanatory
systems.42
The lesson to be learned from the mistakes of nineteenth-century physicists is not that models
must be discarded, but that they must not be interpreted literally. We will see that some of the
confusion about the “wave-particle dualism” arose from failure to note the analogical use of
the terms “wave” and “particle” in describing the behaviour of electrons. An analogy is never
a total identity or a comprehensive description, but only a simplified comparison of aspects.
The particular type of analogy that formerly dominated science, namely mechanical and
visualizable models, has proven inadequate. Quantum physics represents the atom by wave
functions that cannot be visualized; but even such abstract symbolic systems involve the use
of analogies (for example, Heisenberg’s matrix mechanics was analogous to the Fourier
analysis of the harmonics of waves), though their concepts are only indirectly related to
experimental data and to the categories of everyday experience.
We must now discuss more explicitly how this symbolic and analogical language used by
the scientific community is related to the world and to the experimental data on which it is
based. We consider several current viewpoints before attempting to show how an
understanding of the symbolic character of language might allow us to combine what is valid
in these various schools of thought.
41
Toulmin, The Philosophy of Science, p. 37.
42
Nagel, Structure of Science, pp. 112, 114. Used by permission of Harcourt, Brace & World, Inc., and
Routledge & Kegan Paul, Ltd.
III. THE RELATION OF SCIENTIFIC CONCEPTS TO REALITY
What is the status of scientific laws, theories, and concepts? How is the language of
science related to the subject who uses it and the object it purports to represent? We have
indicated that until this century, most scientists assumed a simple realism in which theories
were conceived as exact replicas of the world. By contrast, some of the concepts of twentieth-
century physics are only very indirectly related to observations, and cannot be considered
literal representations of objects as they are in themselves. It will be instructive to examine
carefully four philosophical interpretations; the difference between them may seem to be a
somewhat technical matter, but one’s conclusions will influence one’s view of science and its
relation to religion. In positivism, a theory is viewed as a summary of data; in instrumentalism,
a theory is a useful tool; in idealism, a theory is a mental structure; and in realism, it is a
representation of the world.
1. Theories as Summaries of Data (Positivism)
The empiricist tradition, going back to Bacon, Hume, and Mill, has placed emphasis on
the observational side of science. Mach, Russell (at one stage), Pearson, and Bridgman are
among those who have seen concepts and theories as summaries of data, labor-saving mental
devices for classifying observations. “Atoms,” “electrons,” and “molecules” are merely
convenient categories for summarizing and simplifying laboratory data; theoretical concepts
are formulae for giving resumes of experience. They lead to economy of thought, but since
they do not themselves designate anything capable of direct observation they are not to be
considered real. Karl Pearson wrote:
Either the atom is real, that is, capable of being a direct sense impression, or else it is ideal,
that is, a purely mental conception by the aid of which we are enabled to formulate natural
laws…. To no concept, however invaluable it may be as a means of describing the
routine of perceptions, ought phenomenal existence to be ascribed until its perceptual
equivalent has actually been disclosed.43
There have been two main variants of positivism. To phenomenalists, data mean sense-
data, and all verifiable propositions must be translatable into statements about sense-
impressions. Russell (prior to 1927) tried to develop ways of reducing all scientific
43
Karl Pearson, The Grammar of Science, 3rd ed. (New York: The Macmillan Co., 1911; Meridian PB), pp. 96,
212.
propositions to statements about sensory awareness; if the term “atom” is a function of sense-
data, it should be replaceable by the latter whenever it occurs. The physicalist version (for
example, in Neurath and the early Carnap) requires the translation of all conceptual
statements into “thing-language,” that is, statements about observable physical things. For
Bridgman, all concepts must be operationally defined, that is, measured by specifiable
laboratory procedures. Impressed by the
way relativity theory had undermined common sense ideas of length and time, he urged the
identification of concepts with performable experimental operations: “The concept is
synonymous with the corresponding set of operations."44
Elsewhere we will criticize the way positivism (as an interpretation of science) was
extended into logical positivism (as a philosophy stressing the “verification principle” and the
rejection of metaphysics, ethics, and theology). Here it can be noted that even the attempt to
translate all scientific sentences into sense-data language was never successfully carried out,
and partial attempts produced unmanageable systems.45 We have maintained, in any case, that
man starts not from bare, separate sense-data but from patterns of experienced relationships
in which interpretation is already present. We have suggested that reports of scientific “data”
are always “theory-laden,” for there are no uninterpreted facts, and all language is selective,
abstractive, and symbolic. There is no “neutral observation language” devoid of
interpretation. We have already had occasion to criticize the view that theories originate in a
process of induction or “summarization of data”; this portrayal simply does not correspond to
the thought-processes of the creative scientist.
The attempt to eliminate all conceptual terms was as difficult to defend in principle as to
carry out in practice. For a concept statement is related to indefinite numbers and types of
possible object-statements. From a theory it may be possible to deduce experimental laws
applying to phenomena quite different from the original data, as we saw with the kinetic
theory of gases. Toulmin points out that a concept (such as “gas molecule”) differs logically
from an observation (such as “gas volume”).46 As he puts it, theoretical physics is not a
classification of facts. The whole point of a theory is that it introduces new types of terms. A theory
is taken as an explanation of the phenomena precisely because it makes use of ideas of a
different logical level and has greater comprehensiveness and generality than the phenomena
themselves. Like earlier forms of empiricism, positivism fails to represent the crucial role of
concepts and theories in the history of science.
44
Percy Bridgman, The Logic of Modern Physics (New York: The Macmillan Co., 1927; PB), p. 5.
45
See C. G. Hempel, “Problems and Changes in the Empiricist Criterion of Meaning.” Revue Internationale de
Philosophie, Vol. 4 (1950), reprinted in A. J. Ayer, ed., Logical Positivism (Glencoe, Ill.: Free Press, 1959).
2. Theories as Useful Tools (Instrumentalism)
Instrumentalists give a larger role than positivists to the activity of the knower in the
imaginative creation of conceptual schemes. The knower does more than record and
organize; he abstracts, idealises, constructs, and invents. Theories are spoken of as
regulative maxims, principles of procedure, or techniques to be employed for accomplishing
desired purposes in scientific investigation. They are fictions in the sense of being human
inventions for coordinating or generating observation-statements. Attention is directed to the
way a theory is used, its function as a means of inquiry. Theories are thus conceived as (a)
calculating devices for making accurate predictions, (b) organizing guides for directing
further experimentation, and (c) practical tools for achieving technical control. They are to
be judged by their usefulness in achieving these goals, not by their truth or falsity.
In this view, scientific concepts are functionally related to observations but need not
themselves be reducible to observations. The instrumentalist believes that the positivist
attempt to translate all concepts into a set of equivalent data-statements can never succeed
because the fruitfulness of a concept includes its future employment with phenomena at
present unknown; such a translation, even if it could be achieved, would hinder rather than
advance its value as a mental tool. It is pointed out that scientists employ “limiting concepts”
(such as frictionless planes) or concepts without direct rules of correspondence (such as
atomic wave-functions), which refer neither to observations nor to real objects in the world.
Similarly, the retention of models, which positivists usually condemn, is defended by
instrumentalists on pragmatic grounds.
46
Toulmin, The Philosophy of Science. (However, his more recent writings stress the contribution of theories to
“understanding” rather than “prediction.”)
In contrast to positivists, instrumentalists do not require that concepts should correspond
to observables, and they make no effort to eliminate theoretical terms; in contrast to realists,
however, they do not insist that there are real entities corresponding to concepts. Laws and
theories are invented, not discovered: “Do electrons exist?” is not a useful question to ask,
says Toulmin; in scientific language the term is not employed referentially. “It is a mistake
to put questions about the reality or existence of theoretical entities too much in the centre of
the picture. ...”47 A theory organizes the behaviour of nature by what Toulmin calls “as-it-
were” reference to hypothetical models, but does not attempt to describe nature “as it is.” In
similar fashion, Braithwaite proposes that the question of the existence of electrons should
be bypassed completely in favour of asking how the word “electron” occurs in the structures
of theories.
Instrumentalism, with its breadth, flexibility, and interest in the uses of language, is less
subject to criticism than was positivism for failing to describe what scientists actually do.
The difficulties here concern the status of theories. Contemporary exponents do not hold that
theories are completely arbitrary, nor do they concur with idealists that concepts originate in
the mind’s structure imposed on experience. But they seldom give a clear answer to the
question: why do some theories work, whereas others do not? Nagel criticizes
instrumentalism by suggesting that “a theory is an effective tool of inquiry, only if things and
events are actually so related that the conclusions the theory enables us to infer from given
experimental data are in good agreement with further matters of observed fact.” 48 The
usefulness of theories is dependent on objective features of the experimental situation and
not on personal whims. Nagel points out that most scientists see statements of theories as
premises which might be shown to be false, since when taken with initial conditions they
imply statements about observable facts which might be found to be false. To say that a
theory is “unsatisfactory” as a rule for inference-drawing, or as a leading principle for
further inquiry, amounts to saying that it is false. Scientists talk about evidence for or against
the validity of a theory, not just for or against its use. Finally, instrumentalism can offer no
objection to the adoption of two contradictory theories if both are useful; yet such a practice
is not followed by scientists, and new discoveries have arisen from attempts to resolve
conflicting ideas.
47
Ibid., p. 138.
48
Nagel, Structure of Science, p. 134 (italics added).
Idealism goes even further than instrumentalism in accentuating the contribution of the
knower; here the structures of theory are entirely imposed by the mind on the chaos of sense-
data. The philosophical idealism exemplified by Eddington, Jeans, and Milne finds few
supporters today, but modified neo-Kantianism is found in Cassirer and Margenau, and in a
somewhat different form among continental physicists such as von Weizsäcker.
Eddington uses vivid imagery to convey the determinative influence he assigns to man's
mind in all knowledge. He pictures us following footsteps in the sand, only to discover that
the tracks are our own:
The mind has by its selective power fitted the processes of Nature into a frame of law of
a pattern largely of its own choosing: and in the discovery of this system of law the mind
may be regarded as regaining from Nature that which the mind has put into Nature.49
Eddington has attempted to derive both the fundamental laws of physics and the
“constants of nature” from a priori considerations without utilizing any experimental results.
He holds that the characteristics which we think we find in nature are manufactured by
ourselves in the acts by which we observe and measure. By “subjective selection” we have
molded the world into a form we can understand:
The fundamental laws and constants of physics are wholly subjective… for we could
not have this kind of a priori knowledge of laws governing an objective universe. The
subjective laws are a consequence of the conceptual frame of thought into which our
observational knowledge is forced by our method of formulating it.50
Eddington's treatise is complex and difficult to follow. According to his critics, his
reasoning makes implicit use of many assumptions which came indirectly from experimental
findings, either as specific methods which have been found successful or as postulates of a
very general character (for example, in quantum mechanics and relativity). Whittaker states
that “in effect the epistemological principles were by no means independent of knowledge
derived from sense-perception”;51 qualitative results and the forms of empirical laws were
introduced into the system. Similar criticisms apply to Milne’s a priori approach using the
principle of communicability among observers as the starting point.52 As compared with the
49
50
51
E. Whittaker, From Euclid to Eddington (Cambridge: Cambridge University Press, 1949), p. 185.
52
E. A. Milne, Modern Cosmology and the Christian Idea of God (London: Oxford University Press, 1952).
actual practice of the scientific community, the views of Eddington and Milne neglect the
experimental side, just as positivism neglects the theoretical side.
Margenau also gives prominence to the activity of the mind in imposing a structure on
the uninterpreted data. His scheme gives greater recognition than Eddington’s to the role of
observations, but it concurs with the Kantian assertion that chaotic sense-data have no
knowable structure apart from the activity of the mind which organizes them by its
conceptual constructs. Like the instrumentalists, Margenau acknowledges the predominant
role of the subject in scientific knowledge; but instead of speaking of theoretical concepts as
useful fictions, he asserts that the construct is reality. Since constructs change as our
knowledge grows, this means that reality changes. He asserts that the neutron did not exist
and was not real prior to its “invention” in 1932. Margenau states his conclusion thus:
Science defines a dynamic kind of reality, one that grows and changes as our
understanding grows and changes... I am perfectly willing to admit that reality does
change as discovery proceeds. I can see nothing basically wrong with a real world which
undergoes modifications along with the flux of experience. … It is easy to succumb to
the temptation of distinguishing at the outset between the permanence of physical entities
and the permanence of theories about them, saying for example that the entities are not
affected by the vicissitudes of theories... Our indoctrination with principles of being and
our historic concern over immutables make us want to say that our knowledge of reality
changes when discoveries are made.53
Margenau would reject Eddington's rationalistic attempt to derive theories a priori from
the necessary structures of thought itself apart from experience; “circuits of empirical
verification” have an essential place in his presentation. But it is not clear why, on his view,
there should be agreement between empirical observations and some mental constructs but
not others. The realist answers that such concepts correspond more closely to the structures of
actual events in the world; our concepts may change but physical reality does not.
Against the positivist, the realist asserts that the real is not the observable. Against the
instrumentalist, he affirms that valid concepts are true as well as useful. Against the idealist,
he maintains that concepts represent the structure of events in the world. The patterns in the
53
Margenau, The Nature of Physical Reality, pp. 288, 295, 459. Used by permission of the McGraw-Hill Book
Company.
data are not imposed by us, but originate at least in part in objective relationships in nature. The
object, not the subject, makes the predominant contribution to knowledge. Hence science is
discovery and exploration, not just construction and invention. Atoms are as real as tables,
though their modes of behaviour are quite different. Among those who have supported some
form of realism—though with differing views as to what constitutes reality—have been
Planck, Einstein, Campbell, Werkmeister, process philosophers (following Whitehead),
naturalists (such as Nagel), and neo-Thomists.54
Realists insist that being is prior to knowing. Despite the fact that descriptions of the
world are in part our creation, the world is such as to bear description in some ways and not
in others. Thus neither the positivist’s restriction of attention to sense-data nor the idealist’s
identification of reality with changing mental constructs is considered satisfactory. Some
realists argue that the postulation of a world transcending both constructs and data is
necessary in order to explain the “convergence” of scientific findings. Others hold that an
awareness of encounter with nature is present in immediate experience.
The realist challenges the positivist doctrine that the real is the perceptible. He notes that
many scientific entities today, especially in the domain of the very small, are inevitably not
directly apprehensible. Nagel points out that it would be irrelevant even if we could perceive
molecules:
Nevertheless, molecular theory would still continue to formulate the traits of molecules
in relational terms—in terms of relations of molecules to other molecules and to other
things—not in terms of any of their qualities that might be directly apprehended through
our organs of sense. For the raison d’etre of molecular theory is not to supply
information about the sensory qualities of molecules but to enable us to understand (and
predict) the occurrence of events and the relations of their interdependence in terms of
pervasive structural patterns into which they enter.55
Nagel recommends that, for what it designates to be considered real, a concept (other
than a purely logical term) must enter at least one experimental law other than that by which
it is defined. With such a definition he can say that atoms and electrons are real. This
54
Some of the linguistic analysts have moved toward a realist position, e.g., J. J. C. Smart, Philosophic and
Scientific Realism (London: Routledge & Kegan Paul, 1963); P. K. Feyerabend, “Materialism and the Mind-Body-
Problem,” Review of Metaphysics, Vol. 17 (1963), 49; or “Attempt at a Realistic Interpretation of Experience,”
Proceedings of the Aristotelian Society, Vol. 58 (1958), 143.
55
Nagel, Structure of Science, p. 146. Used by permission of Harcourt, Brace & World, Inc., and Routledge &
Kegan Paul, Ltd.
criterion underscores the relational character of scientific terms; earlier we spoke of the
contextual testing of networks of interdependent ideas rather than separate concepts. To say
that “atoms exist” would then be equivalent to saying that there is satisfactory evidence for
the atomic theory. As Nagel puts it:
Since in testing a theory we test the totality of assumptions it makes, so the rejoinder
continues, if a theory is regarded as well established on the available evidence, all its
component assumptions must also be so regarded. ... In short, to assert that in this sense
atoms exist is to claim that available evidence is sufficient to establish the adequacy of
the theory as a leading principle for an extensive domain of inquiry. But as has already
been noted, this is in effect only verbally different from saying that the theory is so well
confirmed by the evidence that the theory can be tentatively accepted as true.56
For many realists, intelligibility rather than observability is the hallmark of the real. It is
precisely the organizing power of theoretical structures which shows that they correspond to
the structure of the world. Thus Campbell writes:
A molecule is real, and real in the same way, as the gases the laws of which it explains. It
is an idea essential to the intelligibility of the world, not to one mind, but to all... And if
anything is real that renders the world intelligible, then surely the ideas of theories—
molecules and extinct animals and all the rest—have just as much claim to reality as the
ideas of laws.57
Whitehead develops a realist epistemology, both in his discussion of perception and his
treatment of science. He rejects the starting point of positivism, Hume's thesis that knowledge
originates in a flux of fragmentary and disconnected sense-experiences; he is equally critical
of the starting point of idealism, Kant’s thesis that mental categories are imposed on chaotic
experience. For Whitehead, the raw material of experience already has a unity, apprehended
integrally by all our faculties; and this experience includes an awareness of our mutual
interaction with our environment. Only on analysis can we abstract “sense-data” from the
totalities we perceive. We experience coloured objects, not colours. We attend to reactions
and responses, not to isolated mental states. Our primitive awareness is of being in a world,
56
Ibid., pp. 142, 151.
57
Campbell, What is Science?, pp. 106, 108. Used by permission of Methuen & Co., Ltd., and Dover
Publications, Inc.
not of constructing one. Whitehead speaks of a consciousness of ourselves as arising out of
rapport, interconnection and participation in processes reaching beyond ourselves.58
Whitehead affirms “the ontological principle” that the world is to be understood only by
reference to existent beings in and for themselves. The basic constituents of the real world he
takes to be events united in processes rather than separate substances with qualities. Scientific
concepts represent only certain abstract aspects of this network of events influencing each
other; it is “misplaced concreteness” to mistake such abstractions for the total reality of
temporal process. Thus Whitehead’s realism gives prominence to the object rather than the
subject in knowledge, but the role of the subject is by no means omitted, since (a) reality
consists not of things but of events occurring in networks of relations which include both the
knower and the known, (b) knowledge arises not from either subject or object alone but from
a situation of mutual interaction, and (c) scientific language is symbolic, deriving from the
subject’s selective abstraction from the total situation.
We must now draw together some of the comments in the preceding sections, starting
with the foregoing debate concerning the status of theories. We note first that scientists
usually assume realism in their work. Astronomers, geologists, biologists, and chemists
almost always take theories to represent events in the world. Dinosaurs are held to be
creatures that actually roamed the earth, not useful fictions with which we organize the fossil
data. Presumably there is no change in status as one considers smaller entities; there is no
point at which one could draw any sharp line as one moved from amoeba to virus to molecule
to electron. A virus is assumed to be both “object-like” and real; an electron does not at all
resemble everyday objects, but this does not mean that it is any less real. Even the physicists,
who more than others have been forced to examine the status of their concepts, still speak of
the discovery (rather than the invention) of the electron. Although scientists are usually
philosophically unreflective, we must nevertheless take seriously the assumptions embodied
in the language of the scientific community. Most scientists understand themselves to be
dealing with the structure of events in the world and not with summaries of data, useful
fictions, or mental constructs. They see science as a path to understanding, not just a tool for
manipulation, prediction, and control. Moreover, their reluctance to adopt two useful but
contradictory theories, and their interest in unifying the concepts of the separate sciences,
58
A. N. Whitehead, Symbolism (Cambridge: Cambridge University Press, 1928; Capricorn PB), p. 65.
seem to presuppose not only the value of economy of thought but some reference to a world
under investigation.
At the same time we must recognize the difficulties in any naïve realism which
overlooks the role of man’s mind in the creation of theories. The creativity of human
imagination in the formation of theories was stressed earlier in the chapter. Theories are not
given to us ready-made by nature; there is no simple access to the world as it exists in itself
independently of being known, and mental constructs influence the interpretation of all
experience. These are factors that the instrumentalist rightly emphasizes (though we have
argued that he draws the wrong conclusion from them). A “critical realism” must
acknowledge both the creativity of man’s mind and the existence of patterns in events that
are not created by man's mind. It was suggested (Section II) that scientific language does not
provide a replica of nature but a symbolic system that is abstract and selective and deals with
limited aspects of the situation for particular purposes.
Critical realism acknowledges the indirectness of reference and the realistic intent of
language as used in the scientific community. It can point to both the extraordinarily abstract
character of theoretical physics and the necessity of experimental observation which
distinguishes it from pure mathematics. It recognizes that no theory is an exact description
of the world, and that the world is such as to bear interpretation in some ways and not in
others. It affirms the role of mental construction and imaginative activity in the formation of
theories, and it asserts that some constructs agree with observations better than others only
because events have an objective pattern.
The only tests of the adequacy of a concept or theory in representing the world are the
combination of empirical and rational criteria discussed in Section I. If the goal of science is
to understand nature, we can unify the concern for empirical testing found in positivism with
the concern for intellectual coherence found in idealism while avoiding the exclusive
preoccupation of either. The extraordinarily indirect relationship between theory and
experiment in modern physics perhaps seemed to encourage the two extremes. Positivists
were impressed by the unobservability of the entities designated by theoretical concepts, and
they ended by treating only the experimental side as real. But they failed to note that all data
are theory-laden, and that only networks of theory and experiment can be tested together.
Idealists, on the other hand, were impressed that theoretical physics is a self-consistent formal
mathematical system, and they asserted that the true nature of reality is mental. But they
failed to stress the empirical basis of modern science, which differentiates it from the “self-
evident principles” of medieval science and the “a priori thought-forms” of neo-Kantianism.
Overemphasis on either empirical or rational criteria distorts the character of scientific
activity.
The real is the intelligible, not the observable. As Nagel puts it, “the raison d’etre of
molecular theory is not to supply information about the sensory qualities of molecules.” The
particular type of intelligible pattern sought is indeed always related to empirical evidence,
but a valid concept need not designate something observable or even describable in everyday
language; neither visualizable models nor common sense categories are used in modern
physics. Hesse suggests that we need to broaden our view of the character of the real:
When the entities of physics refused to conform to the ordinary conditions for the
existence of physical objects—determinate position in space, continuing existence
through time, possession of the ordinary properties of matter, and so on—the natural
reaction was to deny “real existence” to physical entities, and to call them “merely
logical terms in conceptual formulas of calculation.” But if we abandon such a restricted
conception of existence (a conception which has been shown to be untenable, not only by
modern physics, but also by all the philosophical critics of naive realism, from Berkeley
and Hume onwards) we leave the way open for an interpretation of experience which
asserts the real existence of all patterns in nature which are expressed by scientific
concepts correctly used.59
The resulting theories are not guaranteed to be the final truth; any of them may in the
future be amended, modified, or in rare cases, overthrown in a major revolution. Yet
59
Hesse, Science and Human Imagination, p. 151. Used by permission of SCM Press.
scientific theories do have a reliability, and the scientific community does eventually achieve
a consensus, seldom found in other types of inquiry. Although some aspects of scientific
knowledge change, many aspects are preserved, contributing to an over-all cumulative
advance that differs from that of other disciplines. In the next chapter we will examine
science further and compare it with inquiry in other fields.