Foundations of Science
https://doi.org/10.1007/s10699-022-09864-y
Chance and Necessity: Hegel’s Epistemological Vision
J. Nescolarde-Selva¹ · J. L. Usó-Doménech¹ · H. Gash²
Accepted: 2 August 2022
© The Author(s) 2022
Abstract
In this paper the authors provide an epistemological view on the old controversy between randomness and necessity. It has been considered that either one or the other forms part of the structure of
reality. Chance and indeterminism are nothing but a disorderly efficiency of contingency
in the production of events, phenomena, processes, i.e., in its causality, in the broadest
sense of the word. Such production may be observed in natural and artificial processes or
in human social processes (in history, economics, society, politics, etc.). Here we touch the
object par excellence of all scientific research, whether natural or human. In this work a hypothesis is presented whose practical result satisfies the Hegelian dialectic, with the consequent implication of the mutual reciprocal integration of chance and necessity. Abstractions, without which there is no thought or knowledge of any kind, are produced from the concrete, that is, from the real problem, which in this case is a given Ontological System or Reality.
Keywords Chance · Compossibility · Determinism · Fortuitous interaction · Modality ·
Necessity · Possibility · Probability · Transfinite
1 Introduction
The "doctrine of necessity", later called determinism, goes back to Leibniz and Spinoza.
It denies chance and contingency, and states that necessity reigns absolute in the world.
But while observation and common experience irrefutably reveal randomness, determinism defends reducing chance to the impossibility of prediction, which is partially due to ignorance of the causation of the phenomenon. This idea of chance-as-ignorance, of chance as epistemological, that Spinoza describes, is relaunched by Laplace in his famous Essai philosophique sur les probabilités, which presents a pseudo-scientific mechanical determinism applied to the entire universe, reminiscent of an ancient fatalism, or rather of theological concepts presented by Cardinal Bossuet. Almost all scientists adopted Spinoza's idea until
about 1930. For example, Heisenberg’s uncertainty principle supposes a basic change in
the nature of physics, since it goes from absolutely precise knowledge (in theory but not in
practice), to knowledge based only on probabilities. Also, Bohr derived from the principles
* J. Nescolarde-Selva
josue.selva@ua.es
1 Department of Applied Mathematics, University of Alicante, Alicante, Spain
2 Institute of Education, Dublin City University, Dublin, Ireland
of the new mechanics, and from certain aspects inherent to quantum mechanics, ideas that suggest that ordered matter derives from a previous underlying state in which matter is in permanent disorder or ruled by chance. This thesis was historically already proposed by the pre-Socratic philosopher Democritus (the creator of the term atom): matter at this scale only follows the laws of probability, which would lead one to think that the entire universe is based on chance from its atomic or subatomic level. Against this determinism, Cournot, in the mid nineteenth century, argued that mainstream science depends on the objective reality of chance and its compliance with the principle of sufficient reason.
Chance is defined as the contingent encounter of various chains of causes, independent of
each other. He underlined the disproportion between the smallness of the causes and the greatness of the effects.
In the early twentieth century, Poincaré adopted these ideas, particularly this view of
reality, adding large numbers and the extreme complexity of the causal chains as objective features of these phenomena. The philosophical doctrine of neopositivism, namely the Copenhagen School and the Vienna Circle, exerts a clear antideterminist influence: the reality of chance is recognized, together with contingency, against its background of logical empiricism.
However, as difficult as it is for the deterministic doctrine to explain how the connection between chance and necessity arises in the world, an anti-deterministic view has equal difficulty reconciling pure chance with pure necessity. Moreover, Von Neumann totally denied necessity and affirmed the importance of chance in the Universe. For this completely indeterministic school, with Popper one of its main figures, necessity is based on random and fluctuating statistical regularities. What is serious in this theory is not just this last assertion, but that chance is considered an emergence ex nihilo, an absolute and "free" creation of nature, with an absurd confusion between chance and freedom. Deterministic
people reject pure chance. For Einstein it is a "reprehensible absurdity" and for Langevin it
is "intellectual shamelessness". However, the neo-positivist doctrine has a degree of truth
which is that randomness (and its opposite, necessity) is an undeniable fact of observation,
and thoughtful experience, and as such it is integrated into reality. What opposes chance to necessity is the absence of order, rule and law, and consequently the impossibility of prediction, whereas certain expectation comes from objective necessity, which is demonstrated by order, rule and law. However, such assumptions cannot be raised within determinism without the danger of determinism denying itself.
Chance undoubtedly forms the background of evolution, in the same manner in which
selection constitutes the filter, which rejects candidates that do not fit or are not viable in
the circumstances. Consider Mendel’s Laws of Assortment and Segregation-chance plays
a major part in the resulting composition of gametes as the process of their production in
genetic terms, less so in cellular terms perhaps, is chance. Darwin in his book The Origin
of Species, published in 1859, explains the mechanism of evolution -which he calls Natural
Selection- based on a relationship between all biological forms. There would be a source
of biological diversity that would produce genetic changes, later these new forms would be
selected by nature through the process of natural selection. Those changes due to chance
would be regulated by the viability of each living project as it tries to perpetuate itself in
the environment. For Darwin, the variations would be slight and continuous; there would
not be great changes from one form to another and there would always be links between
the different species. This biological mechanism would explain, according to Darwin, the
appearance of man. In nature, the strongest species would be those that survived, while
the species that showed a certain degree of weakness would be eliminated as less viable.
Jacques Monod, Nobel Prize winner and author of Chance and Necessity (1971), maintains that the organism is subject to selection not only from the outside, but also from the
inside. The organism internally selects whether a mutation is admissible or not. Gabriel
Dover goes further and argues that when the organism internally makes the decision of a
mutation, it also assumes other decisions that allow the viability of the first one. So if the
giraffe decides to lengthen its neck it is compensated by a stronger heart needed to pump
blood up to the head. The problem that arises is that the vast majority of artificially caused
mutations in organisms are harmful to themselves, according to these two authors, so a
control would appear regulating these sudden genetic changes in the organism to alleviate
the potential damage.
Thus, chance and necessity jointly structure the development of our world. However,
the views of scientists and philosophers are divided on the question of not only the course
of evolution but the deepest structure of reality. These are two different and antagonistic
conceptions that have filled whole libraries of controversy. Roughly, and with reference to the evolution of complex systems, we present the arguments:
(1) Defense of chance We have sufficient reason to believe that all living beings on Earth
have evolved from a single source: universality of the genetic code and isomerism
of biological molecules. We must be aware that chance is a fundamental factor in
the evolutionary process. Since we are not able to see a purpose or a specific need in
the random ramifications of the genetic origins of species, we can only conclude that
the emergence of life on Earth and its subsequent evolution into man is the result of
a lucky combination of random factors. Therefore, it is impossible that our Universe will recur through the same matching of many random events. So far, it is unjustifiable to attribute a
special place in Nature to our world. Neither our society nor our planet is the center of
the universe. The idea that man occupies a central and exclusive position is the result
of elitist speculation and an attitude reminiscent of the imperial past of the European
of the nineteenth century. The ancient Greeks already held the idea that man occupies a central and exclusive position, which is why the Earth was placed at the center of the universe, a self-reference extending from the self to the environment and to the universe. Following that line of thinking, man was thought to be the only thinking being
in the universe. In cosmology in recent times, the anthropic principle establishes that
any valid theory about the universe must be consistent with the existence of humans.
In other words: "If certain conditions must be verified in the Universe for our existence,
these conditions are verified since we exist" (Barrow & Tipler, 1986). On the contrary,
we refuse to see in nature more than blind chance and evolutionary processes and we
strongly reject any combination of scientific thought with religious ideas.
(2) Defense of determinism There are aspects of development which have been classified as inevitable processes derived from the internal dynamics of systems, and although
there are still some questions, there is no reason to enrich science with "miracles". That
would not be proceeding with scientific rigor. That all creatures on the planet have
certain common characteristics only goes to show that at some point a higher organism emerged from which all others then derived. Evolution is clearly directed toward
a purpose, and most likely it is the result of a continuing need for viability, which imposes a direction on evolutionary selection. From all this we infer that there are other planets
in the universe in which complex systems have been formed and intelligence is an
inevitable consequence. Living beings need a Universe and affect their own evolution.
If we look at the literature on probability, reflection on the nature of chance is rare, or is treated with superficial and often incoherent considerations. For many authors, chance is a
category of thought, and not a category of Reality. The method used by most works on Statistics or Probability Theory is to present a number of theorems and applications. We find a clear rejection of thinking about the problem itself, very typical of a modern science that worries only about applications. In Landau and Lifshitz (2011), for
example, microphysical chance or "molecular disorder" that belongs to the essence of all
thermodynamic processes does not appear, even as a synonym. In a body composed of
an enormous number of corpuscles, if it were possible to write the general solution of
the mechanical equations, it would still remain essentially impossible to know the coordinates and initial velocities of the corpuscles. In such a body, qualitatively new laws appear, called statistical laws, which arise precisely from the presence of a large number of
particles and cannot be reduced to purely mechanical laws. The authors make an eclectic mix of the presence of many corpuscles, which is a reality (but one which, without autonomy and therefore contingency, explains nothing), with the practical impossibility of predicting the movement of the entire body, which is an epistemological fact. On the contrary, some authors, such as
Parzen (1960), devote an entire chapter to "discuss the nature of probability", presenting theory and a study of mathematical models of chance phenomena. He defines this kind of phenomenon by stating that, under a given set of empirical conditions reproduced identically, the result observed is not always the same; rather the results differ, while nevertheless presenting a statistical regularity.
Here, chance presents itself as objective and, moreover, as opposed to necessity. Parzen
provides the example of an automobile accident, paying attention to the many conditions
and the fact that even a slight change of any of these conditions can greatly alter the nature
of the accident or even prevent it completely.
It must be emphasized that all authors considered here have based their reasoning on
the "deliverables" of experience marked by chance, without defining the notion of possibility and its relation to contingency and necessity. Starting, for example, with the work of Jacques Bernoulli in the eighteenth century on probability, science after more than two centuries has been incapable of dealing properly with the problem of chance and necessity, and has ended in an indeterminism. Is there more truth in Aristotle's Physics than in all the publications of the past two hundred years? Yet this is the primary issue of Science, from both a purely theoretical point of view (the principle of reason) and a practical one.
These are the two alternatives. The defenders of chance say that advocates of determinism are merely "disguised believers", unable to distinguish between "knowledge" and
religious hopes; the sense given to the Universe is a projection of subjective sense, since
the universe itself is unrelated to any sense, if not meaningless. The point that determinists
are "believers", having replaced God by chance and thus become "crypto-theists", is an important one, as many scientists do not reflect on their own practice and the role of belief
in their work. Such an attitude is due to the subjective need to bridge man and nature, and
this must have the same roots as any other religious hope. While determinists are not completely misguided, they overlook two questions:
(1) Religion did not discover the basic laws of reality.
(2) Supporters of both chance and determinism are "believers". The difference is their
belief system.
Belief in evolution without purpose is, to some extent, tragicomic. The quest to overcome an archaic and elitist notion that man is the crowning goal and plan of Creation is replaced
Fig. 1 The quadrupole of modality: necessity (n), impossibility (i), contingency (c), possibility (p)
by a new elitist attitude: to believe in human uniqueness in a cold, silent and meaningless
universe. Moreover, that again makes us something special and the culmination of evolution. The difference is that we have replaced God by chance. Not much progress has been
made! With this conception, we continue to accept the "miracle"; all we have done is lose the sense of our existence in a vicious circle of notions and ideas. Neither of these theories
is demonstrable. The most we can do in this paper is to reflect from the viewpoint of the
logical-mathematical concepts of chance and necessity, and of course, probability.
2 Modality
If we ignore existence, the other four categories of Aristotelian logic are: necessity, contingency, possibility and impossibility. If we examine dictionaries of any natural language, these terms are not defined separately; rather, synonymous words or expressions are used. Hegel's (2010) discussion of the modalities is one of the most abstruse passages of his work. For the German philosopher (as for Aristotle) these ontological categories of reality are so intimately linked logically that each one leads to the other three.
Modality is a unique quadrupole concept, which can be outlined as follows (Fig. 1).
Let ⇕ be mutual exclusion and ⇔ be mutual implication. The two notions of each column (n and c, i and p) form a modal opposition, i.e., excluding each other in extension and engaging each other in comprehension (Nescolarde-Selva & Usó-Doménech, 2012; Usó-Domènech & Nescolarde-Selva, 2012; Usó-Doménech et al., 2022):

n ⇕ c, n ⇔ c;  i ⇕ p, i ⇔ p   (1)
The quadrupole comprises therefore two dipoles. The two notions of a line (n and i, c and
p) do not constitute an opposition. The first line (n and i) belongs to the sphere of necessity.
If A is an event, a fact of the phenomenon, an object property, we have in classical logic:
Ai ⇔ ¬An , An ⇔ ¬Ai   (2)
that is, the impossibility of A ("it rained last night") is equivalent to the necessity of non-A ("it did not rain last night"). Impossibility manifests the necessity of the negation, and it is through this negativity alone that impossibility is opposed to necessity.
With respect to the second line, (c and p) belongs to the sphere of possibility, given that contingency involves pluripossibility, and therefore the compossibility¹ of A and non-A:

Ac ⇔ (Ap ∧ ¬Ap)   (3)
For example, if we roll a die, it is as possible that 6 comes out as it does not come out: a 6
coming out is contingent.
Conversely, the possibility of A either goes together with the possibility of non-A, and then A is contingent; or it does not, namely non-A is impossible, and then A is necessary, since ¬Ai ⇔ An. In the end, and always in classical logic:

p ⇔ (n ⇕ c)   (4)
Let’s look at the first diagonal (n and p). We have just established that the necessity of
A excludes non-A, and is equivalent to unipossibility of A, therefore classical possibility
involves:
n ⇒ p   (5)
But it is a univalent possibility, as against the pluripossibility of contingency. Conversely, possibility only weakly precludes necessity, due to its partial consubstantiality with contingency. Furthermore, possibility is also weakly opposed to contingency, for it is partially identical with necessity.
Regarding the second diagonal, (i and c) contains a strong modal opposition: impossibility, as negative necessity is totally contrary to contingency. Finally, impossibility is
strongly opposed to the other three poles of the concept: an impossible thing is expelled
from reality, while the other three poles remain within it. This last ontological opposition does not fit classical logic. If we return to contingency, it is necessary to distinguish radically the conjunction of the possibilities (Ap ∧ ¬Ap) and the possibility of the conjunction (A ∧ ¬A)p.
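The quadrupole can be sketched in a toy finite possible-worlds model (our own illustrative code, not the authors'; the function and variable names are hypothetical): an event A is identified with the set of outcomes in which it holds, so that necessity is unipossibility over all worlds, impossibility is equivalent to the necessity of non-A, and contingency is the compossibility of A and non-A, in agreement with relations (2) and (4).

```python
# Toy possible-worlds model of the modal quadrupole (illustrative sketch;
# names are ours, not from the paper). An event is the set of outcomes
# ("worlds") in which it holds.

def modality(event, worlds):
    """Classify an event relative to a finite outcome space."""
    holds = {w for w in worlds if w in event}
    if holds == set(worlds):
        return "necessary"      # A_n: unipossibility of A
    if not holds:
        return "impossible"     # A_i: equivalent to the necessity of non-A
    return "contingent"         # A_c: both A and non-A are possible

worlds = {1, 2, 3, 4, 5, 6}                   # one die roll
print(modality({1, 2, 3, 4, 5, 6}, worlds))   # "some face comes out": necessary
print(modality({7}, worlds))                  # "a 7 comes out": impossible
print(modality({6}, worlds))                  # "a 6 comes out": contingent
```

Note that an event is "possible" exactly when it is classified as necessary or contingent, which mirrors relation (4): p ⇔ (n ⇕ c).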
¹ Compossibility is a philosophical concept from Leibniz. According to Leibniz, a complete individual
thing (for example a person) is characterized by all its properties, and these determine its relations with
other individuals. The existence of one individual may contradict the existence of another. A possible world
is made up of individuals that are compossible, that is, individuals that can exist together. Leibniz indicates
that a world is a set of compossible things, however, a world is a kind of collection of things that God could
bring into existence. For not even God can bring into existence a world in which there is some contradiction among its members or their properties. When Leibniz speaks of a possible world, he means a set of
compossible, finite things that God could have brought into existence, if he were not constrained by the
goodness that is part of his nature. The actual world, on the other hand, is simply that set of finite things
that is instantiated by God, because He is the greatest in goodness, reality and perfection. Naturally, the fact
that we are here experiencing this world–the actual world–means that there is at least one possible world.
Leibniz thinks that there are an infinite number of possible worlds (Brown, 1987). Linked to this idea is the infamous 'omnipotence paradox'. In our universe there cannot exist unlimited, unbridled
omnipotence. All omnipotence is constrained within logical consistency. Atoms cannot acquire neutrons
and ’become’ the original element, for example. God cannot act outside of his ’nature’ as first raised in the
sixth century CE by Dionysios (the pseudo-Areopagite). It may be said that "The Laws of Logic are more
difficult to break than the Laws of Physics". The idea that God can only act in the realm of His nature is not
merely a fudge by theists to answer the omnipotence paradox. This criticism applies equally to any entity
claiming omnipotence e.g., an alien with abilities which appear to transcend our own including the ability
to manipulate time and matter.
For example, in roulette the outcomes 3 and 7 are compossible, while 3 and 7 at once is impossible. However, this indicates the discrepancy between objective contingency and subjective foresight: if we denote by A the judgment "7 will occur", which is a forecast, or a bet, the conjunction A ∧ ¬A of two contradictory forecasts is impossible and meaningless. Indeed, a player may well bet at once on 3 and 7. We agree to attribute as the truth value of a bet A the probability p of the predicted event A, a convention that is entirely plausible.
The contradictory proposition A ∧ ¬A² (Nescolarde-Selva, Usó-Doménech and Alonso-Stenberg, 2015; Usó-Doménech et al., 2014; Usó-Doménech et al., 2015; Usó-Doménech, Nescolarde-Selva, Gash and Sabán, 2019; Usó-Doménech et al., 2022) then will have the truth value p(1 − p): it will not be false. Therefore, provisional reasoning based on probability belongs to paraconsistent logic, even when the objective possibilities conform to classical logic.
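Under this convention, the truth value of the contradictory bet can be computed directly (a minimal sketch of our own; the helper names are hypothetical and not from the paper): v(A) = p, v(¬A) = 1 − p, and the conjunction of the two forecasts receives the product p(1 − p), which is strictly positive whenever 0 < p < 1.

```python
# Probabilistic truth values for bets (our own illustration of the
# convention described in the text; names are hypothetical).

def v_not(p):
    """Truth value of the negated bet."""
    return 1.0 - p

def v_and(p, q):
    """Truth value of a conjunction of two forecasts."""
    return p * q

p = 1 / 37                            # European roulette bet: "7 will occur"
v_contradiction = v_and(p, v_not(p))  # v(A ∧ ¬A) = p(1 - p)
print(v_contradiction)                # small but strictly positive: not false
```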
In the sphere of necessity, the necessity of A no longer leads to the impossibility of non-A, because if it did, A would be real, non-A would be unreal, A ∧ ¬A would also be unreal, and the contradictory proposition expressing this conjunction would be false. The only solution is that A, ¬A and A ∧ ¬A are all necessary. Analogously, in the sphere of possibility, A, ¬A and A ∧ ¬A must be compossible.
As a category, because it is a general universal (Hegel, 2010), necessity implies contingency, both because it forms a dipole with it and because necessity is synonymous with non-contingency and vice versa.
Consider now the specific, individual and concrete necessities that the various sciences establish in the form of relationships, rules, laws, principles, etc. At least one general rule always comes into play; these necessities always relate to thinking at the conceptual level, never to the level of immediate experience. John Stuart Mill and other more consistent positivists have attempted to reduce causality to a pure empirical regularity of succession, but they have not admitted that the unique sequences manifested imply necessity.
Only rational knowledge can grasp necessity and logical categories, and Aristotle's adage that "science is only about that which is general" finds here a perfect verification. However, in most cases the determined necessity operates and rules over the extension of a general rule. Consider a counterexample: in the famous syllogism Socrates ⇒ man ⇒ mortal, we distinguish the first implication. Socrates, an individual, has no trace of generality, nor of any extension. As for the general "man", it is in its comprehension that necessity operates; the necessity is foreign to the various individuals covered by the general "man", to the qualitative range of its extension.
Instead, consider the implication fish ⇒ vertebrate. The consequent performs the same role performed by "man" previously, but the antecedent is a particular, not an individual: there is still a general rule, and this time the necessary implication operates on its specific extension. We are trying to say that in reality, especially considered experimentally, any species of fish, and any individual fish, fulfills this necessity. Put another way,
the implication contains in itself a field of possibilities (hence contingencies) consisting of
the extensive diversity of its antecedent. The unipossibility, which belongs to necessity,
² The Ionian contradiction is the compound proposition P ∧ ¬P, that is, the logical conjunction "P and not P". The Coincidentia oppositorum proposition, designated k(ℵ), is an Ionian contradiction P ∧ ¬P whose truth value is always 1, i.e., v(P ∧ ¬P) = 1. A Coincidentia oppositorum proposition is formed by two poles: the pole of assertion P and the negative pole ¬P. The polar propositions are the proposition P and its negation ¬P, constituents of a Coincidentia oppositorum proposition k(ℵ).
concerns only the consequent, that is, the character of vertebrate, and, in the antecedent, only the general stricto sensu of fish rather than its comprehension. Thus, within the necessary implication, the fish species is contingent, and so is the individual fish.
In Mathematics, examples abound: conic curve ⇒ quadratic equation. The consequent
is unipossible. The antecedent, on the contrary, contains the compossibility of the circle,
ellipse, parabola and hyperbola. Regarding the necessary quadratic equation, the particular
species of conic are contingent. It is important to mention the functions, which belong by definition to necessity. Within the law f(x) = y, unipossibility concerns precisely the function f; the compossibility field is the set to which x must belong, i.e., the quantitative (and not essentially qualitative, as in the non-mathematical cases) diversity of extension of this general x. Within the law f(x) = y, x has numerical precision, and its quantum is therefore contingent.
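The conic example can be sketched computationally (an illustration of ours, assuming the general quadratic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 and ignoring degenerate cases): the quadratic form is the unipossible consequent, while the discriminant B^2 - 4AC selects which compossible species the curve is.

```python
# Illustrative classifier for the compossibility field of conics
# (our own sketch; degenerate conics such as point or line pairs are
# deliberately ignored).

def conic_species(A, B, C):
    """Species of the conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0."""
    disc = B * B - 4 * A * C
    if disc < 0:
        return "circle" if (A == C and B == 0) else "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

print(conic_species(1, 0, 1))   # circle:    x^2 + y^2 = r^2
print(conic_species(2, 0, 1))   # ellipse
print(conic_species(0, 0, 1))   # parabola:  y^2 = 4ax
print(conic_species(1, 0, -1))  # hyperbola
```

Regarding the necessary quadratic form, the particular species is the contingent element; the discriminant simply traces the border inside the compossibility field.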
Notice that.
Hypothesis 1 The dominion of compossibility and its role in contingency determines
precisely limited and determined necessity, tracing the border that separates it from
impossibility.
This is not at all a matter of arbitrary possibilities, but of a closely delimited contingency surrounded by a necessity which, in turn, limits it. In this manner, a quadratic equation eliminates any strange, non-conic curve as impossible.
Conversely.
Hypothesis 2 Any specific contingency involves certain necessities, which strictly determine their field of compossibility.
3 Chance, Contingency and Autonomy
Hypothesis 3 Chance comes, in principle, as the efficient intervention of contingency in
production.
In turn, necessity, within the process of production or of causality, consists of causes and conditions. The first (the causes) are essential, the second (the conditions) are not. In the physical world, as Leibniz says (1991), causes are specifically subject to quantitative equivalence with effects, which is not the case with conditions. Thus, the tap's movement is not the cause of the water spill (the cause consists of the water's fluidity and weight), but only a necessary condition (sine qua non) for the fluid to spill. And it is important to note the vast disparity that can exist between the energy of the maneuver and the
energy of the liquid stream. This distinction between causes themselves on the one hand
and conditions on the other, is convenient to notice, since it is precisely the conditions that open the way to chance. In general, when causes and conditions are integrated, "things come into reality" (Hegel, 2010). However, when only one condition is lacking, the thing remains within the sphere of possibility; and the more abundant the field of compossibility for that condition, and the smaller the minimum energy required for it, the more widely open is the road of chance. However, a chance event is still an event, and has
an energy requirement which still has to be met to take place. Thus this is a measurable
parameter. In an infinite sequence of actions, every possible outcome will happen, but in
an infinite sequence there is no end, so it is unclear how such an outcome could ever be experienced. This is the infinite monkey theorem, stating that a monkey pressing random keys on a keyboard for an infinite period of time will eventually type any given text. In the English-speaking world, Shakespeare's Hamlet is often used as an example, while in the Spanish-speaking world, Cervantes' Quixote is used. In this context, "almost certainly" is a mathematical term with a precise meaning, and the "monkey" is not actually a monkey, but rather a metaphor for the creation of a random infinite sequence of letters. But this ignores the physical and biological limitations of the monkeys and the resources required to keep an infinite number of typewriters in operation; in terms of energy and matter, it is all-consuming.
The field of contingency belonging to a condition is quantitative, and random processes
transform it into a quantitative range of possibilities. This is the case for random mechanical games such as heads or tails, craps, roulette, etc. The first two exploit an unstable equilibrium: that of a homogeneous, thin (theoretically infinitely thin) disc (a coin) spinning above a plane and, in the case of the die, that of a cube leaning on an edge or a vertex. These equilibria are impossible to maintain, and practically of zero duration. The minimum quantitative modification (in theory, infinitesimal) of the position of the disc or cube from such a balance leads to a fall on one side or the other: the change is amplified in this way. It follows that a very small variation affecting the release of the disc or cube,
accentuated by the impact and friction against the plane, determines a qualitative difference
on the final effect. The equilibrium of the roulette wheel is indifferent, and friction must be very weak, so that a tiny change in the momentum given to make the wheel spin before it stops will result in a few numbers more or less. Here too there is an amplification and a conversion of quantity into quality.
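This amplification can be illustrated with a toy doubling map (our own sketch, not the authors' model): a difference of 1e-9 radians in the release angle is doubled at each bounce, as impacts and friction magnify it, until the two trajectories predict opposite faces of the coin.

```python
# Toy amplification of a tiny perturbation ("small causes, big effects");
# an illustrative model of ours, not a physical simulation.
import math

def face(theta):
    """Which side a spinning disc lands on, as a function of angle."""
    return "heads" if math.sin(theta) >= 0 else "tails"

def steps_to_diverge(theta, eps, limit=60):
    """Number of doublings until two releases differing by eps disagree."""
    a, b = theta, theta + eps
    for step in range(limit):
        if face(a) != face(b):
            return step
        a = (2 * a) % (2 * math.pi)
        b = (2 * b) % (2 * math.pi)
    return None

print(steps_to_diverge(1.0, 1e-9))  # a few dozen doublings suffice
```

The perturbation roughly doubles at every step, so a quantitative difference of a billionth of a radian becomes, within a few dozen iterations, the qualitative difference between heads and tails.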
In these previous examples we find the "small causes and big effects" remarked upon by many scientific authors. However, for them the amplification does not belong to the field of chance; on the contrary, it is integrated into the necessity of the process, as seen in cases of unstable or indifferent equilibrium. Nor does amplification operate in all stochastic processes. The truth is that without the contingency of the "small causes" there is no chance. The question is, how is this contingency manifested? It is manifested in the absence, within the "big effects", of any order or law of succession. It is worth pausing at this point.
Suppose we roll a die, say, 100 times. The hundred consecutive numbers drawn make a certain sequence. Combinatorics states that there are 6^100, i.e., about 6.5 × 10^77, different possible sequences. It is an extraordinarily large number. Now we perform 1000 such series of 100 throws. In each series we get a sequence, necessarily belonging to these 6^100. What is the probability of obtaining two identical sequences in a thousand series? The problem is the same as the probability, after two dice rolls for example, of having drawn the same number twice, except that the number of possible cases here is incomparably greater: 6^100 for the sequence, 6 for the number.
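This is a birthday problem, and its order of magnitude can be checked with a standard approximation (our own back-of-envelope, assuming each series is independent and all 6^100 sequences equally likely; the text's own figure is derived differently): with N = 6^100 sequences and k = 1000 series, P(at least one duplicate) ≈ k(k − 1)/2N.

```python
# Birthday-problem estimate for duplicate die sequences (our own sketch;
# exact integers are used for N because 6**100 overflows naive floats).
import math

N = 6 ** 100                   # possible 100-roll sequences, about 6.5e77
k = 1000                       # independent series of 100 rolls

p_dup = k * (k - 1) / (2 * N)  # standard birthday approximation
print("P(at least one duplicate) ~ 10^%.1f" % math.log10(p_dup))
```

Whatever the exact figure, the estimate confirms the practical conclusion of the text: the duplication probability is so minute that non-duplication is, for all practical purposes, certain.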
To understand the problem of chance, it is important to realize the enormity of the number of accidents or eventualities among the combined possibilities. Without reflecting on the quantitative character of contingency, we cannot conceive chance clearly and precisely. It is necessary to make two postulates:
1. Postulate 1 Physical compossibility coincides with mathematical compossibility.
2. Postulate 2 Autonomy is a fundamental characteristic of chance. There is no hysteresis.
Each phenomenon must be considered on its own as though it occurs for the first time.
Combinatorial algebra gives the following answer: there is only one chance in 2 × 10^79914, after 1000 series of one hundred die rolls, of finding the same sequence of 100 numbers twice. Or again: against one chance of duplication of a sequence, there are 2 × 10^79914 chances of non-duplication. So large a number means a practical, rather than a theoretical, impossibility of duplication, however much of the rest of our lives we spend rolling the die 100 times. As for the logical impossibility of duplication, we see it as follows: it emerges as a limit, as an actual infinite, when we indefinitely increase the number of series of 100 rolls. Contrary to what one would think at first glance, when the number of series grows (10,000, 100,000, etc., instead of 1000), the ratio of the chances of non-duplication to the chances of duplication becomes much larger, so that the practical impossibility of duplication is reinforced more and more. When the number of series increases indefinitely, the possibility of duplication is lost amid an infinity of contrary possibilities, which means, as a result of probability theory, a rigorous impossibility of such duplication, precisely because the abbreviated concept of the potential infinite identifies with it a number as great as 2 × 10^79914. That is, 1000 series of 100 die rolls is broad enough to produce a practical impossibility as well as a theoretical, i.e., physical, impossibility, constituting the actual infinite (Usó-Doménech, Nescolarde-Selva and Belmonte-Requena, 2016), where non-duplication passes from the possible to the necessary, and duplication from the possible to the impossible. The conclusion is therefore that in each series of 100 rolls, the hundred consecutive numbers of the die will form a sequence never made before and never to be repeated later. And given that there is autonomy, each series' results are physically independent of the others; consequently the sequences do not obey any rule of succession and, a fortiori, neither do the numbers that compose them.
However, in this reconstruction of the phenomenon of "die rolls", experience verifies the impossibility of predicting the number, or the sequence of 100 (or n) numbers. The impossibility of forecasting is not chance itself; it is a necessary epistemological consequence of chance, and an existential proof of it (and, secondarily, an application to the game). The lack of order of succession, while denying necessity, affirms contingency; indeed, this disorder has been derived from the qualitative and quantitative analysis of compossibility on the basis of the requirement of independence.
Hypothesis 4 Disorder is a manifestation and an experimental test of the two fundamental aspects of all random phenomena: contingency and autonomy.
Now back to "small causes". Through their effects they manifest, at the least, their contingency and their own autonomy. First, there appears the inability to repeat a die-roll sequence identically; second, there appear the contingency and the autonomy of the faint inequalities in the roll of the die, which are devoid of any regularity of succession. The throw is the cause of the movement of the die, not a condition of it, but it is a cause external to the dice-table system, and its small quantitative variations, of physiological human origin, are the external conditions.
Hypothesis 5 However precise it may be, the "same" measurement repeated many times always yields slightly different results, without regular succession.
This is the case of metrological chance, to which Gauss first applied the theory of probability. This, on the other hand, also indicates the contingency of chance, and the lack of order in the spatial distribution. Take the example of the impacts of alpha corpuscles emitted from a fraction of a milligram of radioactive salt located a few centimeters away from a square photographic plate of 1 square centimeter area, divided into a hundred compartments of one square millimeter each. Suppose that the corpuscles arrive in series of 1000 and that after each series the plate is changed. If consecutive photographs are examined, one notes the absence of any regularity in the distribution of the 1000 particles among the 100 compartments (there are 10^2000 possible arrangements). We see disorder indicating the contingency of the direction of movement of the alpha particles at the time they are emitted, and the autonomy of the successive events.
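The combinatorics of the plate can be sketched with a toy simulation (our own illustration; the random seed is arbitrary):

```python
import random

# 1000 particles over 100 compartments: 100^1000 = 10^2000 arrangements.
# (10^2000 written out has 2001 decimal digits.)
assert len(str(100 ** 1000)) == 2001

# One simulated "photographic plate":
random.seed(0)
plate = [0] * 100
for _ in range(1000):
    plate[random.randrange(100)] += 1

# 1000 particles in all; occupancy fluctuates irregularly around the mean of 10.
print(sum(plate), min(plate), max(plate))
```

Successive simulated plates show no regularity of distribution, in line with the disorder the text describes.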
4 The Random Interaction
Cournot was the first author to describe this other aspect of chance: the fortuitous interaction (Cournot, Robinet and Bru, 1984). We enter the problem through an example. To boil water one needs a temperature T, determined by the ambient pressure p, and a continuous flow of heat energy. The nature of the necessary heat source is contingent. Suppose that this source is an electric cooker and that suddenly the current is interrupted, because a distant thunderstorm has broken a power line, or the (nuclear) power plant initiates a technical stop, or there is a strike in the sector. The cessation of boiling, which follows immediately, is produced by way of a contingency inherent in the law of boiling. In general, the law leaves the nature of the heat source contingent, but each individual production of the phenomenon has to realize this nature. Then the causes and conditions proper to the heating medium used, far from being foreign to the general necessity of boiling, become obligatory conditions of the general phenomenon. Therefore the distribution of electric current comes to be integrated into the external conditions (the environment) of the boiling water, because we use a network-dependent electric heater.
Hypothesis 6 The duality of necessity and chance is strongly linked with that of the general and the individual case.
Under another aspect, the supply disruption (due to a strike, for example) interrupting the boiling is an example of Cournot's idea: the interaction of two independent causal chains. The question is in what sense the two phenomena are independent and in what sense they are not, since they do come into correlation. Their respective necessities, the one of a physical, the other of a social order, are indeed mutually independent and foreign, and from this point of view Cournot's formulation seems quite fair. The interdependence is contingent. On the physical side, the contingency of the electric heating mode has been shown; we add that, even granting this mode, it is not necessary but only possible that the stopping of the current is due to a strike. For the social fact, it is not necessary, but only possible, that it is a strike which leads to the suspension of the current. Thus two processes, each independent in its own necessity, become interdependent as individuals through their compossibility fields, or contingency zones, inherent in their respective operation.
Hypothesis 7 We could represent each process as a nucleus of necessities amid a halo of contingencies, and random interaction as a kind of interference between the two halos, leaving the nuclei always independent.
Obviously, like every interaction, fortuitous interaction requires certain conditions of coexistence: a breakdown of electricity in Georgia (USA) yesterday cannot put an end to the boiling of water in Alicante (Spain) today.
Definition 1 Autonomy is the unity of independence and interdependence.
Strikes and boilings, considered in the particular, are autonomous; considered in the abstract, they are independent. Here we see again two criteria of the concept of chance: autonomous coexistence and contingency. We can think of events like the fall of a meteorite on a jungle, creating huge fires with corresponding effects of pollution, or the destruction of the Twin Towers by Islamic terrorism, creating an economic recession and leading to a historic upset of gigantic consequences, etc.
Fortuitous interaction plays an essential role in so-called statistical physics, that is, within thermodynamic processes. Consider a real gas. If molecules did not exchange momentum and energy, it would be impossible to explain the propagation of pressure and temperature (particularly the propagation of sound) through the gas. A first mode of exchange consists of a remote interaction (with respect to molecular dimensions), the Van der Waals electrostatic attraction. For various reasons, this attraction plays only a secondary role in the interactions, especially since at ordinary pressure, and even more when the gas is rarefied, the exchanges it determines are quantitatively negligible. The Van der Waals attraction proceeds from pure necessity. The second mode is the interaction proper to the collision; it is short-range, repulsive, and effected by molecules passing side by side. This interaction ensures, almost entirely, the propagation of pressure and temperature within the gas. The collision, however, is fortuitous; and indeed, if statistical thermodynamics continually uses probability, with remarkable experimental success, it is precisely because of this fortuitous interaction of intermolecular collisions.
Let us start with the case where there are only two gas molecules within the container. Outside their collision, their movements are largely autonomous, almost independent. Current physics implicitly recognizes molecular autonomy through the notion of "statistical independence." When the distribution of N gas molecules between the two halves of a container is calculated (absent any outside interaction such as gravity), the physical question reduces to a simple combinatorial calculation of the distribution of N objects between n compartments (n = 2), just as in the case of dice, and with the same degree of autonomy. But our two molecules are constrained to coexist within the same container, and the possibility of meeting is thus open: in itself, coexistence may already imply a (general, still undetermined) possibility of interaction. With respect to each molecule, the presence and movement of the other, and therefore the collision, are merely external and contingent. We are here in a situation equivalent to that of the boiling water and the strike. It should be made clear that the extremely small possibility of the encounter is revealed in this calculation: in a one-liter container, against one chance of an accidental collision there are about 10^21 contrary possibilities.
This short review is enough for us to reject the claim that the large number of corpuscles and the complexity of the mechanical equations per se justify the use of probability. The essentially fortuitous interaction of thermodynamic processes begins with two random gaseous molecules (or even with one, since there are collisions between molecule and container, and the matter of this container also consists of corpuscles); the process is further facilitated by thermal variation. Conversely, a system of pure necessity, with no trace of chance, may well involve the complexity of large numbers: think of the famous problem of n bodies in gravitational interaction, whose solution is yet to be found. It would be absurd to call on probability theory there, when the matter precisely excludes any contingency and any autonomy.
5 The Concepts of Chance and Random Process
The concept of chance is very complex, structured by major bipolar oppositions, i.e., necessity-contingency and independence-interdependence (the latter constituting autonomous coexistence), and by secondary bipolar oppositions: order-disorder, general-individual. The bipolar parameters of the concept of chance lend themselves to mathematical frameworks which can be used to judge whether an event is a chance event, e.g., weather and climate phenomena. Are events only ever extremes at the poles, or, like Schrödinger's cat, can they broach opposites? It is evident from Piaget's studies that probabilistic thinking, so important in today's world, is sadly lacking in the wider population. When bad things happen, many people tend to adopt inadequate reasoning based on poorly structured notions of chance, or to apportion blame to someone or something. There are of course education programs designed to facilitate probabilistic and scientific thinking, such as the Cognitive Acceleration through Science Education project of Philip Adey and Michael Shayer in the 1990s and 2000s. Yet in the general population causal analysis remains in short supply (Kahneman, 2012), as is evident in the rise of populism (Mudde and Rovira Kaltwasser, 2012). Events that arise in experience are due to interaction: the interaction can be between interdependent phenomena, and then it is necessary, general and orderly, or it can be by chance, and then it is contingent, individual and disorderly. Fortuitous interaction is the result of independence, even though every interaction is an expression of interdependence. Then:
Hypothesis 8 The external conditions from the environment may or may not be part of the interaction, but in every case there is an interaction of the process with its external environment.
Hypothesis 9 The concept of chance is necessary for autonomy.
This important point needs clarification. How could we deny autonomous coexistence, when macroscopic matter, microphysical matter and biotic matter all have the same structure? That structure can be described as insular. Looking at the sky, we see the galaxies, and within them the stars, islets of concentrated matter. If we look inside bodies we see quantities of microphysical corpuscles: quarks, elementary particles, atoms, molecules, etc. They are islets separated by a "vacuum" space, often enormous in scale. Within the same atom we have the same insular configuration, with far more "empty" than "full" space. As for living matter, species consist of individuals isolated not only in space but also in time, through death, and multicellular individuals consist of cells that still enjoy a significant degree of autonomy. Humanity itself consists of autonomous societies, especially nations. If there is usually interdependence between these "islands", roughly stated, there is also a great deal of independence. What is the impact on the Earth of a giant explosion occurring in an unknown star, within a galaxy invisible to the telescope?
In our galaxy, we know that the internal evolution of a star continues irrespective of other stars, including its neighbors. The autonomy of corpuscular movements within macroscopic bodies constitutes the essence of thermodynamic processes. Likewise, the disordered nature of the movements of electrons within an atom, as expressed in quantum mechanics, expresses the autonomy of those electrons, both mutually and relative to the atomic nuclei. Nobody rejects the autonomy of an individual animal or human being, or even of an "independent" nation. Within humankind, we can wonder what interdependence there is between recent contemporary events such as the Islamist attacks in Paris, the Venezuelan crisis, space stations and the fires in Australia.
Hypothesis 10 The independent physical, biological, etc. processes are produced by isolated autonomous structures, that is, this independence is only the manifestation of that
structure.
Consequence 1: Every law that refers to permanence and constancy presupposes an independence.
Thus the heating, solidification, vaporization, etc. of water does not alter the integrity of any H2O molecule, which reveals its independence of these particular changes. All physico-chemical transformations known before radioactivity, and many others, leave atomic nuclei intact: nuclei are independent of nearly all changes that affect their electronic structure. In other words, the chemical and the purely physical metamorphoses of water are mutually autonomous, as are nuclear reactions on one side and almost all other physical and chemical processes on the other. Many other examples can be given: the gas constant is independent of the gas's chemical nature; diamagnetism does not depend on temperature; etc.
The independence of physical processes is also asserted in Einstein's theory of relativity, through the structure of space and time. Indeed, if two asynchronous events are separated by a distance d and by a time interval t, then whenever d^2 − c^2t^2 is positive, any interaction between the two events is impossible. Only when it is not positive is interaction possible; and even then interdependence is possible, not necessary.
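The relativistic criterion can be sketched as a small predicate (a minimal illustration of the interval condition; the function name is our own):

```python
# If d^2 - c^2 t^2 > 0, the separation is spacelike and no interaction
# between the two events is possible; otherwise interaction is possible
# (though not thereby necessary).
C_LIGHT = 299_792_458.0  # speed of light in m/s

def interaction_possible(d_metres: float, t_seconds: float) -> bool:
    return d_metres ** 2 - (C_LIGHT * t_seconds) ** 2 <= 0

print(interaction_possible(0.0, 1.0))          # True: same place, one second apart
print(interaction_possible(2 * C_LIGHT, 1.0))  # False: two light-seconds apart in one second
```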
However, recognizing autonomous coexistence affirms the random.
Hypothesis 11 Autonomous processes form well-defined systems of determined necessities that are at least partly independent.
But then each determined necessity engenders a field of defined compossibility, that is, a determined contingency; and through these fields two individual independent processes, if they properly coexist, enter into interdependence according to contingency.
Hypothesis 12 Because autonomy belongs to the Universe itself, chance is objective and real; and because it rests on independence and contingency, it is perfectly rational, i.e., epistemological.
As autonomy is unperceived, scientific processes innately lead to structures in autonomous
coexistence. These structures assume the importance of cosmic necessity, in which chance
is revealed as necessary (Hegel, 2010).
Hypothesis 13 From the point of view of any determined necessity or particular necessity,
chance is contingent, but from the point of view of cosmic necessity, chance is necessary.
So chance contains, at the center of its concept, the very contradiction (coincidentia oppositorum) of being simultaneously contingent and non-contingent (Nescolarde-Selva, Usó-Doménech and Alonso-Stenberg, 2015; Usó-Doménech et al., 2014, 2015). We have also seen that it constitutes a caused interdependence implied by independence, and we will see that where there is disorder (Law of Large Numbers), this repeated disorder is an order.
We do not speak lightly of the concept of chance. Everything that is real is rational (Hegel, 2010): why, then, leave out the chance event? If it escapes us, it is because our thought does not cover the totality and operates according to Aristotelian logic, and is therefore unable to conceive the contradiction. A different logic is necessary, one that shows us the intimate connection between contingency and necessity, makes "pure contingency" appear, and shows how far the neopositivists' narrow hypotheses are from reality. It is because of autonomy that the network of relentless mechanistic necessities does not fill everything but leaves gaps in which random interactions unfold. Nevertheless, chance is displayed under the aegis of the limitation and control of necessity.
The concept of chance thus articulates several complementary notions, and removing or truncating any of them leads to error. Contingency without objective independence, objective independence without contingency, or, even more, disorder without contingency, does not constitute chance.
Let A be a characteristic of a fortuitous phenomenon or process φ. If A is necessary, the attributed probability is 1; if A is impossible, the probability is 0; if A is contingent, sharing the field with other coincidences, the probability is a fraction of unity. Then:
Hypothesis 14 The theory quantifies possibility as if it were a simple quality, and probability is nothing but the quantum of possibility.3
The field of compossibilities or set of coincidences C (sample space) of a fortuitous phenomenon φ is:

C = {A1, A2, ..., An}    (6)

pointing out that the realization of one coincidence excludes every other. Determined by the necessities of the process, C has probability 1.
Let pi be the probability of Ai, i = 1, 2, ..., n. It is the partition of the unipossibility of C between the various coincidences that translates into the usual statement of total probability:

p1 + p2 + ... + pn = 1    (7)

The pi form a set P, analogue of C, such that:

P = {p1, p2, ..., pn}    (8)
3
Note that 1 and 0 denote necessity, while all the other numbers of the interval [0,1] denote contingency. The mathematical opposition between 0 and 1 and the other numbers expresses a modal categorical opposition.
Hypothesis 15 Set C expresses the qualitative content of compossibility, and the set P its
quantitative content.
Turning now to the duplication of φ: it forms a random phenomenon φ^2. Its field of compossibility C2, determined in the case of maximum range by the rules of combinatorial algebra, is C2 = C × C, that is:

C2 = {A1A1, A1A2, ..., A1An; A2A1, A2A2, ..., A2An; ...; AnA1, AnA2, ..., AnAn}    (9)

In total there are n^2 coincidences, instead of n. What probability do we attribute to them? Their sum must also equal 1, because C2 is just as necessary as C, and its set P2 must be analogous to C2. The only way to meet these basic conditions is to make P2 = P × P, that is:

P2 = {p1p1, p1p2, ..., p1pn; p2p1, p2p2, ..., p2pn; ...; pnp1, pnp2, ..., pnpn}    (10)

Then the sum of the elements of P2 is (p1 + p2 + ... + pn)^2 = 1. From this we see the reason for multiplying the probabilities of a phenomenon φ two by two in order to obtain those of the duplication φ^2. Here the negative character of the "axiomatic method" is revealed; the Kolmogorov axioms are the most used for this purpose.4
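The product construction P2 = P × P can be verified numerically; the three-coincidence distribution below is our own hypothetical example:

```python
from itertools import product

# A hypothetical field of compossibility with three coincidences.
P = [0.5, 0.3, 0.2]                      # set P: probabilities sum to 1
P2 = [p * q for p, q in product(P, P)]   # set P2 = P x P for the duplication

print(len(P2))             # n^2 = 9 coincidences
print(round(sum(P2), 12))  # (p1 + p2 + p3)^2 = 1
```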
How are the probabilities, i.e., the set P, determined? How are they determined out of the necessities in C? In abiotic nature they are determined on the basis of principles, laws and physical theorems. If we play dice with perfectly cubic dice, then by the principle of symmetry5 the six faces must be equipossible, just as they must have the same area.
Each contingent coincidence Ai of a random process has a probability, or rather a degree of possibility, that is a necessary feature of the process. When the field of compossibility is quantitative, that is, when it is a random variable x, the only possible values are the elements xi of C:
4
Let Qi denote anything subject to weighting by a normalized linear scheme of weights that sum to unity in a set W. The Kolmogorov axioms state that (Feller, 1971):
1. For every Qi in W, there is a real number Q(Qi) (the Kolmogorov weight of Qi) such that 0 ≤ Q(Qi) ≤ 1.
2. Q(Qi) + Q(Qi^c) = 1, where Qi^c denotes the complement of Qi in W.
3. For mutually exclusive subsets Q1, Q2, ... in W, Q(Q1 ∪ Q2 ∪ Q3 ∪ ...) = Q(Q1) + Q(Q2) + Q(Q3) + ...
5
The Curie symmetry principle (Ismael, 1997) is the causality relation between the symmetry of the cause and that of the effect. The principle is composed of three parts:
(1) If certain causes yield known effects, the symmetry elements of the causes should be contained in the generated effects.
(2) If the known effects manifest a certain dissymmetry (absence of symmetry elements), this latter should be contained in the causes which have generated those effects.
(3) The converse of these two propositions is not true, at least in practice: i.e., the effects may have higher symmetry than the causes which generate them.
Chance and Necessity: Hegel’s Epistemological Vision
x ∈ C = {x1, x2, ..., xn}    (11)

The simple probabilistic average

xm = p1x1 + p2x2 + ... + pnxn    (12)

is also a necessary feature of random processes, as are all averages of this kind.
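A minimal numerical instance of Eq. (12), for a fair die (our own toy values):

```python
# Probabilistic average x_m = p1*x1 + ... + pn*xn for a fair die.
xs = [1, 2, 3, 4, 5, 6]
ps = [1 / 6] * 6
x_m = sum(p * x for p, x in zip(ps, xs))
print(x_m)  # ≈ 3.5
```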
We arrive at the main part of probability theory: the Law of Large Numbers.6 Reproduction in N copies of the same phenomenon φ (the sets C and P retain their identity) constitutes, like duplication, a new fortuitous phenomenon φ^N. New magnitudes then emerge, the first being the statistical frequency fi of the coincidence Ai: if Ai has been realized Ni times, fi = Ni/N.
Hypothesis 16 fi is a frequency of realization of existence of Ai, within the coexistence of
N copies of φ.
Consequence 2: While the probability pi is a degree of possibility, fi represents a degree
of existence and reality.
In addition, the other modal categorical opposition relates pi and fi.
Consequence 3: pi is a necessary feature within φ, while fi is a contingent coincidence, a random variable within φ^N.
There is the same opposition (the field of compossibility being quantitative) between the probabilistic average xm and the statistical average xS = f1x1 + f2x2 + ... + fnxn: the first rational and necessary, the second existential and random.
The theorems gathered under the Law of Large Numbers follow from what we have just said about random processes, and determine the limits of the random magnitudes as N increases indefinitely. The main theorem states that the limit of the frequency fi is none other than the probability pi, with the immediate result that the statistical average xS has as its limit the probabilistic average xm.
While the number N of copies of φ runs through a potential infinity, the statistical frequency of each coincidence Ai of φ tends, through disorder, its consecutive values presenting no law of succession, to an actual infinite: the degree of possibility pi of Ai.
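The convergence of frequency to probability can be illustrated by a small Monte Carlo sketch (our own illustration; seed and sample sizes are arbitrary):

```python
import random

# The frequency of the ace tends to its probability p = 1/6
# as the number of rolls N grows.
random.seed(1)

def ace_frequency(n_rolls: int) -> float:
    return sum(random.randrange(1, 7) == 1 for _ in range(n_rolls)) / n_rolls

for n in (100, 10_000, 1_000_000):
    print(n, round(ace_frequency(n), 4))
```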
6
The Law of Large Numbers (Ross, 2009) says that in repeated independent trials with the same probability p of success in each trial, the chance that the percentage of successes differs from the probability p
by more than a fixed positive amount, e > 0, converges to zero as the number of trials n goes to infinity, for
every positive e. Note the following:
1. The difference between the number of successes and the number of trials times the chance of success in
each trial (the expected number of successes) tends to grow as the number of trials increases. (In fact,
this difference tends to grow like the square-root of the number of trials.).
2. Although the chance of a large difference between the percentage of successes and the chance of success gets smaller and smaller as n grows, nothing prevents the difference from being large in some
sequences of trials. The assumption that this difference always tends to zero, as opposed to this difference having a large probability of being arbitrarily close to zero, is the difference between the Law
of Large Numbers, which is a mathematical theorem, and the Empirical Law of Averages, which is an
assumption about how the world works that lies at the base of the Frequency Theory of probability.
3. The distribution of the number of successes in n independent trials with probability p of success in each
trial is Binomial, with parameters n and p.
Hypothesis 17 Probability indicates a new character, the transfinite of frequency: through mathematical infinity, the degree of reality (accomplished or existing) is identified with the degree of possibility.
But the qualitative change that usually accompanies all transfinite transitions is more radical in this case (Usó-Doménech, Nescolarde-Selva, Belmonte-Requena and Segura-Abad, 2017):
Hypothesis 18 There is a double change of category: the return of existence to possibility and the return of contingency to necessity.
We have seen that the two basic sets, C and P, expressing the contingency of the random phenomenon φ, determine the necessity of this phenomenon. When the potential infinite of the reproduction of φ is developed, opened by the extension of the same concept (like all concepts when the individual is included), we see that necessity emerges from the mass of chances and imposes a limit on the contingency and disorder of the statistical magnitudes.
The literature on the use of probability theory presents the elements of the concept of probability in a scattered way (sets C and P, Law of Large Numbers), and does not present the concept itself structured by categorical opposition. Such literature contains only inarticulate scraps of thought, as if nothing mattered but the development of an applicable formalism. With Statistics, the probabilistic concepts are confronted with experience. Frequencies and statistical averages are immediately accessible to practical calculation; their probabilistic counterparts, at first glance, are not. Nevertheless, here also the reduction of the potential infinite plays a role: when the number N of copies of the random process becomes very large, differences such as |pi − fi| or |xm − xS| between the probabilistic magnitudes, transfinite and known rationally, and the statistical magnitudes, known empirically, become so small that no perception, calculation or measurement can detect them. It can be shown that the deviations from the equalities fi = pi or xS = xm have a magnitude of about 1/√N.
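The 1/√N scaling of the deviations can likewise be checked empirically (a rough sketch of our own; the repeat count is arbitrary):

```python
import math
import random
import statistics

# Typical size of |f_i - p_i| over many repetitions of N trials:
# theory predicts roughly sqrt(p(1 - p)/N), i.e. of order 1/sqrt(N).
random.seed(2)
p = 1 / 6

def typical_deviation(n_trials: int, repeats: int = 200) -> float:
    devs = []
    for _ in range(repeats):
        hits = sum(random.random() < p for _ in range(n_trials))
        devs.append(abs(hits / n_trials - p))
    return statistics.mean(devs)

for n in (100, 10_000):
    print(n, round(typical_deviation(n), 4), round(math.sqrt(p * (1 - p) / n), 4))
```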
Example 1: In Statistical Thermodynamics, the possible values of the oscillation energy of a gaseous diatomic molecule such as CO are fixed by Quantum Mechanics:

C = {e/2, 3e/2, 5e/2, ..., ne/2, ...}

where n is an odd integer and e a constant, the characteristic quantum of the CO molecule. The respective probabilities are determined by the Gibbs-Boltzmann theorem (Carter, 2001), i.e.:

pn = a e^(−ne/2kT)

with T the absolute temperature of the gas, k a universal constant and a a constant. If E designates the energy of molecular oscillation, then:

Em = e/2 + e/(e^(e/kT) − 1)

and this relationship is necessary.
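The closed form for Em can be checked against a direct Boltzmann-weighted sum over the levels ne/2 (the numerical values of e and kT below are hypothetical, chosen only for illustration):

```python
import math

# Boltzmann-weighted average energy over the odd levels n*e/2, truncated
# where the weights are negligible, versus the closed form.
e_q, kT = 1.0, 2.0  # hypothetical units
odd_n = range(1, 2001, 2)
weights = [math.exp(-n * e_q / (2 * kT)) for n in odd_n]
Z = sum(weights)
E_mean = sum((n * e_q / 2) * w for n, w in zip(odd_n, weights)) / Z

E_closed = e_q / 2 + e_q / (math.exp(e_q / kT) - 1)
print(round(E_mean, 10), round(E_closed, 10))  # the two values agree
```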
In a diatomic gas containing 10^20 molecules there are 10^20 copies of the oscillatory movement, and as regards the oscillation energy the equality will hold to within about 10^−10. No measurement can reach such precision. Microphysical corpuscles populate non-biotic matter to such an extraordinary degree that it is precisely there that the Law of Large Numbers applies with the most excellent precision.
6 Ontological Necessity of the Probabilistic Laws
When speaking of "statistical laws" or "statistical regularities", one must understand a particular experimental shortening of the theoretical and abstract infinite required by the Law of Large Numbers, in which statistical and probabilistic magnitudes are roughly identified. Neopositivist philosophy denies or disparages the necessity of statistical laws, which would be better called probabilistic laws, since necessity properly belongs to the transfinite, i.e., to probability. Nevertheless, in confronting experience, a probabilistic theory and a theory of necessity are both conceptual reconstructions of the object, and with regard to the potential infinite in each theory, reality makes the rejection concrete.
Example 2: It is not impossible that gas molecules occupy half of a bowl and leave
the other half empty: it would be a chance result of their autonomous movements, but in
the middle of 2 N equipossible coincidences if there are N molecules, this means having
a degree of possibility 21N , that is a number fantastically small. Among p = 0 (impossible)
and this value, the difference is very small but not zero. In addition, the distribution of
molecules between the two halves of the container is not fixed, but changes continuously and rapidly. The statistic is exercised over a multitude of these successive distributions, which are being followed without order. Now, by definition, a thermodynamic
process is microscopic, that is, it is inherent to the whole of an enormous number of
corpuscles. How can we observe the gas using our senses and measuring devices? A
manometer, with the temperature being fixed indirectly, measures the number of molecules per volume unit present in their environment. However, it cannot follow the rapid
and disorderly variations of these molecules; it only provides a constant average, which
is the probabilistic average because the statistical average is not constant, and its small
variations cannot be recorded by the device. Thus, this magnitude belongs to traditional phenomenological thermodynamics and is experimentally identified with the (necessary) probabilistic averages established by Statistical Thermodynamics. During thermal processes those averages vary constantly, obeying (necessary) laws, while at equilibrium they remain constant. As regards the gas, at equilibrium the average number of molecules present in each half of the container is N/2 and is invariable. In the case of
Carbon dioxide, a gas denser than air, the gas may indeed settle in the bottom half of a
container if enough gas to half fill it is used, so care is needed. If the container
is closed and only contains CO2, then Brownian motion applies and an even spread of
gas molecules exists.
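The degrees of possibility invoked in Example 2 can be checked numerically. The following sketch (N = 100 and the variable names are our illustrative choices) computes them exactly for a modest N:

```python
from math import comb

# Example 2 in numbers: with N molecules moving independently, each of
# the 2**N left/right configurations is equipossible.
N = 100

# Degree of possibility that ALL N molecules occupy one given half:
p_all_one_half = 1 / 2**N            # fantastically small, but not zero

# Probability that exactly N/2 molecules occupy the left half:
p_balanced = comb(N, N // 2) / 2**N

print(p_all_one_half)
print(p_balanced)
```

Already for N = 100 the all-in-one-half configuration has possibility of the order of 10^-31, while the balanced distribution is the single most probable one; for thermodynamic N (of the order of 10^23) the contrast is incomparably sharper.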
Conclusion: Observing the spontaneous gathering of all the molecules within one half of the container, even for no more than 1/1000 of a second, is practically impossible at the macroscopic level. Yet it remains a possibility belonging to the autonomous microphysical movements and to their enormous (formally infinite) number of configurations, a possibility that we cannot take into account at the level of macroscopic thermodynamic phenomena. In addition, this is a precise example of the domination
of necessity over chance, showing the phenomenal emergence of necessity over a multitude of microphysical processes that are both fortuitous and necessary. In this way an equilibrium at rest emerges, the macroscopic gas equilibrium being established over a multitude of imperceptible movements. These microphysical possibilities must be taken into account in the average, and it is by this means, and only thus, that each plays its role and leaves its trace, which in our case is negligible.
Let us see another common point between the probabilistic theory and the theory of necessity. When we measure, we find in both theories what is metrologically fortuitous. Expelled from the rational construction of necessity, chance reappears in the quantitative experimental verification of that construction.
In thermodynamics this observable randomness is incomparably larger than the macroscopic residual randomness, and different from the probable statistical average. The macroscopic randomness, in the face of the microscopic randomness, leaves almost no trace, and in all cases is so small that it cannot be experimentally distinguished. However, the necessity expressed in non-probabilistic physical theories and the necessity that, by the Law of Large Numbers, is affirmed by probabilistic theory are shown to be identical. Both necessities are tested, and may be rejected, by experience with the same degree of accuracy.
For Monod (1971), the biosphere, including humanity, is purely and wholly contingent.
The entire generation of living matter from nonliving and from certain macromolecules
comes, according to this author, from pure contingency. Regarding microphysical random
perturbations, and of foreign influences on the cell, which alter the DNA and can cause
a general mutation, Monod saw this encounter as one between two series of very independent events. The idea would be fair without the word "very", whose significance is that these disturbances would manifest pure chance, only chance, and the blindest absolute freedom at the root of the prodigious edifice of evolution. The living being, although closed and protected against certain environmental stimuli, remains open to others, because without them, without metabolism, there would be no life. Moreover, this environment of organic matter is constantly crossed by penetrating microphysical corpuscles, produced by cosmic rays and terrestrial radioactivity, which matter made of atoms cannot stop, and every molecule may be disturbed by them. There is undoubtedly a cosmic necessity of random DNA alterations. Proof of such necessity is that within a population of a few billions
of bacteria there are one hundred thousand to one million mutants. Quantitatively that number appears random, but its existence and order of magnitude are clearly necessary, and its randomness is attenuated almost to the point of disappearing when the population becomes very large. Monod explains that this is within the realm of the most implacable necessity,
because this operates at the scale of the organism, which is macroscopic, and of the population on which selection pressure is exerted, which retains only a fraction of the astronomical number of coincidences that the roulette of Nature offers. It is such a necessity, for Monod,
that provides the governing guidelines for evolution. The domination of necessity over
chance in evolution cannot be expressed in more relevant terms.
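The attenuation of randomness with population size that this argument invokes can be sketched quantitatively. In the following illustration, the per-cell mutation probability is a hypothetical value chosen for the example, not a measured rate:

```python
# Sketch of necessity emerging from chance: if each bacterium mutates
# independently with probability mu, the mutant count is binomial with
# mean mu*population, and its relative fluctuation (std/mean) shrinks
# like 1/sqrt(mu*population).
mu = 1e-4  # assumed per-cell mutation probability (illustrative only)

def relative_fluctuation(population):
    """std/mean of the binomial mutant count in a population."""
    mean = mu * population
    std = (population * mu * (1 - mu)) ** 0.5
    return std / mean

for pop in (10**6, 10**9, 10**12):
    print(pop, mu * pop, relative_fluctuation(pop))
```

For a population of a few billions the expected count is of the order of a hundred thousand mutants, with a relative fluctuation well below one percent: the individual mutation is random, but the order of magnitude of the count is necessary.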
7 Ontological and Epistemological Indeterminism
The problem of determinism versus indeterminism or chance has implications that go
beyond physical approaches. It involves philosophical problems concerning human understanding: we refer to the concept of free will, over which much ink has been spilled since the time of Ancient Greece. Moreover, it is not a trivial problem. Economic and social theories revolve around this concept, epistemologies are linked to the belief in free will and, of course, the whole Semiosphere is involved.7 To address the problem we will need
to understand clearly the difference between determinism and indeterminism.
For James (2010), the central idea is that indeterminism is valid if there is no portion of reality containing all the information concerning the whole. Here we meet the associated problem of the local and the non-local. That is, each event contains information that is not encoded anywhere else. This means that there is no nontrivial algorithm with which to calculate the entire universe using all the information on any proper subset of the universe. From a subset belonging to reality, it is not possible to construct an algorithm that
can generate the whole of ontological reality. Specifically, there is no equation that can be
inserted into a computer to generate the entire wave function of the universe using the values of the wave function on any proper subset of its domain. Moreover, this also holds locally. This indeterminacy is a property of all
those quantum cosmological theories for which the domain of the wave function of the universe includes the set of all compact four-dimensional manifolds. Therefore, indeterminism is true in both the quantum cosmological theory of Hawking-Hartle (Halliwell, 1991)
and the more modern and widely discussed Omega Point theory (Tipler, 1994). However,
perhaps in the case of the Hawking-Hartle cosmology there is a case for an epistemological
and not an ontological indeterminacy. Then:
(a) By epistemological indeterminacy, we understand that indeterminism is in human knowledge, but not in ontological reality. It is a semiotic indeterminacy.
(b) By ontological indeterminacy, we understand indeterminacy to be irreducibly in ontological reality. This variety of indeterminacy is in the universe itself and has nothing to do with the knowledge that the subject has concerning reality. It is a non-semiotic indeterminacy.
James (2010), in referring to determinism, is talking about ontological determinacy. Confusing the ontological with the epistemological has led many authors astray: they maintain that the universe is epistemologically indeterminate, and from this they deduce that it is also ontologically indeterminate. This is not true: classical chaos implies that the classical universe is ontologically deterministic but epistemologically indeterministic, because the equations that govern it are deterministic. That is, although no one could predict what would happen in the long term, despite the precision reached in
7 The semiosphere is the semiotic space outside of which semiosis cannot exist. The ensemble of semiotic formations functionally precedes singular isolated language and becomes a condition for the existence of the latter. Without the semiosphere, language not only does not function, it does not exist. Organisms create the signs which become the constituent parts of the semiosphere. This is not an adaptation to the existing environment, but the continuous creation of a new environment. Kull (1998) believes that it is possible to accept Hoffmeyer's view (1996) as an analogy to the concept of an ecological niche as it is traditionally used in biology, so that a community develops according to the semiotic understanding of its processes.
the initial data, each of our actions would be strictly determined. In an ontological reality
conceived in this way, the human being would be inextricably linked to his future.
The argument in favor of indeterminism is the "No Classification of Four-dimensional Manifolds Theorem" (Freedman & Luo, 1989), which tells us that there is no algorithm that can list and classify all topological or differentiable compact four-dimensional manifolds without boundary. Furthermore, there is no general algorithm that can distinguish two manifolds, in particular decide whether they are different or the same. Here we meet a previously discussed principle of Leibniz. Therefore, from the value of the wave function of the universe on any given region of a certain dimensional range, it is not possible to generate, through any operative procedure, the rest of the wave function. Indeed,
the wave function would be different if the topologies were different, but one cannot tell whether two topologies are different (Principle of Identity of Indiscernibles), so one cannot tell whether two such wave functions are different; hence there is no calculable function encompassing the wave functions. Locally, this can be expressed by saying that, given the wave function on some open neighborhood U of the same dimensional range, there can be no universal equation or algorithm that defines on U′ a unique extension of the wave function given on U, where U′ is a region larger than U. To demonstrate this, we take into account that U′ can be modified by the artifice of removing a ball from U′ − U and identifying the boundary thus created with the boundary produced by cutting a similar ball from any other manifold. Since there is no general algorithm that establishes the identity of manifolds, there is no algorithm that indicates whether this has been done.
We must make the following distinction:
By a calculable equation we mean a finite difference equation, or another equation that can be solved in a finite number of steps; a non-calculable equation is one that cannot. That is, there is no difference between the universe being governed by a non-calculable equation and there being no equation at all. This hypothetical equation would be that of the wave function of the universe. To express an equation for the wave function of the universe in a mathematically meaningful way, we should first express the coordinate systems in which a four-dimensional metric would be defined on all four-dimensional manifolds. However, the No Classification Theorem implies that it is not possible to give these coordinate systems. If all these coordinate systems could be classified, then,
using the same method one could classify all manifolds, which is impossible. Then,
since there is no way to write the coordinate systems that cover the domain of the wave
function of the universe, one cannot write an equation for the wave function. However,
this does not mean it does not exist. The nonexistence of the equation of the universal
wave function is equivalent to the impossibility of expressing it by a finite number of
symbols, and has meaning from the mathematical point of view.
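The notion of a calculable equation can be illustrated by a minimal finite-difference sketch. The 1D diffusion equation stands in as the example here (the function name and parameter values are our illustrative choices, not anything from the text):

```python
# A "calculable equation" in the above sense: an explicit finite-difference
# scheme that advances a field on a compact (finite) grid in a finite
# number of steps, one arithmetic update per grid point per step.
def diffuse(u, alpha=0.2, steps=100):
    """Explicit update u_i += alpha*(u_{i-1} - 2*u_i + u_{i+1}),
    with the two boundary values held fixed."""
    u = list(u)
    for _ in range(steps):
        u = [u[0]] + [
            u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
    return u

profile = diffuse([0.0] * 10 + [1.0] + [0.0] * 10)
print(profile)  # the initial central spike has spread into a smooth bump
```

The point of the contrast drawn above is that such a finite recipe exists only once a coordinate system (here, a fixed grid) can be written down, which is precisely what the No Classification Theorem forbids for the wave function of the universe.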
Indeterminism in the theory of quantum gravity is different in nature from the indeterminacy that arises in non-relativistic quantum physics. In the latter case, the Schrödinger equation governing the time evolution of the wave function is calculable if the wave function is restricted to a compact region; hence any indeterminism in non-relativistic quantum mechanics arises from our limited way of observing the world. On the contrary, the indeterminism that occurs in quantum gravity is irremovable and ontological in its foundation. It comes from Gödel's incompleteness theorem (Boolos & Jeffrey, 1974). Despite this, one should not forget that the most basic level of
reality is not made up of manifolds of three or four dimensions. The current theory
of gravitation states that it is, but it may well be wrong. If so, it may be that the most fundamental level of reality is described by much simpler
Fig. 2 The concept of chance. Legend: dashed lines, purely logical (epistemological) links; solid lines, links that are also ontological; N necessary, C contingent, G general, I individual, O ordered, D disordered.
mathematical expressions, which mathematics merely relates to manifolds. If they were simple enough, Gödel's theorem could not be applied, which could restore determinism. If the current theories of quantum gravity are successful, there is indeterminacy of origin, in the sense used by James, in each spatio-temporal region.
8 Conclusions
We summarize the concept of random proposed in the following scheme (Fig. 2).
In fact, chance and indeterminism are nothing but a disorderly efficiency of contingency in the production of events, phenomena, processes, i.e., in its causality, in the
broadest sense of the word. Such production may be observed in natural processes
(mountains, rivers, etc., on Earth) or artificial processes in factories, laboratories, or in
human social processes (in history, economics, society, politics, etc.). Here we touch the
object par excellence of all scientific research whether natural or human. The impotence
of the natural sciences is a major problem, due undoubtedly to the refusal to think, to challenge positivist philosophy, and to the naive illusion that Science and Philosophy can be separated by a sealed wall. It is necessary, therefore, to resume the problem of chance and necessity from the beginning.
Acknowledgements The authors appreciate the contributions made by the reviewers with the aim of improving
the quality of the article, especially those provided by "reviewer 2".
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Barrow, J. D., & Tipler, F. J. (1986). The anthropic cosmological principle. Oxford University Press.
Boolos, G., & Jeffrey, R. C. (1974). Computability and logic. Cambridge University Press.
Brown, G. (1987). Compossibility, harmony, and perfection in Leibniz. The Philosophical Review, 96(2), 173–203.
Carter, A. H. (2001). Classical and statistical thermodynamics. Prentice-Hall Inc.
Cournot, A. A., Robinet, A., & Bru, B. (1984). Exposition de la théorie des chances et des probabilités. Oeuvres complètes, Cournot 1. Bibliothèque des textes philosophiques. Paris: J. Vrin.
Feller, W. (1971). An introduction to probability theory and its applications. Wiley.
Freedman, M. H., & Luo, F. (1989). Selected applications of geometry to low-dimensional topology. American
Mathematical Society.
Halliwell, J. J. (1991). Introductory lectures on quantum cosmology. In S. Coleman, J. B. Hartle, T. Piran, & S. Weinberg (Eds.), Proceedings of the Jerusalem Winter School on Quantum Cosmology and Baby Universes. World Scientific.
Hegel, G. W. F. (2010). Science of logic. Cambridge University Press.
Hoffmeyer, J. (1996). Signs of meaning in the universe. Indiana University Press.
Ismael, J. (1997). Curie’s principle. Synthese, 110, 167–190.
James, W. (2010). The dilemma of determinism. Kessinger Publishing.
Kahneman, D. (2012). Thinking, fast and slow. Penguin.
Kull, K. (1998). On semiosis, umwelt, and semiosphere. Semiotica, 120(3/4), 299–310.
Landau, L. D., & Lifshitz, E. M. (2011). Statistical physics, Part 1 (3rd ed.). Course of Theoretical Physics, Volume 5. Translated from the Russian by J. B. Sykes and M. J. Kearsley. Oxford: Elsevier.
Leibniz, G. W. (1991). La Monadologie. Edition établie par E. Boutroux. Paris: LGF. (In French).
Monod, J. (1971). Chance and necessity: An essay on the natural philosophy of modern biology. Knopf.
Mudde, C., & Rovira Kaltwasser, C. (2012). Populism in Europe and the Americas: Threat or corrective for democracy? Cambridge University Press.
Nescolarde-Selva, J., & Usó-Doménech, J. L. (2012). An introduction to Alysidal Algebra III. Kybernetes,
41(10), 1638–1649.
Nescolarde-Selva, J., Usó-Doménech, J. L., & Sabán, M. J. (2015a). Linguistic knowledge of reality: A metaphysical impossibility? Foundations of Science, 20(1), 27–58.
Nescolarde-Selva, J., Usó-Doménech, J. L., Alonso-Stenberg, K. 2015b. Chapter 6: An Approach to Paraconsistent Multivalued Logic: Evaluation by Complex Truth Values. In New Directions in Paraconsistent
Logic. Kolkata, India. Springer, pp. 147–163.
Parzen, E. (1960). Modern probability theory and its applications. John Wiley and Sons.
Ross, Sh. (2009). A first course in probability (8th ed.). Prentice Hall press.
Tipler, F. J. (1994). The physics of immortality: Modern cosmology, God and the resurrection of the dead. Doubleday.
Usó-Domènech, J. L., & Nescolarde-Selva, J. A. (2012). Mathematic and semiotic theory of ideological systems: A systemic vision of the beliefs. Saarbrücken: LAP LAMBERT Academic Publishing.
Usó-Doménech, J. L., Nescolarde-Selva, J. A., & Pérez-Gonzaga, S. (2014). Truth values in t-norm based systems many-valued fuzzy logic. American Journal of Systems and Software, 2(6), 139–145.
Usó-Doménech, J. L., Nescolarde-Selva, J., Pérez-Gonzaga, S., & Sabán, M. J. (2015). Paraconsistent multivalued logic and coincidentia oppositorum: Evaluation with complex numbers. American Journal of Systems
and Software, 1(3), 1–12.
Usó-Doménech, J. L., Nescolarde-Selva, J., & Belmonte-Requena, M. (2016). Mathematics, philosophic and
semantical considerations on infinity (I): General concepts. Foundations of Science, 21(4), 615–630.
Usó-Doménech, J. L., Nescolarde-Selva, J., Belmonte-Requena, M., & Segura-Abad, L. (2017). Mathematics,
philosophical and semantic considerations on infinity (II): dialectical vision. Foundations of Science, 22,
655–674.
Usó-Doménech, J. L., Nescolarde-Selva, J. A., Segura-Abad, L., & Sabán, M. (2019). Dialectical logic for
mythical and mystical superstructural systems. Kybernetes, 48(8), 1653–1680.
Usó-Doménech, J. L., Nescolarde-Selva, J. A., & Gash, H. (2022). Epistemological considerations about the
mathematical concepts. Kybernetes, 53(1), 95–115.
J. A. Nescolarde‑Selva graduated in Mathematics from the University of Havana (Cuba) in 1999. He won
the Gold Medal in Mathematics at the University of Havana, Cuba in 1999. He received his Ph.D. degree
in Applied Mathematics at the University of Alicante (Spain) in 2010. Since 2002, he has been working in
the Department of Applied Mathematics, University of Alicante, Spain. In 2016, he was appointed as a visiting
professor at the Northeast Normal University (Changchun, China). He is the author and co-author of several
papers in journals and books. His research is devoted to the Theory of Systems and Complex Systems.
J. L. Usó‑Doménech graduated in Sciences in 1968 and obtained a Doctorate in Mathematics at the University of Valencia (Spain) in 1991. Since 1991, he has been working in the Department of Mathematics, University of
Castellon, Spain. His doctoral thesis developed a mathematical model of a terrestrial Mediterranean Ecosystem. He is the author and co-author of more than a hundred papers in journals, proceedings, and books and
has directed many doctoral theses in Systems Theory. He was awarded research scholarships from several
universities including the University of Joensuu, Finland; the Universidad de La Plata, Argentina; the Universidad Tecnológica Metropolitana de Santiago, Chile; the Ben-Gurion University of the Negev, Israel; and the University of Georgia, USA. He has collaborated on numerous occasions with the Wessex Institute of Technology (UK)
and a result of this collaboration was the creation of the international congress ECOSUD. He is currently
retired.
H. Gash graduated from Trinity College Dublin with a degree in psychology and philosophy in 1969 and
obtained his doctorate in educational psychology at the State University of New York at Buffalo in 1974. He was a postdoctoral researcher at the University of Georgia with Charles Smock and then worked at St. Patrick’s College
Dublin until 2010. He has published extensively on educational applications of constructivism, details on
his work are shown on https://sites.google.com/dcu.ie/hughgashwebpage. He was awarded the title of Emeritus Associate Professor by DCU Governing Body in June 2017. He is a Fellow of the Irish Psychological
Society and an Emeritus Member of the Society for Research in Child Development. He is a member of the
board of the International Institute for Advanced Studies in Systems Research and Cybernetics.