Gottfried Wilhelm Leibniz, who holds the distinction of being the only philosopher to have a biscuit named after him, once expressed the hope that every philosophical dispute might be settled by calculation:
"The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right." [1]
It is not precisely clear whether Leibniz intended the calculus ratiocinator -- the reasoning engine that was to carry out such calculations -- to be an actual machine (after all, Leibniz was one of the pioneers of mechanical calculation, with his construction of the Stepped Reckoner) or merely an abstract calculus, a forerunner of modern symbolic logic -- whether it was software or hardware, so to speak.
For present purposes, however, the distinction is somewhat immaterial, and we can consider the ratiocinator in the way Hartley Rogers did, as
"an algorithm which, when applied to the symbols of any formula of the characteristica universalis, would determine whether or not that formula were true as a statement of science." [2]
In modern terms, one would perhaps regard the characteristica as a kind of formal system, and the ratiocinator, understood as above, as a decision procedure for said system.
But rather than dwell on precisely what this means, I'll just tell you the punchline: thanks to pioneering work in logic and the theory of computation undertaken mainly in the early decades of the 20th century, and most notably through the works of Kurt Gödel and Alan Turing, we now know that this can't work. In general, for a sufficiently powerful formal system, there exists no algorithmic procedure that can decide, in a finite amount of time, whether an arbitrary statement is true or false.
It is this failure of Leibniz' dream that this blog will be most concerned with, though often in a hidden and somewhat roundabout way. This failure, far from being a crushing defeat to the search for reason and comprehensibility in the universe, actually opened up the door to some of the most fascinating developments and ideas of the past 100 years -- and perhaps, of the entirety of human history.
I am not merely talking here about things like the celebrated incompleteness theorem, due to Gödel, though that is certainly a big part of it, but rather of the more general cluster of phenomena centered roughly around notions of self-reference, such as computational universality, self-organizing systems, and criticality -- phenomena that occur along the so-called 'edge of chaos', the fine line that separates systems that are boring because their behaviour is simple, repetitive and predictable, from systems that are boring because their behaviour is random and patternless. It's along this line that most interesting things happen.
A very simple example -- though by no means the most interesting one -- is furnished by the behaviour of so-called cellular automata. A (one-dimensional) cellular automaton is a kind of game -- a zero-player game that 'plays itself' -- consisting of a row of cells, each of which can be either on (usually denoted by painting the cell black) or off (leaving the cell white). Based on the pattern of on and off cells in the row and a simple rule characteristic of the automaton, the next row is calculated, then the next one from that, and so on. One possible rule, for example, would be that a cell is black if in the previous step either of its immediate neighbours was black; otherwise, it is left white.
There are 2³ = 8 possible patterns a cell and its immediate neighbours can produce (writing 1 for black and 0 for white, these are 000, 001, 010, 011, 100, 101, 110, 111), and since each rule assigns to each pattern either a black or a white cell in the following line, there are 2⁸ = 256 different elementary cellular automata. Thus, a number between 0 and 255 uniquely identifies each such automaton. This number is called its Wolfram code, after physicist and computer scientist Stephen Wolfram; much (much...) more about cellular automata and similar systems can be found in [3].
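To make the numbering scheme concrete, here's a minimal sketch in Python (my own toy illustration, not Wolfram's code) that evolves an elementary cellular automaton directly from its Wolfram code; it uses wrap-around boundaries, which is one choice among several:

```python
import random

def step(cells, rule):
    """Advance an elementary cellular automaton one step, given its Wolfram code (0-255)."""
    n = len(cells)
    new_cells = []
    for i in range(n):
        # Read the three-cell neighbourhood (wrapping around at the edges).
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # 0..7, i.e. 000 to 111
        # Bit number `pattern` of the Wolfram code gives the new cell's colour.
        new_cells.append((rule >> pattern) & 1)
    return new_cells

def run(rule, width=64, steps=30):
    """Print the evolution from random initial conditions: '#' for black, '.' for white."""
    cells = [random.randint(0, 1) for _ in range(width)]
    for _ in range(steps):
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells, rule)

run(110)   # try 50 or 45 to reproduce the behaviours discussed below
```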
A typical example of a cellular automaton's evolution is this one, with Wolfram code 50 (all CA pictures were generated with Wolfram|Alpha, by the way -- just type in 'rule' followed by a number if you fancy some experimentation):
Starting from random initial conditions, i.e. a random assignment of black and white cells, very quickly a boring pattern emerges: there is no point in tracing its evolution any further, nothing 'new' is going to happen.
At the other end of the spectrum, there are automata like this one (rule 45):
No apparent patterns emerge; the evolution is totally chaotic.
But between these extremes lies the edge of chaos, with automata like rule 110, which at first looks rather chaotic, like rule 45 above:
But, if we follow it for a little longer, it begins to exhibit unexpected features:
As can clearly be seen, certain localized structures emerge and persist, interacting occasionally. This automaton will neither repeat, nor decay into complete chaos -- rather, it will stay in that magical in-between zone, where interesting things happen. And in fact, in this case, there are more interesting things about rule 110 than meets the eye -- surprisingly, it is computationally universal, which means that one can use it to carry out any computation that can be carried out at all. Quite a feat for a simple coloring game!
Thus, the failure of Leibniz' dream is nothing to despair about -- in fact, had his dream proven realizable, we would live in a rather boring universe: everything would be predictable through rote calculation, through mindless manipulation of symbols. This way, at every corner, genuine novelty awaits; there is always potential for surprise and amazement in every area of study, or of life.
However, one must be careful not to rashly conclude that the universe is lawless, or that its fundamental nature (whatever that may mean) is unknowable. Indeed, the cellular automaton example blows such speculation out of the water: it is certainly a completely lawful, even deterministic, system, and the law it follows is simple enough. Nevertheless, certain questions about its evolution can't be answered by algorithmic means.
In the quest to explain the phenomena of the natural world, one often encounters two seemingly contrary stances: on the one hand, those who have a certain yearning for order and analysis, for getting to the bottom of things and solving the mysteries of nature; on the other, those who feel a loss in tearing away the veil and exposing the magician's tricks, who revel in mystery, and to whom the label 'inexplicable' is not a sign of defeat but rather the reassuring glimpse of something beyond human reason. To the latter, the former often seem cold and sterile, and in discarding anything not measurable, they seem to throw out everything that makes life worth living; the former, in turn, often think of the latter as somewhat naive, even delusional, and see a weakness in the need to 'sugarcoat' objective (and sometimes, perhaps, harsh) realities with mystery and magic.
The failure of Leibniz' dream does not mean victory for the latter group of people. Rather, it means something far more interesting: namely, that the perceived dichotomy between lawfulness and mystery simply doesn't exist! A system's fundamental dynamic may be completely known, and even boring, but nevertheless, new phenomena emerge everywhere, bringing with them all the surprise, amazement and wonder one could hope for (without being overly greedy). Granted, it's not obvious through looking at cellular automata -- it takes a special sort of person to be continually amazed by little black and white squares. But bear with me.
Dramatis Personae (Introducing the Main Characters)
You've probably noticed the (perhaps somewhat overwrought) artwork in the header of this blog. This isn't just something I thought looked pretty; rather, it is composed of parts that each have some connection to the things I plan to write about. They're characters in Leibniz' sense, if you will, forming our own mini-characteristica. Every new blog post will be prefaced by one or more of them, indicating what it is going to be about -- a sort of visual tag. First, for easy reference, here's the picture again, reduced to the functional parts:
We'll start on the left. The spherical object you see there is actually composed of several spheres on different levels, plus some more decoration. In no particular order:
Quantum Mechanics: This is a representation of the state space of a quantum bit, or qubit for short, known as the Bloch sphere. A qubit, as the name indicates, is the quantum version of the fundamental unit of information, the bit (short for binary digit). Unlike its classical counterpart, it is not restricted to being strictly either 0 or 1 -- rather, it can enter into superpositions of both possibilities. Every point on the sphere's surface represents such a superposition. This capability is essentially what enables a quantum computer to accomplish certain tasks much faster than any classical one ever could. In a sense, the qubit can also be thought of as standing for the most basic quantum mechanical system, a so-called two-level system. No matter its actual physical realization, its state space will always be isomorphic to the picture. I will use this symbol to denote posts concerned chiefly with quantum mechanics.
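For the mathematically inclined, here is a tiny sketch (plain Python/NumPy, no quantum computing library assumed) of how a qubit state α|0⟩ + β|1⟩ maps to its point on the Bloch sphere, namely via the expectation values of the Pauli operators:

```python
import numpy as np

def bloch_vector(alpha, beta):
    """Map a normalized qubit state alpha|0> + beta|1> to its point on the Bloch sphere."""
    # The Cartesian coordinates are the expectation values of the Pauli operators.
    x = 2 * (alpha.conjugate() * beta).real   # <X>
    y = 2 * (alpha.conjugate() * beta).imag   # <Y>
    z = abs(alpha)**2 - abs(beta)**2          # <Z>
    return np.array([x, y, z])

print(bloch_vector(1 + 0j, 0 + 0j))                  # |0>: north pole, [0, 0, 1]
print(bloch_vector(0 + 0j, 1 + 0j))                  # |1>: south pole, [0, 0, -1]
print(bloch_vector(1 / np.sqrt(2), 1 / np.sqrt(2)))  # equal superposition: on the equator, [1, 0, 0]
```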
Holographic Principle: On the next 'level', the sphere painted with (classical) bits represents the Bekenstein bound, or more generally, the holographic principle. Faced with the puzzle of where the information -- or more accurately, the entropy -- of matter falling into a black hole goes, Jacob Bekenstein made the quite surprising discovery that there is a maximum amount of entropy that can be 'stored' within a given region of space, and that black holes saturate this bound (they were previously thought to be rather low-entropy objects). Interestingly, that maximum amount turns out to be proportional to the area of the region's bounding surface, rather than to its volume. Thus, it was conjectured that the information about the matter that has fallen into a black hole is stored on the black hole's surface, its event horizon, much the same way three-dimensional pictures are encoded on two-dimensional surfaces using holography -- this is, essentially, the so-called holographic principle. I will use this icon whenever posts deal chiefly with holography, entropy, or things like the so-called AdS/CFT correspondence (an explicit realization of holography in string theory).
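To get a feel for the numbers involved, here is a quick back-of-the-envelope sketch (standard textbook formulas and rough constant values, nothing more) of the Bekenstein-Hawking entropy S = k_B A c³ / (4Għ) for a solar-mass black hole:

```python
import math

# Rough values of the physical constants (SI units)
G     = 6.674e-11    # gravitational constant
c     = 2.998e8      # speed of light
hbar  = 1.055e-34    # reduced Planck constant
k_B   = 1.381e-23    # Boltzmann constant
M_sun = 1.989e30     # solar mass in kilograms

def bekenstein_hawking_entropy(M):
    """Entropy S = k_B * A * c^3 / (4 * G * hbar) of a Schwarzschild black hole of mass M."""
    r_s = 2 * G * M / c**2              # Schwarzschild radius
    A = 4 * math.pi * r_s**2            # horizon area
    return k_B * A * c**3 / (4 * G * hbar)

S = bekenstein_hawking_entropy(M_sun)
bits = S / (k_B * math.log(2))          # the same entropy, counted in bits
print(f"S ~ {S:.1e} J/K, or roughly {bits:.1e} bits on the horizon")
# of order 10^77 bits -- and proportional to the horizon's area, not the enclosed volume
```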
Information Theory: Moving on, the strange patterns on the sphere's surface, which already look somewhat familiar from the earlier pictures in this post, are indeed related to cellular automata -- or, more specifically, to a particular automaton known as Conway's Game of Life, or just Life among the cognoscenti. Unlike the previous examples, Life is a two-dimensional automaton; like rule 110 above, it is also computationally universal. The pictures are examples of a particular structure known as a glider: a pattern that, after a few steps of evolution, returns to its original configuration, having moved itself a few cells across the grid in the process. Here's an animation of a glider in motion. A Java implementation of Life can be found here. This icon will mainly preface posts dealing with information theory and related subjects.
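If you want to see the glider for yourself, here is a minimal sketch of Life in a few lines of Python (my own toy version, not any particular implementation): after four generations, the glider pattern reappears, shifted diagonally by one cell.

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 live neighbours, or 2 and was already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# The glider: .#. / ..# / ### (x to the right, y downward).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
# After four generations the same shape reappears, shifted by one cell diagonally.
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True
```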
Particle Physics: This little graph, painted across the gliders, is (part of) a so-called Feynman diagram. Feynman diagrams are a pictorial way of representing elementary particle interactions -- the somewhat tongue-in-cheek implication here being that the gliders on the sphere could be seen as some kind of 'elementary particles' in a cellular automaton world. More specifically, this is the so-called QED vertex -- an electron comes in, emits a photon, and moves on. This is the basic interaction of the theory of quantum electrodynamics, which is the quantum theory accounting for all electromagnetic phenomena -- which includes almost all the phenomena we experience in our everyday lives (the phenomena related to gravity would be the other big category). This icon will stand more generally for quantum field theory and particle physics.
General Relativity/Spacetime: This picture is probably familiar to most -- a spherical mass, together with a schematic representation of its gravitational field. Thanks to Einstein and his general theory of relativity, we know that what we call the gravitational force really is an effect of the geometry of spacetime. As John Wheeler put it, "mass tells space-time how to curve, and space-time tells mass how to move". This icon will stand for general relativity in particular, but also for theories and considerations on the nature of spacetime in general.
Emergence: This picture, looking somewhat like a depiction of magnetic field lines, is actually a schematic representation of the momentum-space topology of a condensed matter system known as a Fermi liquid (or, to be more exact, a certain universality class thereof). Let's take this slowly: fermions are elementary particles whose spin (roughly, a quantum mechanical analogue of angular momentum) can take only half-integral values, e.g. 1/2, 3/2, and so on. They obey the so-called Pauli exclusion principle, which means that no two of them can be in the same state at the same time. Thus, if you pack a bunch of them together, the available energy levels get filled gradually, such that below a certain energy, all levels are filled, and above it, none are. This boundary between occupied and unoccupied states -- called the Fermi surface -- is what the sphere in the picture represents.
Typically, the fermions in such a system will be coupled to each other, as in this system of mass points and springs. Each disturbance of the system will thus propagate, like ripples on a pond. These ripples will in turn have a certain energy, and it is their energy the 'field lines' symbolize. In some systems, the ripples will have an energy some fixed amount 'above' the Fermi energy boundary -- these are called fully gapped systems. However, more interesting things happen in systems like the one depicted, which exhibit so-called Fermi points, where the energy of the excitations vanishes. The story is a bit longer and more complicated, but as it turns out, the excitations in the vicinity of these Fermi points very much resemble the elementary particles of our universe! I will use this icon to stand for condensed matter physics specifically, or the notion of emergence in general.
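As a toy illustration of the level-filling described a paragraph ago (nothing more than the Pauli principle at work, not actual Fermi liquid theory), consider packing fermions into a ladder of single-particle energy levels:

```python
import numpy as np

def fermi_energy(levels, n_fermions):
    """Fill single-particle levels from the bottom up (two fermions per level, spin up and
    spin down, as the Pauli principle allows) and return the highest occupied energy."""
    filled = []
    remaining = n_fermions
    for energy in np.sort(levels):
        if remaining <= 0:
            break
        filled.append(energy)
        remaining -= 2    # each level accommodates at most two fermions (opposite spins)
    return filled[-1]

# Toy example: twenty equally spaced levels, ten fermions.
levels = np.arange(1, 21, dtype=float)
print(fermi_energy(levels, 10))   # -> 5.0: levels 1..5 are filled, everything above is empty
```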
Philosophy: Moving on to the other side, the first thing that leaps to attention is a silhouetted head with a blank where the brain ought to be. However, this should not be taken to suggest mindlessness! Quite to the contrary, the white space is more analogous to an empty thought bubble, the form of a thought, waiting to be filled with content. It is the state of mind before thought: ready and anticipating. This icon will stand for philosophy, and especially for the philosophy of mind.
Computation: This complicated apparatus is a part of Charles Babbage's analytical engine, the planned, but never completed, successor to his difference engine, an early automated calculating machine. Broader in scope and capacity than the difference engine, the analytical engine was the first machine to be computationally universal -- or rather, it would have been, had it ever been built. It was fully programmable -- in fact, a program to calculate Bernoulli numbers using the analytical engine, written by Lady Ada Lovelace, daughter of Lord Byron (yes, the first programmer was a woman), still survives. As a depiction of the first computer, this icon will stand for computation, in theory and in practice.
Updates/Internal Matters: Finally, the inkblot represents -- well, an inkblot. Ink spilled from a carelessly handled quill. Nothing more to it.
Well, that's not quite true, perhaps. In fact, there's an interesting story about inkblots and Leibniz that I would be remiss not to relate here. In his Discours de métaphysique [4], Leibniz asks the reader to imagine a page that has been spattered with ink. He then notes that there is always a curve that passes through these points -- even though their distribution is completely random. It seems thus that merely having a mathematical description of something does not imply that it actually follows some rule. But then, how should one tell 'lawful' from 'lawless' systems?
Leibniz' resolution is roughly the following: if there is nothing that can be gained from describing the set of randomly distributed points through a mathematical law -- i.e. if the purported law itself is as complicated as the spatter-pattern -- then there is no merit in the mathematical description; the pattern is effectively random. This foreshadows important concepts that we will meet again on this blog, such as algorithmic information theory and Kolmogorov complexity.
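A crude but instructive way to make Leibniz' criterion quantitative is to use the length of a compressed description as a stand-in for the complexity of a pattern -- a poor man's Kolmogorov complexity, so to speak. The sketch below just uses Python's zlib; the names and the example data are mine:

```python
import random
import zlib

def approx_complexity(data):
    """Compressed length in bytes -- a crude, computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data))

regular = b'01' * 500                                         # a highly 'lawful' 1000-byte pattern
random.seed(0)
spatter = bytes(random.getrandbits(8) for _ in range(1000))   # Leibniz's ink spatter, so to speak

print(approx_complexity(regular))   # only a handful of bytes: the pattern admits a short description
print(approx_complexity(spatter))   # about 1000 bytes: no law much shorter than the data itself
```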
For the moment, however, this particular inkblot is just an inkblot; I will use it simply to denote posts that feature updates, personal notes, or any other flotsam and jetsam that does not fit with the things I am planning to post, but nevertheless want to write about.
Dénouement
If all of this seemed a bit much, don't worry -- consider this post more as an extended table of contents; it's meant just to name, not to fully explain, the topics and themes this blog will revolve around. I hope what little explanation I gave was sufficient to convey at least a taste of things to come, so to speak.
Also, if this post appealed to you at all, then there's a good chance forthcoming ones may, too; I will naturally go into greater depth on each subject in the posts dealing with it explicitly, but on the whole, I plan neither to get overly technical, nor to distort the subject in an attempt to spoon-feed you convenient soundbites. This is a fine line to straddle -- a bit like the edge of chaos, again -- and I'm very open to any and all feedback, or further questions.
The main aim of this blog is to collect and present interesting ideas, some of which are just now developing at the vanguard of science (like the holographic principle), some of which already have been around for a while, sometimes longer than is generally believed (like computation), all with an eye towards exploring the common currents underlying them.
However, everything you read here should be regarded the same way you should regard anything you read on the internet in general -- skepticism is always healthy, and especially warranted where my own views diverge from the mainstream, or where I discuss my own ideas; I will always strive to point out such cases.
I'll be somewhat more 'formal' than most blogs in that I'm going to source things -- i.e. for certain claims or quotes, point out where they are taken from, as in the 'References' section below. This isn't done out of academic stiffness -- the reason is merely that if you consider something I write interesting, you won't have to dig around looking for resources to read more about it. Whenever possible, I'll include a link to texts available online.
I think that's about all I wanted to say in this introduction -- well, actually, it's rather more than I originally planned to say. I had not set out to write about cellular automata at all, for instance. This is characteristic of systems on the edge of chaos: they tend to develop their own momentum. Let's hope it will lead to interesting things.
References:
[1] Leibniz, Gottfried Wilhelm, The Art of Discovery (1685); Wiener 51
[2] Rogers, Hartley, Jr., An Example in Mathematical Logic, The American Mathematical Monthly, Vol. 70, No. 9 (1963), pp. 929–945
[3] Wolfram, Stephen, A New Kind of Science, Wolfram Media, Inc. (2002), ISBN 1-57955-008-8
[4] Leibniz, Gottfried Wilhelm, Discourse on Metaphysics and the Monadology (1686), parts V and VI; online text