Friday, April 21, 2023

Yet Another Weary Blissful Dawn…

 


Synopsis:

Our past has shaped the nature of humanity into something ill-adapted to modern life and even less adapted to a viable long-term future.  If we do not adjust to changing realities, we are doomed as a culture, and even as a species.  Instead of making serious progress towards long-term viability as a species, we have survived one futile, illusory liberation after another.  The fundamental problem lies in those aspects of our nature that render our societies dependent on competition, money, and violent conflict between our personal values and rival values.  All our most effective structures of society have been adversarial, to the point where adversarial competition has come to be seen as a virtue, in spite of its patent elements of counter-efficiency.  To outgrow this juvenile phase in the development of our species, we need not so much a change of human nature as a change of emphasis on key components of human nature.  The first stages of the changes could largely be attained through education rather than simplistic technology.  The objective would be a community with as much drive and creativity as the best of free enterprise, and as much efficiency and selflessness as an ant colony — or more.  

________________________

 

Any system, including any ethical system,
that by its own nature implicitly strives against its own success,
whether it succeeds or not, deserves not to survive;
natural selection militates against it.
Anonymous

As a species we are perpetually in demanding and uncertain transition.  We have outgrown the economics of our origins and have yet to develop economics that will remain viable in our future.  We arose from a scattering of quarrelsome family groups and we never raised ourselves above the mindsets that evolved to sustain us for our first hundred thousand years or so of modern humanity.  For perhaps twenty thousand years, those mindsets have driven us repeatedly to attempts at piecemeal destruction of our kin and ourselves, repeatedly to be frustrated and rescued by our own lack of vision and our technological impotence. 

 These juvenile incapacities we are outgrowing. 

 We now face transits more demanding than in the past.  Stumble, and humanity will die wholesale within the next thousand years or so, a dingy palaeontological incident in a solar system with most of its life behind it. 

 The trouble is that our inborn and rationalised values, vestiges from our past, entail conflicts of interest that threaten our future.  Our minds' naïve desires conflict with those of our bodies; our bodies' with those of our families, our parents, and our descendants; our families' with those of our communities; our communities' with those of our nations …

 To see where our Darwinistic past threatens to lead us in the future, examine our political history.  We desire to be personally powerful, or to be led by the powerful.  The effect has been to leave us hostage to thugs, mobs and parasites.  Blissful dawns came and went throughout history whenever particularly spectacular parasites made way for others; localised flickers of hope as some well-meant power structure arose around a leader, a group or a family, only to be snuffed out by the decadence or greed of our newly all-powerful demagogues, or of their successors, or by invaders.  It is not bombs and gases we need fear, so much as those relative timings of our future transitions of power.  

 Imagine what would have happened if Hitler or Stalin or Hideki Tojo or even Mao had got the bomb before the western powers did.  If the globalisation of technology pre-empts the civilisation of our spirit, we are in for a rough ride; probably a short ride at that.  As long as control of our communities goes to the power seekers, and as long as power corrupts, we are doomed to dawn after false dawn until one bloody dawn too many proves false. 

 Until we manage to shed this tribalistic village-tyrant fixation, things look bad.  Mentalities hobbled by peasant vices certainly can lead the world, but they generally lead downhill and mainly to the short-term gratification of themselves and their Orwellian guard dogs and sheep.  Stalin and Mao demonstrated this convincingly, if so far futilely.  The virtues we need are more like the virtues of the ant nest or of H G Wells' Selenites than of Louis XIV, Hitler, or the followers of either.  It is depressing to observe that the Politically Correct regard communities, even fictional communities, in which clashes of interest are eliminated, with greater horror than communities in which clashes of interest drive everything and defile and destroy everything. 

 To be sure, we are not ants, nor were the Selenites.  But neither are we Gadarene swine — and it does not follow that in abominating the automatism of the one, we must make a virtue of rushing to our own destruction like the other.  Fashionable cant states either that laissez-faire policy in an adversarial social system is the best option possible, or that we must drop all alternatives in enforcing some particular religious or political dogma.  

 And yet this dread of ant-nest politics is as needless as it is futile.   As a species we now have nearly everything we need for breaking out of our biological mould and constructing a culture to bear us through the next billion years or so.  We need not forfeit our nature to achieve civilisation; we need only develop it selectively.  What humanity needs is a spirit in which the leader (if such a thing would still exist) would die rather than betray a follower, but would sacrifice a follower rather than the community.  A follower, in turn, would die rather than betray a leader or a fellow, but would sacrifice either rather than the community. 

 Note that I do not say that we should act that way as a matter of abstract principle, but as a matter of personal values.  That is simply the way we should feel and act, not the way we should interpret our duty.  If we achieve that, then we need change nothing else; the rest would follow.

 The germs of such altruism exist within us already, though they certainly could do with a bit of nurturing and steering.  The problem is to make them dominant over our memes of self-interest and to protect the community from external parasites that would otherwise exploit the goodwill of the rest.  In short, the problem is to build the healthy community into an ESS, that is, an evolutionarily stable strategy: a structure that would be resistant to invasion by rival strategies.  Simplistic altruistic communities, for example, tempt parasites to exploit the generosity of their fellows, so they are not ESSs, but are susceptible to parasitism.  
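
 To make the idea of an ESS concrete, here is a minimal executable sketch of my own, in Python, using the conventional iterated prisoner's dilemma payoffs; the strategy names and numbers are illustrative assumptions, nothing more.  In a population of unconditional altruists a lone parasite out-scores the residents, so simplistic altruism is not an ESS; among tit-for-tat players the parasite gains nothing, so tit-for-tat resists invasion.

# Payoffs: mutual cooperation 3 each, mutual defection 1 each,
# lone defector 5, exploited cooperator 0 (the standard values).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(opponent_history):   # the simplistic altruist
    return 'C'

def always_defect(opponent_history):      # the parasite
    return 'D'

def tit_for_tat(opponent_history):        # cooperate first, then mirror
    return opponent_history[-1] if opponent_history else 'C'

def score(a, b, rounds=100):
    """Total payoff to strategy a over repeated play against strategy b."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        total += PAYOFF[(move_a, move_b)][0]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total

# Among unconditional altruists, the parasite prospers: 500 against 300.
print(score(always_defect, always_cooperate), score(always_cooperate, always_cooperate))
# Among tit-for-tat players, the parasite starves: 104 against 300.
print(score(always_defect, tit_for_tat), score(tit_for_tat, tit_for_tat))

 The numbers matter less than the pattern: a resident strategy that punishes bad faith cannot be undercut by bad faith.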

There is a fairly popular view that the proper social model would be anarchism, but I never have seen any proposal for such a model that was not open to the rankest exploitation by the clever, violent, greedy, and self-centred; in other words, none that amounts to an effective ESS. Nor have I seen any such proposal for an anarchistic community that could stand up to opposition from a functional social structure with its built-in checks and balances, bills of rights, commitments and responsibilities, and powers of authority and direction. So, until human nature changes, I do not see any prospect for anarchism being of any use as a community structure, of any value in itself or to its members, unless it incorporated formal powers of community control, and that would formally violate the very principles of anarchism. 

 Not much of an ESS!

 The second problem is that if the community merely opted for a life of interminable complacent peasantry, it would be doomed in the long run.  Humanity needs to get off the planet, emerge from the eggshell and its smug delusions of security.  We need commitment to schemes that would transcend the scope of each current generation and make long-term policies viable.  For instance it is technically attractive to make Venus a home for an affluent population twice the size of what Earth can support, powered by plentiful renewable energy, but the project would take perhaps some thousands of years.  For the species this would be a minor investment, the merest training run for serious entry into space and interstellar pioneering.  But as long as we are driven by, and limited to, a vision of present personal profits, any such long-term scheme is the idlest of pipe dreams and our horizons remain those of individual mortality and perceived material profit. 

 One could construct whole families of Utopian prospects consistent with universally open, sincere, trusting citizens.  The fundamental question though, is whether the community could combine a lack of personal competition with the vigorous growth that the drive of constructive competition can bring.  In short, what is to replace the sacred free market and the profit motive?  Who is to direct the ants and supply the drive?  What is to supplant, not merely money itself, but the role of money, not the medium of exchange, but the channel for the energy that drives industry and nourishes hubris? 

 Hubris is a dangerous infection in a community of fellahin with megalomanic leaders.  We have no shortage of horrible historical examples, not all of them as spectacular as the Great Wall and the Pyramids, but alarming in principle and in their results.  If Donald Trump served for anything positive in his life, it was as an example: nauseating, but dreadful; however ghastly he and his doings might be, the real nightmare is not Trump as a loser, but his Gadarene followers; they might be the most pathetic of the losers, but they are the operative threat to the rest of society, even to the rest of humanity. 

 In contrast, in an educated community of shared commitment, hubris in the spirit of the Tower of Babel would become our strength and our salvation; this has a good biblical basis: "Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do." To anyone of good faith, that would sound like a passionate recommendation, but to the peasant authoritarian theocrats of the day, it was presented as an evil, and passionately to be opposed. 

 As I said, it should be possible to achieve such ends purely by education that exploits existing bases of our value structures, but the resulting social structure would be metastable.  Only let a nucleus of parasites, such as present-day gangsters or politicians, form an effective subversive movement, and it is hard to see how we could avoid falling back to the most miserable serfdom in human history. 

 However, if we could commit to breeding for those values, our future would be fairly well assured for as far as human eye can see.  Parasites would be isolated and the community would not be mindlessly trusting, but alert and ambitious, embodiments of the conscious tit-for-tat principle that: "If a man does thee once it is his fault; if he does thee twice it is thy fault; and if he does thee good, it is thy desire to do good to his values."  Anyone seen to be acting in bad faith would be recognised thereby as necessarily anti-social. There is a good Christian adjuration favouring this principle:

"Beware of false prophets, which come to you in sheep's clothing,
but inwardly they are ravening wolves. 
Ye shall know them by their fruits.
Do men gather grapes of thorns, or figs of thistles? 
Even so every good tree bringeth forth good fruit;
but a corrupt tree bringeth forth evil fruit."

 Impossible?  Hardly.  In biological terms we have achieved more than that in breeding dogs, and done it quickly and repeatedly.  Furthermore, we have the physical resources; the obstacles are sociological, not technical.  No cloning, no restriction endonucleases or transcriptases, just a bit of national and international family planning and education.  

 Do we really want a future for our species, in which everyone is for everyone, where money becomes a measure of effort rather than a medium of exchange?  What happens to mental independence, creativity, competition, passion, all the things that drive us and make us human?  

 What indeed?  Nothing in the suggestion precludes any of these things; all that is necessary is a subtle change in the orientation of our value judgments.  No doubt one ant is as simple-minded as the next, but suppose citizens agree on common goals and commit uncompromisingly to those goals, in unquestioning confidence that they and their families and communities will equally uncompromisingly be fed and cared for: they could enjoy (or suffer) as rich an emotional and intellectual life as our choicest spirits today.  In fact, for us to have a resilient community, citizens would have to develop their physical, emotional and mental faculties to the fullest. 

 Conversely, nothing in this picture guarantees agreement on everything by everyone, nor indeed by anyone, neither on facts nor theories, neither on ends nor means.  Where there is disagreement, there would be argument and competition.  The only difference would be that resentment and dishonesty would be overshadowed by persuasion, trust and goodwill.  A painless community, boring and small-spirited?  Small souls, unloved by God and uncoveted by the devil? 

 I hardly think so; certainly not in comparison to Homo mediocriter as we have bred the species for millennia.  Eric Hoffer had no need of any brave new world to observe that: "Where men are free, they usually imitate each other." 

 And realistically?  Granting (as most readers will no doubt decline to do) that some such vision is desirable, how do we get to there from here?  

 I don't know. 

 I would despair utterly, but I am tempted to hope by the fact that there have been some influential movements towards positive advances in civilisation. 

 Consider the population problem: so far only the Chinese have tried anything really assertive, and they made a typical politicians' mess of it. If they had stopped to think, then instead of drastically trying to limit procreation to one child per family, they would have aimed at one per person. The resultant population reduction would have been sociologically benign, driven only by those who could not or would not reproduce. It would none the less have left scope for encouraging the reproduction of the most socially welcome and would have decreased the hysterical pressure for boy children. Though too gradual for most people to notice, the resulting selection for healthy, productive, socially beneficial offspring would be rapid in evolutionary terms.
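
 The generational arithmetic behind that contrast is easy to sketch; the figures below are illustrative assumptions of mine (two-parent families, a tenth of each generation childless), not demography:

# Rough generational arithmetic under the two caps, assuming two-parent
# families and that a tenth of each generation has no children at all.
def next_generation(population, children_per_person, childless_fraction=0.1):
    return population * (1 - childless_fraction) * children_per_person

for label, rate in (("one child per family", 0.5),   # half a child per person
                    ("one child per person", 1.0)):  # a replacement ceiling
    population = 1_000_000.0
    for generation in range(3):
        population = next_generation(population, rate)
    print(label, round(population))

# one child per family: ~91,000 after three generations, a collapse;
# one child per person: ~729,000, a gentle decline driven only by
# those who could not or would not reproduce.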

 Stammering and confused the gropings towards social and biological improvement may have been so far, but many of the idealists have been among the intelligentsia and the intelligentsia have included most of the technocracy.  In comparison to the social obstacles, the technical obstacles are so trivial that I cannot help hoping. 

 

 

Wednesday, April 19, 2023

No Point

So, What IS the Point?

The voice of the caterpillar

Philosophy and Science

Primitives: a toast to bottom‑up

Science, Sociology, Substance

Science, Dogma, and Scientists

Occam, Ockham and all that

Magic.

Cliché: Why is There Anything At All?

Is There Really Anything but Solipsism?

Why "Why?"? Well, if Anything: "Because why".

Operation à la mode

The Stubbornness of Underdetermination.

Semiotics, Language, Meaning, Comprehension

Fundamentals: Choose your turtle

Fundamentals: Axioms and Assumptions

Brass Tacks

Preconceptions, Mathematical and Other

Pure Contention

Science, evidence, and near‑proof

Guess, Grope, Gauge, Accommodate

De‑, In‑ and Abduction

Common sense and logic.

Conjectural lemmata

Gedankenexperimente: Thought experiments

Infinity, Finity, and Cosmology.

Entities and atomism and not much confidence.

Atoms in spaces.

What does it take to make a dimension?

What Then, Is Physical Algebra?

Cause, causality, and implication

The Emergence of Emergence

Nothing with no time, and no time with nothing

Media, Messages, Observations, & a Hint of Semiotics

Entities in emergence and reductionism

Putting together timelines to please Theseus

Prediction of the Knowable and Unknowable

Emergence and epiphenomena

Levels of emergence

Generic and Specific Emergence

Emergence, Scope, Scale, and Type

Tomic Entities, their Origins and Fates.

So What?

Generalisation, Reductionism, Reductionistic fallacies

Existence, Assuming it Exists.

Indestructible Information

Existence of Entities in Formal systems.

Existence of Entities in Material systems.

Euclid’s and Others’ Excesses

Nothing Determined

Determinism, Information, Time's Arrow

 

So, What IS the Point?

Fanatics
. . . . may defend
. . . . a point of view
so strongly
. . . . as to prove
. . . . it can't be true.
. . . . . . . . Piet Hein

A large part of the content of this essay deals with philosophical topics that have been chewed to rags again and again for millennia rather than centuries. I am not even a philosopher, so what qualifies me to rouse the sleeping dogs yet again?

That I cannot say in detail, but many of the standard philosophical questions I discuss, mainly questions relevant to science, still puzzle and confuse most people, including me; and yet those puzzles seem to lose their substance, sometimes even their interest, when regarded in terms of information and techniques for handling information. And some of my points seem to me to have attracted too little serious attention.

The effects are so strong that I increasingly find it difficult to read some fairly new works, 20th or even 21st century, for sheer irritation at the persistence with which they miss the essence of several pivotal problems. I’m not saying there are no good new works; some of the later works, and even some of the early works, are excellent and inspiring, but the frustration persists down the centuries.

Is this just ignorant arrogance on my part? Possibly, but if so then my views should be easy to refute, and to refute with something better than pointing out (often correctly) that I plainly had not read Descartes and Kant and Nietzsche and Hegel and Marx and Tolstoy, and who was that other fellow ...? And particularly had not read them in the original editions, or languages. So I bare my breast: make the most of the opportunity.

In particular, I find a few of the works of scientists on the philosophy of science superior to nearly all of the works of philosophers on the philosophy of science. A particularly fine example is Arthur Eddington's "Philosophy Of Physical Science", in which he showed how much more valuable it can be for someone writing on the philosophy of a subject to know the subject, than to know philosophy. After all, plenty of philosophers writing on philosophy show how it is possible for a philosopher writing on philosophy not to know the subject either.

And current popularity of sources need not be cogent: for example I am not keen on Popper either, even though he still is fashionable among undergraduates and even graduates who have not thought seriously and independently about falsification. As I see his work, it looks self‑indulgently shallow and erratic; not that I claim to be the only one with that opinion.

As for what I offer here, I offer it with little apology; I hope anyone who reads it will do so with enjoyment, even if only sardonically, and that some readers will profit from it immediately, but that they will profit even more in due course.

One matter I do apologise for, is the structure of the essay or at least its lack of structure; I would have liked to present everything in a simple, logical sequence, but I am increasingly persuaded that there ain't no sich thing. The world consists of bits, items, that entangle each other in all sorts of ways at once, so I have to wander as I wonder. I beg no forgiveness, but I do at least assure you that it is not by choice that I am incoherent, much less out of malice.

The voice of the caterpillar

I know almost nothing about almost nothing,
and absolutely nothing about everything else.
However, I do not vaunt this accomplishment
as unique, or even unusual.


So, who am I? 

As Alice did in her day, I in my turn recuse myself from answering that question; in my case because I assume no authority and assert no matters of opinion as matters of fact, so it seems to me that the question of who I am is hardly relevant.

As for authority, I am not immune to authority, and life is too short for rejecting all authority. And for my part, I claim no authority but my own opinion (an ill-favoured thing, but mine own) and my own opinion remains my own opinion.

All the same, authority, however respectable and respected, constitutes, as such, neither proof nor logic; as a healthy principle, remember how Horace put it in classical Roman times, and the Royal Society in the 17th century reaffirmed: “Nullius in verba”. I am partial to a bit of that myself, and I commend the principle to readers.

For many reasons however, that principle itself is not an absolute; people speak of believing only what you can see, and rejecting whatever is stated as fact without support, and all that hard-nosed, commonsense, intellectually independent Good Stuff; but resources, particularly time and one's own intellect, are limited. Consider: one cannot personally verify all the truths in one's own field of knowledge, let alone all human knowledge. Even mastering the sense or establishing the factuality of every statement made inside a university lecture theatre is beyond anyone. We cannot delay Biology 101 while each student personally verifies every individual assertion presented in class. Nor can we delay Biblical Hermeneutics 101 while each student personally decides whether to accept the book of Job as literally true or as allegorical, in conformity with the associated curriculum.

The best support I can offer for my own views is observation, deduction, conjecture, and speculation, with perhaps too much rationalisation. Of course the fact that I do so from inside the world I inhabit, entails certain limitations. I do not claim to be any better than a product of my times, nor to be original. As Kipling put it:

Raise ye the stone or cleave the wood to make a path more fair or flat;
Lo, it is black already with blood some Son of Martha spilled for that!
Not as a ladder from earth to Heaven, not as a witness to any creed,
But simple service simply given to his own kind in their common need.

Mind you, I do try to credit my sources when I am conscious of them. Bierce said, in the introduction to his Devil's Dictionary: "This explanation is made, not with any pride of priority in trifles, but in simple denial of possible charges of plagiarism, which is no trifle." People who reject my versions of observation or those of anyone else, may do so without incurring my resentment, but without necessarily securing any commitment of mine to accept or even respect their preferred opinions.

If I so much as offer food for thought, and raise questions I cannot answer, but that others might build upon, it would please me to think that I stimulate anyone to think to some purpose, possibly to attack material problems, or possibly for fun. Or both.

From time to time while writing this, I understandably have been partly encouraged and partly disappointed, though not at all surprised, to find that some aspects of concepts related to those that I have formulated, and still am developing, were emerging in mainstream mathematics and physics and science and life in general long before I started; so certainly some aspects of my views are far from new.

As I said: that is no surprise. However, I had encountered the concepts by idiosyncratic routes of my own, so interested persons might find some to be worth another look in the hope of fresh perspectives, even if some of those concepts are not at all new.

That is why I now write this essay.

As for style, format, and similar items, readers might like them or write their own essays to their own tastes.

For example, some readers dislike epigraphs and quotations.

Tough.

I like mine.

 

Philosophy and Science

Philosophers say a great deal about what is absolutely necessary for science,
 and it is always, so far as one can see, rather naïve, and probably wrong.
Richard Feynman

For many centuries philosophy was confused with science, largely disastrously. In the period during which science and philosophy were such small fields of study that many polymaths could master all the known works, there were so many holes in both classes of study that both classes suffered: science suffered from the preconceptions of the philosophers of the day, and philosophy suffered from misconceptions of the scientists of the day.

Then the eruption began to break through. That volcano had been rumbling for a long time, but for several centuries, starting perhaps about the time of Galileo, developments in the study of reality had become so coherent and compelling that science was seen as an increasing threat to the dogmata of the authorities; those authorities became really nasty about those threats; it was a long time before the thugs began to realise that no matter how fast you burnt heretics, there was a limit to how deeply you could bury the implications of reality, especially if you yourself didn't understand the implications of the reality.

Or for that matter, the implications of the dogmata.

Or even the implications of the actual beliefs and conceptions of the believers, irrespective of the dogmata.

Many of those authorities still have not realised that if they were to write down their dogmatic claims, and all copies of their texts were lost or forgotten, those dogmata would never resurface in the same form again, except as variations on the same weary rhetorical fallacies that bullies use to justify their bullying. In contrast, the facts of life and the fruits of valid reasoning, as revealed by the activities of researchers, if similarly lost, would keep resurfacing for as long as new children are born asking inconvenient questions and sometimes following them up. Especially if those children grow into scientists seeking cogent answers, convenient or not.

Those dogmatists failed to understand that the essence of their problem was that they were trying to persuade water to flow uphill. Commonly they still fail in that understanding. Those who fail most catastrophically are the wishful thinkers who resent their own inability to grasp scientific advances, and the parasites who sell the dogma to wishful thinkers, whether the dupes have scientific ambitions or not. Occasional specimens who do in fact have the mental equipment, may react so bitterly against their own fears or resentment of reality or authority, as to deny the undeniable.

Not that that bothered Jane and Joe Average, who regard it as a virtue to follow the herd in ignorant, irresponsible loyalty to the cruel, greedy, and ridiculous.

All such classes of wishful thinkers are the lapdogs of those Authorities who have axes to grind, and who will stop at nothing to support their social and intellectual parasitism: their dupes or victims are those who just want to accept, without effort, comprehension, or doubt, whatever their exploiters assure them, irrespective of any basis in sense or fact or honesty.

So much for universal education once the politicians get their claws into the rabble.

"I saw it on TV" or "My friend saw it on the Net" or "The experts all say that (check one of: cooked food, raw food, red meat, boiled water, dairy products, or gluten) will give you straight hair and curly teeth…"

If there is a limit to such slavishly mindless idiocy, I have not seen it yet.

At some time, arguably during the nineteenth century, science grew too large and too obtrusive for the Authorities to ignore, or for most polymaths to master — not that some of them didn't delude their fans into thinking that they had mastered it all the same. By about the mid‑20th century, after the flood of discoveries and reasoning had grown beyond the capacities of any one person, that air of smug omniscience faded into gnashing of teeth and bewailing of specialisation and silo mentalities and interdisciplinarity.

In other words some of the would-be Authorities could no longer support their pretension of personal access to omniscience, and they hated to admit it. Of course, some still refuse to admit their ignorance; or they even represent it as a virtue, but those impress only the dupes who are unsalvageable anyway.

 But none the less, it is those dupes who commonly constitute the voting majority, or the thugs' secret police.

For my part, if you read much of this document and think that I think I know a lot, let alone know everything, or have access to knowledge of everything, then don't bother to ask for your money back: you didn't read carefully enough to earn it.

The philosophy of science in days of old lagged simply because most philosophers weren’t deeply or functionally interested in science itself (to judge from the topics that they published or things they said) or weren’t scientifically trained to sufficient depth to enable them to support or broaden and deepen their views; or they were writing on the basis of assumptions that the scientific competence of the day could not adequately deal with. There still are many such writers. Some are scientists who had not turned to philosophy until they had passed their philosopause, and so their philosophy tended to be flaky, even if perchance their science once had been competent.

Sound science and sound philosophy do not guarantee each other, but unsound philosophy does augur poorly for robust science.

On the other hand, many modern professional philosophers who did try to write on the philosophy of science tended to be writing on the basis of their own readings of the science of two or three generations earlier; commonly the results were so pathetic that by about the mid 20th century there was increasing backlash from working scientists, who could tell that for a long time the philosophers in question hadn’t known what they were talking about.

Unfortunately, scientists who, in their reaction, impatiently reject the philosophy of their own fields of expertise wholesale, thereby cripple their ability to advance their own fields of study and practice. They thereby invite stagnation, confusion, and even retrogression.

The field of philosophy of science in which the practitioners have produced the worst of such superficial nonsense in my opinion, is metaphysics, not that I am prepared to defend that opinion. To describe the basic concepts underlying reality, metaphysical work is necessary in certain contexts, such as in establishing basic concepts of reasoning and knowledge, but all the same, no field of human thought is more in need of continuous sceptical criticism.

Except religion perhaps.

Because metaphysics is so open to arbitrary mystical ideas from writers who see their own intuitions as cogent, necessary truths, metaphysical work is prone to rampant pathological growths if not checked regularly for consistency with empirical evidence.

And "evidence"? What is that?

Evidence is any information that might rationally affect anyone's assessment of the relative strength of relevant rival hypotheses.

In classical or pre‑classical times, some schools of philosophy rejected all inconvenient demands for consistency between logical and physical evidence, on their assumption that reason was infallible, whereas our senses were subject to error. Well, however fallible our senses, reason certainly is fallible, as we can tell from the radical, and often bitter, disagreements between reasoners; but anyone who nonetheless insists on the infallibility of reason falls foul of the implications of reason itself, as follows: one important formal logical operation is that of implication, and in formal logic a truth cannot imply anything but truth, so reason tells us that any proposition that predicts an observation that contradicts our actual observation cannot be true.
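
For what it is worth, that point can be laid out mechanically. The little table below is my own illustration in Python, not anyone's doctrine; it simply enumerates material implication and shows modus tollens at work:

# Material implication: "P implies Q" fails only when P is true and Q false.
# So if a theory P predicts observation Q, and Q proves false, then P cannot
# be true: the classical modus tollens by which observation disciplines reason.
def implies(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5}  Q={q!s:5}  P->Q: {implies(p, q)}")

# In every row where P->Q holds and Q is false, P is false too.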

That is not the whole story, of course; real-life, commonsense implication is a treacherous beast at best, but still, the challenge of reason by reason remains powerful.

Typically, mystical or metaphysical ideas that predict wrongly are accordingly neither cogent, necessary, nor even true in the sense that their predictions, if any, are novel and successful. This is adequate demonstration that the philosophers in question indeed hadn’t known what they were talking about. Furthermore, like so many religious fundamentalists, their "cogent, necessary truths" might have been more interesting, if not necessarily conclusive, if their respective apologists could persuade each other.

 Commonly they do not. Quot homines, tot sententiae. And the very fact that rival arguments of reason could disagree, implied the fallibility of reason, of observation, of interpretation and of assertion. Disagreement means that, on each point, at most one of the disputants could be wholly right; it also is perfectly possible for them all to be radically wrong about whatever they see as truths that they would be willing to kill for. And not just wrong in minor degree or detail, but radically wrong in principle as well as gross matters of fact.

It happens ...

Conversely, whenever the failures of reason can be ignored, that very fact implies that the reasoning itself has no real-life relevance, and therefore should be dismissed or ignored. What metaphysics, independent of science and reality, might be good for, apart from formal philosophy, is not of much concern in this discussion.

Intrinsically, this essay is largely a discussion of applied philosophy — not of purely formal argument.

The backlash from the working scientists stemmed largely from the late 19th century and onward, when the frontiers of science started encroaching on fields in which the results looked like nothing better than nonsense to the layman, and were fraught with traps for philosophers who had failed to keep informed on so much as the bald facts of the disciplines, never mind their implications. In this respect they themselves qualified as laity in relevant aspects of science.

 A mild example of how philosophers' brusque, down-to-Earth pronouncements on science can be downright wrong:

 ...philosophers have said before that one of the fundamental requisites of science
is that whenever you set up the same conditions, the same thing must happen.
This is simply not true, it is not a fundamental condition of science.
The fact is that the same thing does not happen,
that we can find only an average, statistically, as to what happens.
Nevertheless, science has not completely collapsed.

Richard Feynman

 In the early 20th century the philosophic implications of scientific advances began to develop a lot of traps for the scientists too. There still are rival camps bandying more insults than insights ...

And what was worse, some such philosophers simply did not realise why it mattered. In the laboratories and the field on the other hand, there was a growing tendency to comprehensive rejection of philosophy of science, especially from the “shut up and calculate” school of thought.

 That might seem to be pretty attenuated cause for concern, but philosophy is supposed to be the discipline that deals with thought about thought, and science is a field so dependent on a high standard of thought that there is hardly a discipline more important to educated scientists than philosophy of science. If that is beyond a given scientist, how can we see him as more than half a scientist, if that? Half-sense is rarely better than non-sense. A practitioner in a scientific field, one who eschews thought, might well be a genuinely valuable hack worker — never let it be said that I disrespect hackwork — but it is unusual for hack workers to produce fundamentally novel intellectual or material breakthroughs.

Genuinely valuable data and technology they often deliver, but, however important, those are something other than intellectual breakthroughs. It is in such aspects that we see the differences between the roles of theoreticians and experimentalists. Neither category is adequate on its own.

 I do not suggest that the experimentalist is necessarily a hack worker, mind you! Brilliance in creativity occurs in all walks of life and all sorts of functions. And theoreticians can be quite as pedestrian, and as intellectually limited, as any jingle writer.

In some circles, both philosophers and scientists, there has long been a tendency to contemn the history and philosophy of science as trivial or useless, but lately there has been some improvement. History and philosophy of science currently are burgeoning as fields of study, and publications reflect the tendency. I’d hate to have to supply supporting figures though, especially figures for how valuable specific works in the philosophy of science might be.

 Still, in the last few decades my readings largely have been disappointing. Material I find about communications and mental processes and reductionism and emergence and many more such topics, solemnly masticates themes that commonly seem to me to be repetitive, redundant, irrelevant, out of date, or downright mystical. Such works seem to ignore the nature and relevance of the role of information in the processes and principles and concepts under consideration.

Maybe I just have been unlucky in my reading; I hope so. But has my bad luck been bad enough to explain my misapprehensions?

Now, a quick word on one of my heroes, Richard Feynman, and his views on philosophers' views on science. In that context I did not get my views from him, hero or not. He is alleged to have asserted, among other uncomplimentary pronouncements: "Philosophy of science is about as useful to scientists as ornithology is to birds". It does not much matter how accurate that attribution might be; it is in line with some of his other remarks. But we can discount his invective as irrelevant, because Feynman was not above making unphilosophical assertions about science and unscientific assertions about philosophy. And he himself commonly made philosophical assertions about science; often penetratingly.

Far more relevant would be his views on the reliability of the views of experts, in particular:

"Outside of their particular area of expertise
scientists are just as dumb as the next person
".

Quite.

I agree, but with the reservation that I regard the very term "scientist" as sloppy terminology: as I see it, the relevant concept is not the role to which one might apply the term “scientist”, but scientific behaviour as an activity. And someone who by calling or profession is committed to scientific behaviour, is no more to be relied upon not to deviate from it, by either error or intention, than an ecclesiastic can be relied upon not to deviate from virtue.

But, as a layman in philosophy, speaking about philosophy, Feynman didn't do too badly on average: perhaps somewhat better than laity speaking about science. As you may see from some of my quotes, quite a lot of his lines are straightforward philosophy of science, though he did not claim them to be anything of the sort; not in any claims that I read, anyway.

 

Primitives: a toast to bottom‑up

Everything should be built top-down, except the first time.
 Alan J. Perlis

One of the basic concepts in science and philosophy, very near to something ineffable and undefinable, is the idea of a thing: an entity. I am not sure whether to regard that concept of "entity" as a primitive, either at all, or possibly as a class of several primitives that deserve separate comprehension. Whether the concept of entity is a primitive or not however, I usually say "entity" when I speak loosely of something's "thingness" and say "thing" when I refer to something — some entity — without necessarily considering any particular aspect of its attributes.

With apologies to every reader familiar with the concept of primitive concepts, my excuse for explaining the term is that I have on occasion been abused by persons who thought that "primitive concepts" or "primitives" had something to do with savages, and that I was sneering at such savages as being inferior, mentally or otherwise. Or they thought that to speak of something as "primitive" was to disparage it as being appropriate only to savages.

So bear with me here: a primitive concept in a particular context is one that we take as basic, as something given, something that cannot or need not be broken down further into anything simpler, that cannot or need not be explained in terms of more fundamental concepts, or that in context we have no need to simplify further for the purposes of our discussion.

Typically the importance of a primitive is that we can use it as the basis or part of the basis for more complex concepts or relationships, much as we use axioms in mathematics as the bases from which to derive theorems of arbitrary complexity.

But the term is relative. I do not assert (I just do not know) that in our universe there exists any such thing as an absolute primitive, but primitivity is a convenient concept — possibly even a necessary concept — in a universe such as ours, in which there is no physical capacity for infinities, so I do assume some absolute primitives, whatever such things might be; indeed, I make that assumption as a matter of convenience.

And sure enough, the concept of primitives crops up frequently, including in this essay. Just do not assume when you see such a reference, that I am under the illusion that I am formally proving anything; take it as a concept on which one can base assumptions for the sake of discussion, analogous to the Euclidean assumptions of points, lines, planes, and the like, none of which has any physical reality.

Now: as I see the idea of an entity, or use the term, it is whatever you could think in terms of. Whether it is a well‑demarcated physical object such as a crystal, or it is a poorly‑demarcated object, physical or otherwise, such as a ball of fluff or a crowd or a concept or a river or a species or an event or the state of something (say fluidity, or spin, or colour, or anger) or whether it is an imaginary object such as a unicorn, or an abstract object such as a number — anything you could give a name to if you wished to discuss it — you could, in a suitable context, regard it as an entity. Sometimes there are tricky examples, such as objects that cannot be distinguished in terms of quantum mechanics: when two electrons from separate sources have collided and have recoiled along separate pathways, then if it is at all meaningful to say which of the two outgoing electrons corresponds to which incoming electron, we do not know how to say which is which, either for lack of means of measurement, or, more fundamentally, for lack of physical meaning to the idea of their having respective identities at all.

But it often is unclear whether such marginal examples are relevant in practice.

An entity could be atomic, a concept I discuss later, meaning that that entity cannot be split into simpler units, either at all, or without changing its nature. So far, so simple, but at this point I introduce a neologism: I started writing about splittable items as being “non‑atomic”, but the clumsiness of a word whose meaning amounts to “non‑non‑splittable” became irksome, so I have changed all those references to “tomic”.

I could not at first find that back‑formation in use anywhere else, but the term seemed to me to be convenient in this sense, so I present it here. My apologies to anyone who hates the word, but my need was the greater I think, and I am the current speaker so, when you are the author, you may choose your own terms: tomic, atomic, or a-atomic; but till then, suffer!

Since writing that, I have indeed found "tomic" and related terms used in the study of poetic scansion, referring to pauses between words, but even that does not seem to be in wide usage, and threatens no conflict with its semantics in our current entitic context, so let's proceed, tomically or otherwise.

Whether the entitic nature of any particular entity is in any way intrinsic to that perceived entity, or whether it is an effect of that entity's existence in the world, or its relationship to the world, or whether it is a vacuous mental delusion, an intellectual crutch in dealing with the world, I do not address here; but I do not know how to do without that crutch or convenience, so ...

Examples of entities, tomic or atomic, not all of them distinct, might be, say:

  • individual elementary atomic units (atomic in the sense of being primitive, not being tomic, neither physically nor logically divisible into simpler or smaller units or categories) or
  • any recognisable status, action, or quality, such as a smell, a colour, momentum, emotion, or
  • sets or structures of elementary units or entities united by particular relationships (such as adhesion, repulsion, location, or resemblance), each set regarded as an entity, for example a crowd, or a pile, or an aggregate, or
  • more generally, any relationship that unites entities into a less primitive entity, such as a membership of a constellation, a flock, or a team, or
  • relationships between entities, such as processes, events, ideas, reputations, recipes, or concepts that exist in the form of relationships between neurons, and possibly between neurons and perceived objects, or
  • entities clearly or vaguely defined or delimited, as we see in complex structures or in clouds or impressions, or
  • More generally still, in some senses the reality of an entity could include its relationship to every other entity in an observable universe, but for practical purposes, this might be too obsessive to take seriously at this point in our discussion. It would however, involve enormous complications: for example, imagine four bodies roughly equally far from each other, as seen from a central fifth body, each of the four barely within the limits of the central fifth body's observable universe, as defined by its red-shift boundary; then it would take billions of years for any event at any one body to have any effect on the other four. I do not deny that the quintet would comprise an entity in various contexts, either abstract or material, but I am unsure of the implications of the concept; suppose each of the five to comprise a civilisation on its own planet — then suppose each sends a message to each of the other four; for the rest of them the central recipient of the messages would have to play relay station, and for each message the dispatcher would have no way of knowing whether the destination civilisation still existed until the reply arrived, much less whether it existed in the same form as when the message was despatched. Less dramatically, but more difficult to resolve, would be the question of whether any of the outer four would be within one another's observable universes. Such communication would at the very least lend a new dimension to the exhortation to: "Cast thy bread upon the waters: for thou shalt find it after many days."
    Very many days indeed ...

Now: in dealing with the world, and managing one's own patch of it, the concept of entities seems to my limited imagination to be unavoidable in various ways.

For instance, it has become a cliché in naïve computing or design circles, that there exists a right way to design complex structures (entities, if you like) such as programs and bridges, and that such a right way necessarily must be "top‑down". I, for one, have used the top‑down concept repeatedly throughout my career, and would go on doing so indefinitely.

But not invariably.

Roughly speaking, top‑down design amounts to conceiving the desired outcome first, only thereafter conceiving the major components and their nature and operation, and then their components in turn, stopping only when you have everything you need for the desired end‑product. This might sound silly to the uninitiated, but it really is a very powerful principle, and its sophisticated application can be effective in ways that amaze or confound a sceptic. Beginners tend to be puzzled or irritated at the disciplines of top‑down work, but beginners commonly are easily puzzled or irritated anyway, and once they eventually begin to become comfortable with top‑down, it helps them through the struggle.
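
For readers who like their illustrations executable, here is a toy sketch of that discipline in Python; the task and the names are arbitrary contrivances of mine. The desired outcome is written first, in terms of components that do not yet exist, and only then are the components conceived in their turn:

# Top level, written first: the outcome, in terms of parts defined later.
def publish_report(raw_records):
    clean = validate(raw_records)
    summary = summarise(clean)
    return format_as_text(summary)

# Second level: each component conceived only after the whole had its shape.
def validate(records):
    return [r for r in records if r.get("value") is not None]

def summarise(records):
    values = [r["value"] for r in records]
    return {"count": len(values), "mean": sum(values) / len(values)}

def format_as_text(summary):
    return f"{summary['count']} records, mean value {summary['mean']:.2f}"

print(publish_report([{"value": 3}, {"value": 5}, {"value": None}]))
# -> 2 records, mean value 4.00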

However, there is a trap. It is true that top‑down commonly is more powerful and generally faster and better suited to teamwork and modularity and accommodation to future developments, especially in dealing with complex problems — but it relies on the existence of components that are familiar, well‑understood, and commonly recognised, components that one may use in the design or construction. Such components might be primitives, or they might be suitable perspectives and tools, whether primitive, simple, or complex; as entities they must be recognisable in the same form, and available in such a form, to all the relevant parties, and that is more than we generally can rely on. In working on a new material, if we rely on our familiar nails, hammers, screws and screwdrivers without examining the implications and options, the results might prove embarrassing, expensive, or disastrous, say if the workpiece is an expensive glass, a dangerous explosive, or a new environment.

Even in programming, top-down can lead naïve practitioners into unsuitable algorithms with unobvious inefficiencies, special case logical traps, legal liabilities ...

So, when the necessary components are unavailable or poorly understood, one commonly needs to begin by establishing the necessary primitives. Items that later prove to be of value to top‑down designs, often have been discovered independently of the application, without any idea that they might be useful beyond our immediate needs. Some we stumble upon; some originated as whims, speculations or toys; examples include computers, saws, money, flour, telephones, vaccines, rockets, radio, wheels, and antibiotics.

 Such things most people see as primitives, forgetting that they once were new, and still are anything but primitive.

The top‑down bigot, before assimilating the relevance of both aspects in any intellectual or practical field, on encountering a refractory problem, is likely to degenerate into the mindset of: "If it still don't work, I gets a bigger 'ammer!" That is no more competent than the troop of monkeys climbing the biggest tree in sight, saying "Look! We are going to the moon!" That is a valid example of the top‑down approach: they see the moon alright, as well as the objective, and they see the direction to travel, but they have a lot of bottom‑up work to complete before their project is better than futile. And their final solution had better not rely on that tree.

Thinking of the intrinsic hazards of top‑down work, I am obliquely reminded of the brilliant Harris cartoon I saw decades ago in the Scientific American: 

[The cartoon itself is not reproduced here.]

On the other hand, with all its associated temptations to reinvention and confusion, bottom‑up discovery and design is ubiquitous in creativity. The more bottom‑up concepts and tools people master, the more powerful their options for top‑down design. After the bottom‑up creation, the new resources should be documented and made available for future use. We shouldn't be reduced to re‑inventing our familiar nails, hammers, screws, and screwdrivers for every bit of carpentry. Accordingly, for example in mechanical engineering, whole books have been published, showing thousands of previously invented mechanical movements. Designers may refer to them for either instruction or inspiration when they recognise a need in a particular project.

Such remarks might sound terribly prissy, self-justificatory, and academic, but consider an example of the practical hazards of common-sense top-down approaches applied in innocence: some decades ago a buried electric cable was laid across country in rural India. Suddenly the cable failed, and the power supply with it. Locating the fault in the cable proved to be unusually difficult; local residents were mystified and the technicians frustrated. Eventually they did find the break, in a field where the buried cable had been snapped by a plough. The ploughman had realised that the break was unacceptable, so he fixed it: he knotted the two broken ends together and re-buried the knotted part. After all, that was simple common sense; everyone knows that when a cord breaks, you fix it by knotting the ends together: see a problem, solve the problem.

And the remedy worked too! No one came to shout at the ploughman, so patently his fix had been satisfactory; no bottom-up thinking, no problem! Top-down wins again!

Such blunders are not limited to rural naïveté; so many years ago that cars still had a choke lever on the dashboard, a woman took her new car back to the garage and complained that its fuel consumption was ridiculous. The puzzled mechanic began an examination, and noticed suddenly that the choke lever was pulled all the way out; he asked why. "Oh that," said the woman, "I never use it, so I just keep it pulled out to hang my handbag on." 

In top-down terms, that was perfectly reasonable; simple decision deferral: bag gotta hang, hang it on something obviously otherwise useless, but adaptable for hanging things. That is what such hooks are for, right? After all, why else would the lever be that shape? 

Not only car maintenance and cables, but computer hardware and software war stories, abound with analogous cases.

Such examples illustrate whole classes of prerequisites for either bottom‑up or top‑down design. Too narrow a view can be fatal in various ways. The bottom‑up discovery of a way in which a couple of rocks can be persuaded either to conduct an electric current, or to interrupt it, might not immediately suggest that multi‑billion‑switch, super‑fast computers, small enough and cheap enough to be wasted on domestic devices, could be based on that principle; conversely, top‑down designers might fail to understand why climbing trees could not be the way to get to the moon, or why knotting and burying broken cables could be anything to make a fuss about.

Similar considerations apply to the conception and design of perpetual motion machines and homeopathic remedies.

 Flying machines were conceived top‑down for millennia, but were rejected as impossible before the necessary developments in aerodynamics, aerostatics, mechanics and combustion engines had been achieved, largely laboriously bottom‑up from the point of view of achieving flight.

 And so it is with engineering, science and philosophy. In real life a distressingly large proportion of progress, and in particular of wasted progress, amounts to standing on the toes of predecessors instead of standing on their shoulders. And intrinsically, climbing onto shoulders is largely bottom‑up.

One needs to balance outlook and context, and be ready to explore and explore. John Donne had the right of it, in saying that he, that will reach Truth, about must and about must go ...

Other concepts that might be considered in various contexts as primitive or nearly so, are variously defined and widely disputed, and my discussion of them here I suck out of my own thumb, and it is not to be taken as gospel. They include:

  • Information, where it means something like: whatever states distinguish the relative acceptability or adequacy of alternative hypotheses. It also might be seen as: whatever relationships between entities affect the relative probability or outcome of alternative physical events.
  • Randomness is hard to define, and different people define it in different ways for different purposes, but for my purposes I define it here as lack of information as I have just described information. It takes at least two forms:
    • Where sufficient information to determine a state may exist (did that coin fall heads or tails? Is the cat in that box alive?) but is not available to the subject or observer, or:
    • Where information does not exist at all to determine a given question, not to any observer in any sense, and not to "nature" itself (when will that unstable nucleus decay?)
  • Probability is the degree to which one might regard any particular hypothesis concerning existing states as being stronger or weaker than another, in the light of the available or existing information. This implies that different observers, or the same observer at a different time or other different coordinates, might rationally assign different probabilities to the same set of conceivable events.
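
To make that observer-dependence concrete, here is a minimal sketch in Python, under wholly invented assumptions: a coin known to be either fair (heads probability 0.5) or biased (heads probability 0.8), with equal prior weight on each hypothesis. Two observers with different information rationally assign different probabilities to the very same next toss.

    # A toy sketch, not a canonical calculation: the assumed hypotheses and
    # priors are my own inventions for illustration.
    def posterior_biased(heads_seen, tails_seen, p_fair=0.5, p_biased=0.8):
        # Likelihood of the observed run under each hypothesis.
        like_fair = p_fair ** heads_seen * (1 - p_fair) ** tails_seen
        like_biased = p_biased ** heads_seen * (1 - p_biased) ** tails_seen
        # Bayes' rule with equal priors on "fair" and "biased".
        return like_biased / (like_fair + like_biased)

    # Observer A has watched 8 heads in 10 tosses; observer B has seen nothing.
    for name, (h, t) in (("A", (8, 2)), ("B", (0, 0))):
        pb = posterior_biased(h, t)
        p_heads = pb * 0.8 + (1 - pb) * 0.5
        print(f"Observer {name}: P(biased) = {pb:.3f}, P(next heads) = {p_heads:.3f}")

Same coin, same toss: observer A rationally says about 0.76, observer B says 0.65, and neither is being irrational; they simply hold different information.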

 

Science, Sociology, Substance

As an adolescent I aspired to lasting fame, I craved factual certainty, and
I thirsted for a meaningful vision of human life - so I became a scientist.
This is like becoming an archbishop so you can meet girls.
Matt Cartmill, anthropologist

It has been a long time since I craved factual certainty, partly because I am deeply sceptical of the idea that anything of the type is accessible at all. I am equally sceptical of whether the concept of factual certainty itself has any substantial meaning in our world in general, or can have meaning in brains of our human type in particular. I am confident however, though without formal proof, that I exist in a world that does exist and that does briefly include me, in whatever sense such concepts make sense at all; and I am sufficiently confident, in writing this essay, to relegate impotent speculations on unobservable worlds beyond my immediate topic to the indefinite future.

Furthermore, I am confident that this universe of my perception intrinsically comprises certain objects and certain interactions of objects, that behave in such ways as to achieve certain classes of consistency — what we might call logical behaviour. Reasons for this view, I discuss later, but only superficially.

Any list of the various views of the philosophy of science, such as positivism, empiricism, instrumentalism, materialism, or falsificationism, in their various forms and combinations, would exceed anything that I could afford to deal with here. Accordingly I have not even classified my own view formally, and am not much interested in trying to do so. Life is too short.

However, one of my pet irritations is when people confuse science in terms of its subject matter with the alleged sociology, psychology, and related views of science and scientists. I do not claim that every such topic is without interest or importance, but it is not of primary interest here; I pay little attention to the works, much less the conclusions, of the likes of Kuhn, Derrida, or Feyerabend, irrespective of some thoughts they propound among the drivel. My interest is in the nature, rationale, and material significance of scientific activity itself.

And the fun.

Insofar as they may be coherent, my philosophical views of science are along the lines of those called realist philosophy, though I am not necessarily conventional according to any recognised school of scientific realism (variations are many). By my own version of realism I mean in essence that I see the observable universe, including myself, as existing — and existing in a form reasonably consistent with the impressions we can gather from empirical evidence (evidence of our senses, as some like to call it) and logic (the evidence of our sense, insofar as we can deal with it). I do not insist that this view is correct, but anyone wishing to convince me of anything to the contrary, will need impressive powers of persuasion and argument.

To save readers speculation on my underlying intentions and illusions, I include a few remarks on important assumptions about science. I do not justify them here, because all my thoughts dangle together, and I can't cover everything in just one essay. Nor in several others I have written elsewhere, for that matter.

Too bad.

Or maybe not so bad: here goes.

Science, Dogma, and Scientists

Philosophy is questions that may never be answered.
Religion is answers that may never be questioned.
quoted by Daniel Dennett

Definitions of "science" are varied, hackneyed, and largely uncomprehending; and the definition of "scientist" is, if anything, worse. One correspondent (impressively qualified at that) in disagreement with my view of science, quoted points of some august body's definition (Royal Society? Can't remember — doesn't make no neverminds). And yet that definition was transparent nonsense: it dealt with examples of good practice in science — proper experimental design, controls etc.

All good stuff in itself of course, but naïve in the extreme and other places; it didn’t even address the question: that of the definition of science.

More realistically, the following is closer to the point, derived from text I produced elsewhere:

Science in the sense that we are discussing, is the opposite of religion in that, far from recourse to dogma as the ultimate basis of authority for defining the fundamental assumptions, much less recognising dogma as a basis for justification of choice of action, science intrinsically has no conceptual scope at all for ideological dogma.

Science does not even deny dogma, any more than religion denies noise.

Science, or to be more precise, scientific practice, is in essence the application of a range of processes for finding and using information in constructing, identifying, urging, or selecting, the strongest candidate hypotheses to answer any reasonably meaningful question, whether deductive, inductive, or abductive, and whether in contexts that are formal, material, or applied. No appeal to dogma, in fact, no appeal to any assertion at all, whether empirical or philosophical, transient or eternal, has cogency in science in this sense, because the only means available for convincing persons who refuse to accept your arguments, is by letting them convince themselves in the light of available evidence, including any evidence that they unearth for themselves. And conversely, your adversaries' options for convincing you of their views, are equally constrained in turn. There is no guarantee that either party is more or less right.

Informality of wording apart, that is an approach to a definition of "science" in the sense of "scientific behaviour".

All that stuff about controls and Bayesian theory and Ockham’s razor, and predictive power and more, is well and good, but in itself it isn't science, just lists of components of good, effective practice and principle. None of it promises correct or even ultimately predictive conclusions, or formal proof — science isn't about formal proof any more than about authority, but rather about selecting the currently best-supported working hypothesis, while perpetually considering either improvements, or total replacements, to current concepts and speculations.

As for what a scientist is, I do not regard the question as very useful; if I were to meet someone wearing a hat with big red letters saying “scientist” it would no more impress me than a hat that says “Lion Tamer” (or MAGA, for that matter).

“Scientist” might appear in your job description in some contexts, but the impression it conveys to Janet or John Average would not be very informative.

Perhaps the term “scientist” could better be justified as designating one’s vocation rather than anything else.

Occam, Ockham and all that

Essentia non sunt multiplicanda praeter necessitatem.

(Essential assumptions are not to be multiplied beyond necessity.)
                        William of Occam (attrib)

Because it is not the main point of the essay I will say little about Ockham’s razor; it is anyway a topic more honoured in its incomprehension than its application. Some claim that it is a social construct of no substance, and I am not inclined to waste my keystrokes on their naïveté. Others speak of it as in effect so much of a holy writ that to invoke it is sufficient to eliminate from serious consideration, any proposal that they dislike. That too is not worth serious attention. Yet others regard it as the basis of science in every respect. They at least can raise arguments of some substance, but I reject their assertion as too simplistic to be sound in general.

His term “essentia” is slightly confusing. I have translated it as: “essences”, or “essential assumptions”, but you might find it helpful to think in terms of “basic ideas” or something similar; or a more compact colloquial expression, such as “KISS”: “Keep It Simple, Stupid!”

For my part I regard Ockham’s principle as being healthy in the current philosophy of science, and certainly an insight of brilliance in its day. Its merit remains relevant in our time, but like many a valuable precept, it requires good sense in its application. In this requirement it resembles all worthwhile principles in life in general, and in science in particular.

And what does it apply to? Anyone (in our sense in particular, anyone with any interest in scientific principles) should bear Ockham in mind whenever seeking to understand a concept or phenomenon in terms of its associated phenomena. Are you really sure that you need to insist on this point or that, as essential to your thesis? Even if it is valid, even if it is correct, then if it can be considered separately, that is what should be done. There is more to the razor than just eliminating what is false, or even doubtful. Considering concepts in isolation is as important as considering their roles in combination. Combinatorial problems can be as misleading and expensive as outright errors, they can change the behaviour of individual items, and they may mask outright errors.

Still, even if we have isolated our issues competently, there is no single way to achieve perfect understanding, either comprehensively or uniquely simply; there always will be more to the subject of study than we can describe and comprehend, and more scope to speculate on its nature than we can in principle dismiss in terms of necessary logic. William of Ockham advised, in effect, addressing the difficulty by cutting it down to manageable proportions: discard all assumptions that you can do without for the present.

Now, the best way to do this is, in my opinion, to guess at what at first seems most obvious, but bear in mind that you will be oversimplifying it in some ways, and overcomplicating it in others. No problem, don’t panic, this is what you are in the game for if you are any kind of scientist.

Suppose one wishes to see whether a thin sheet of material such as paper, used as a bridge, can bear the weight of, say, a dry utensil such as a knife or spoon; it can barely bear its own weight. But fold it concertina-wise and it can bear hundreds of times its own weight.

This illustrates the difficulty of applying the principle of the razor: there is more to it than the elimination of inessential assumptions; there is the related concept of the simplicity of the essential assumptions. The modern version could be paraphrased as: “Make everything as simple as possible, but no simpler” — attributed, as is much else, to Einstein.

But simplicity is anything but simple: you can measure simplicity in terms of the number of components, but reducing that number might mean that you need to increase the number or complexity of the principles invoked. Alternatively you might be able to reduce the conceptual complexity by increasing the number of components.

Suppose someone is reporting the tosses of a notionally fair coin or die. You keep tossing it and the output looks pretty random, so random that you get suspicious; could it be that there is an intelligence in the coin that keeps its output apparently random? Or an intelligence that is passing out an encoded message? If so, could the first fifty million heads-or-tails really be an encoding for the full works of Shakespeare?

Yes, it certainly could; that is elementary information theory. But is it a helpful assumption? Certainly not. That sort of assumption is just what Ockham warned us against.

A less subtle challenge would be to guess the shape and numbering of the tossed object. Given numbers from 1 to 6 would suggest what? A cubic die? But there are many shapes and numberings that could give you that; a mindless adherence to Ockham would immediately leave you with the problem of defining simplicity: what is the simplest shape for a die that fairly yields numbers from 1 to 6? And what is the simplest numbering pattern? And how many fair numbering patterns are there for any given number of faces 6n? And what of a shape that could in principle show values from, say, 1 to 12 or 14, but shows the higher values arbitrarily rarely: how often would you have to toss it before Ockham and Bayes could warn you that your current assumptions multiply your essential assumptions insufficiently?
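
As a toy sketch of that last question (every number in it is my own invention, purely for illustration): suppose the rival hypothesis is a die with eight extra faces, each turning up with some small probability eps. Then every toss that shows only 1 to 6 shifts the odds, very slowly, towards the plain six-face hypothesis.

    # Toy Bayes-factor sketch in Python. H6 = a fair six-faced die;
    # H_rare = a die with 8 extra faces, each of probability eps (invented),
    # the remaining probability shared equally among faces 1..6.
    def bayes_factor(n_tosses, extra_faces=8, eps=0.001):
        p_six = 1 / 6                          # per observed face, under H6
        p_rare = (1 - extra_faces * eps) / 6   # per observed face, under H_rare
        # Likelihood ratio after n tosses, none of which showed an extra face.
        return (p_six / p_rare) ** n_tosses

    for n in (100, 1_000, 10_000):
        print(f"{n:6d} tosses, all in 1..6: odds for H6 multiply by {bayes_factor(n):.3g}")

With eps = 0.001 the evidence mounts at under one percent per toss; thousands of tosses are needed before the rare-face hypothesis is in serious trouble. The razor offers no shortcut.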

For example, it is possible to get a fair six-valued die from any of many 6n-sided polyhedra, and most people would guess that the simplest fair hexahedron would be a cube, in which n=1. And they would be wrong; there are simpler fair hexahedra in which n still equals 1: a fair triangular hexahedron (a triangular bipyramid) has fewer apices than a cube, and fewer edges. And what about a fair sphere, where n arguably is zero?

(And yes, I possess fair spherical dice that give unambiguous readouts!)

And as n increases in value, the number of polyhedra that meet the fairness requirement increases rapidly.

Ockham was one of the greats of his time, but his admonition needs to be applied with care and with thought, then and now.

This is clear when one examines the history of science. One begins with an idea (commonly abductive, as I mention later) and if it does not fit newly emerging observations, you may simplify it, or compound it, or discard it entirely and replace it with something new, either simpler or more complex.

Ockham spoke of multiplying assumptions unnecessarily; he offered no criticism of multiplying essential assumptions necessarily.

Practically any branch of science could be adduced to illustrate these principles, but astronomy-and-cosmology is the most visually dramatic: the heavens rotating round the Earth, the Sun circling the Earth, the planets circling the Sun, stars as solar (stellar?) systems in their own right, nebulae as clouds, galaxies as milky ways of many types.

Ockham’s razor would have no special role in many such surges of progress, in which many of the steps led to barely thinkable changes of viewpoint.

It certainly is true that in any particular view of a field, Ockham’s principle might be useful, even vital, but it is no substitute for basic principles of science, such as re-thinking theories when they begin to conflict with new observation or logic.

In short, Ockham’s principle is not all there is to science, especially exploratory science; his principle is primarily of parsimony, which is good, but pioneering observation, insight, abduction, are important as well, and so is explanatory richness. When proposed theories have been formulated, there is plenty of time to consider reformulating them more parsimoniously, or even rejecting them outright.

It is called research.

Magic.

If all our common‑sense notions about the universe were correct, then science
would have solved the secrets of the universe thousands of years ago.
The purpose of science is to peel back the layer of the appearance
of the objects to reveal their underlying nature. In fact,
if appearance and essence were the same thing,
there would be no need for science.
Michio Kaku

I use the term “magic” several times in this discussion, italicised to avoid confusion with more familiar contexts, so let’s clarify what I mean by it.

I do not mean what Arthur C. Clarke meant when he said that any sufficiently advanced technology was indistinguishable from magic. He did have a point of course, but that is not the point I deal with here.

From time to time in this essay, I propose more or less Socratic thought experiments in which I exclude some practical considerations, or internal contradictions, or I introduce unjustified assumptions that are incompatible with real life, or where I invoke impossible powers, always purely for the sake of illustration.

For example I might speak of doing things with something too large to fit into our observable universe, or of balancing a vertical tower of a few dozen loose snooker balls, each on the one below. It might be something mathematically describable on suitable assumptions, or it might not, but it generally would be something that is not practically possible — such as violation of thermodynamics.

To support anything of the type in the real world would need magic.

So, if I speak of “magic”, that is all there is to it: nothing to do with superstition or witchcraft or anything occult or mythology: it is pure analogy or abstraction for convenience in illustrating a principle — not proof, please note, just hand‑waving to avoid getting bogged down in unconstructive quibbling.

 

Cliché: Why is There Anything At All?

Over your head
The rigid, pure, persistent ray
Pierces the darkness like a blade,
Wherein is no thing seen
Save that the dust-motes in their millions
Eddy and play
In carols and cotillions,
Until it breaks upon the screen,
And then
Appear the shapes of driving clouds
And desperate men
Sailors in the shrouds
Of labouring ships,
Sails shaking,
Seas breaking,
Men and the sea at grips;
The empty, lifeless band of light
On unimaginable waves
Carries the terrors of the stormy night,
Dragged from their graves,
And makes to live again
The struggling men
In your sight.
Just so our earth,
"With all its striving and its stresses,
Its tears,
Its mirth,
Its loves and hates.
Riven souls, relentless fates,
Cities proud and haunted wildernesses,
Is not, as men have guessed,
Some god’s uneasy dream,
Or selfish jest,
But just the interruption of a beam.
Arnold Wall    The Cinema

Yes, why is there something rather than nothing (assuming anything at all)?

Is There Really Anything but Solipsism?

It is necessary for the very existence of science that minds exist which
do not allow that nature must satisfy some preconceived conditions.
                             Richard Feynman

Let's first deal with the idea of solipsism: that nothing exists but myself, and even then your existence or mine is only a personal delusion: essentially a nothing that vainly fancies that it exists.

Well, I can fancy myself denying or imagining the existence of gods, unicorns, and Rumpelstilzchen, in some senses, but how I can imagine my own existence unless I exist to do the imagining, defeats me. I am not a great fan of Descartes' cogito ergo sum, but it is not without point. To imagine is by definition to do something (imagining!), and in my opinion anything that at least does anything, even if it does nothing more active than to lie as a doorstop and exert a gravitational effect, thereby exists; that practically amounts to the definition of existence.

As definitions go, it has elements of circularity, but so does cogito ergo sum, so I move in exalted circles.

But solipsism leaves me with major unresolved questions. Take "cogito ergo sum" itself: "I think, therefore I am" might establish my existence, but the same form of argument applies beyond me; it does not deny, for example: "You claim to think, therefore you exist", and various similar speculations that various entities contribute to events. If the implication is valid for me and my existence, it is equally valid for other agents and operations, and for that to be possible would immediately and intrinsically negate solipsism as strongly as solipsism would negate non-existence. Not to mention the argument that my perception of them constitutes existence.

Still not proof, but then "I think therefore I am" also is not proof; as Alan Perlis, for example, pointed out:

"You can’t proceed from the informal to the formal by formal means."

To me it equally is not clear that one could proceed from the formal to the informal by formal means; one certainly cannot prove the consistency of a sufficiently powerful formal system from within that same formal system.

So immediately I reject solipsism: it is unpersuasive at best, vacuous, and most certainly based on an arbitrary assumption, namely that thinking implies existence; even if that were in fact a cogent assumption, it would not follow that it is the only ground one could find to support the concept of existence.

To begin with, I introduce the concept of an algebra of physics. There will be more about that later, but to avoid confusion at this point I define algebra here without explanation and without originality, as follows: an algebra, whether formal or material, consists in a set of objects or object types, plus a set of operations on objects in that set. We most familiarly think of numbers as the objects that algebra operates on, but that is not a logical requirement; there is an indefinite range of algebras, operating on an indefinite range of object types or categories of types, and numbers are just a limited subcategory.
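
For a concrete, if trivial, illustration (wholly my own construction, with invented names), here is that definition in Python: an algebra is just a set of objects plus operations that act on them, and nothing requires the objects to be numbers.

    # A minimal sketch: a finite set of objects plus one binary operation.
    from dataclasses import dataclass
    from typing import Callable, FrozenSet

    @dataclass(frozen=True)
    class FiniteAlgebra:
        objects: FrozenSet[int]
        op: Callable[[int, int], int]    # the single operation of this algebra

        def is_closed(self) -> bool:
            # Closure: applying the operation must never lead outside the set.
            return all(self.op(a, b) in self.objects
                       for a in self.objects for b in self.objects)

    # Addition modulo 5 keeps {0..4} closed; plain addition does not.
    mod5 = FiniteAlgebra(frozenset(range(5)), lambda a, b: (a + b) % 5)
    plain = FiniteAlgebra(frozenset(range(5)), lambda a, b: a + b)
    print(mod5.is_closed(), plain.is_closed())    # True False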

Now suppose we assume that a universe exists, how could it be meaningful to assume that a universe with no algebra — no patterns of implications of events — could exist? Having no algebra would make it very difficult to define an event, and to eliminate the concept of an event would practically define non-existence.

And an assumption such as "I think therefore I am" is an operation of implication, and accordingly is a component of an algebra of physics. Imagination in turn is a class of thought, and as such is crudely physical, comprising operations on information in a brain — all such things being fundamentally physical.

And I claim that to deny that what is physical exists, is a contradiction in terms, because denial comprises physical objects, operations, and events. We may neglect the denial as being self-nullifying.

And how do I justify my claim that imagination demands and comprises information? Because there are many different alternatives to any item of imagination. You might imagine some thing, and I some item that conflicts with that thing; again, I might imagine first one thing, and then another, different thing. Information is what it takes to determine any difference between things, whether the differences are intrinsic in nature or lie in external coordinates.

For example I might imagine a round bottle and a square block of ice (two things that intrinsically differ materially) or imagine a near bottle and a far bottle (differing only in coordinates) — telling such imagined objects apart is a matter of information.

Those who assume the delusionary nature of this world commonly assert that the alternative assumption — the assumption of the actual existence of an enormously complex material universe — is invalid because it falls foul of Occam’s razor; but that view is open to disqualification on two grounds: firstly, Occam’s razor itself actually is more of an arbitrary assumption, a rule of thumb, a sensible practical principle, than a cogent disqualification of every new idea; in practice many an intellectual or technical advance does demand the introduction of new concepts. This may come about either by splitting an existing concept into distinct concepts, or by introducing a radically new additional concept.

Truth is not to be determined by counting concepts.

Obedience to Occam’s exhortation is no more than a healthy mental habit: that of parsimony of concepts; once make an idol of it, and you find that, as with other idols, there is no breath at all in the midst of it.

Such creation or recognition of a new concept, Ockham never denied, but many who invoke his words seem to think that they automatically are justified in kicking up a fuss any time that they fancy that anyone is introducing a new concept. And, since the world is large and knowledge and language are small, new concepts continually crop up and need identification and assimilation.

But even without rejecting or constraining Ockham, the assumption that we exist, and that each aspect of the world exists more or less as we perceive it, is less of an assumption than that everything we appear to perceive is not there, but just an ad hoc illusion. That illusion comprises information. Someone or some thing that creates or comprises the information is just as unlikely as the information itself, and would amount to a go‑between; we would then be left with something just as complicated as the physical thing, plus the go‑between.

Therefore the parsimonious assumption is that the world does exist, rather than that I personally imagined it. I still might be wrong, but if so, I propose that my view falls down mainly in my inability to identify what, later in this essay, I shall refer to as the bottom turtle, the truly basic assumptions that I am unequipped to identify or characterise.

And as far as I can tell, our world is a world of implication and consequence — but it does not follow that it is a world of complete or perfect information. More about that later.

Why "Why?"? Well, if Anything: "Because why".

Trying to answer rhetorical questions instead of being cowed by them
is a good habit to cultivate.
                             Daniel C. Dennett

What I write here will not delve deeply into the question of why things exist, firstly because the way that question is asked usually amounts to a non‑question. In fact it usually turns out to be an elaborate exercise in question begging.

The question itself implies preconceptions of many sorts, such as the meanings of "is", or "existence", or "concept", or "event", and, first and perhaps most of all, of what it means to ask "why".

The very idea of existence is troublesome; I take it for granted here, because the question itself takes it for granted. In a later section of this essay I discuss existence, and establish it as a useful concept in context, but it is too long for this section, so let it wait.

As for "why" ...!

For years I have regarded "why" — and the associated "because" — as being among the most treacherous words in human languages: in their various forms their senses and meanings differ radically, and people confuse them, often so badly as to make no sense at all.

In one sense the sorts of things people mean when they say "why" or "because" commonly have to do with the way things happen: "this might be expected to cause that, or imply that, or this did cause or imply that, or this frequently or always will cause that, or be followed by that". Things tend to happen according to what we might call the logic or algebra of the universe, and, in particular senses, words like "why" and "because" deal with that assumption that things happen because the reigning rules of the world imply them, whether in the past, present, or future. In fact in some languages the word "why" literally means something like "how come" or "for what"?

In such terms, "why" and "because" (or, if you prefer, "wherefore" and "therefore" or "caused by" or something similar) have to do with the logical operations: implication and inference, or if you prefer, consequence, or deduction. And those are among the most fundamental logical operations, both in formal logic and in dealing variously with what we see as reality. In logic or mathematics as opposed to physics, "why" and "because" refer to how axioms lead to the theorems or conclusions that follow from them — how they imply them — and from which axioms, assumptions, and arguments a conclusion might have followed.

And yet, that difference is artificial: in physical science, we empirically find that the world wags in a particular way: that events lead to other events when entities interact, and that the interaction is not in all cases purely arbitrary, but partly predictable, at least in principle and according to rule. Based on that finding, we might be willing to say why a billiard ball might or might not fall into a particular pocket, or if it already is in the pocket, in how many ways it might have got there, and in which other pockets it might have landed instead, or why we do not believe that the ball appeared as a pigeon that landed on the cloth, turned into a billiard ball and rolled into the hole.

From that point of view, we argue that the universe behaves according to certain rules — not a book of rules that anyone promulgated, but rules that reflect the way things consistently happen and cause other things to happen.

In short, such is the way that events and states follow other events and states. In our universe the algebra does not facilitate the conversion of pigeons landing on billiard tables and turning into billiard balls. It may not, it usually does not, predict that exactly the same thing will follow the same causes in exactly the same way every time, but the ways events imply other events commonly are fairly consistent, and sometimes are highly precise.

Why do we argue that way? Well, for one thing, it is hard to imagine a universe without constraints on what sort of events follow particular precursors. In fact I suspect that, given the way any universe might work, the concept of an empty universe, or the non-existence of any universe anywhere, or the existence of any universe that has no particular behaviour patterns, somehow leads to some internal inconsistency. Don't ask me to justify that suspicion. And don't ask me to imagine what sort of fundamental primitives could be the basis of an internally inconsistent universe: I boggle.

Anyway, later I shall discuss the concept of an algebra of physics in more detail.

The rules of most formal disciplines are at least as constraining as the rules of physics are in empirical studies. For example, if we are told that the result of an unspecified calculation (boringly) is 211 in decimal notation, then we can find all sorts of calculations that could have given that answer.

We also, arguably more importantly, could find all sorts of calculations that we know could not have given that answer: say, multiplication or addition of any two prime numbers. (Bear in mind: no matter what logic you prefer, in current number theory, the number 1 is defined as not being a prime! This is because each prime has precisely two distinct divisors, whereas 1 has only one divisor: 1. And that is not someone's idle whim; it has important consequences. I do not discuss it here, but if you doubt me, read up on the Fundamental Theorem of Arithmetic).
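
That claim about 211 is easy to check mechanically; a throwaway sketch of my own:

    # 211 cannot be the product of two primes (it is itself prime), and it
    # cannot be their sum either: 211 is odd, so one summand would have to be
    # the even prime 2, and 209 = 11 * 19 is not prime.
    def is_prime(n: int) -> bool:
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    target = 211
    prime_sums = [(p, target - p) for p in range(2, target)
                  if is_prime(p) and is_prime(target - p)]
    prime_products = [(p, target // p) for p in range(2, target)
                      if target % p == 0 and is_prime(p) and is_prime(target // p)]
    print(prime_sums, prime_products)    # [] [] : no such pair exists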

For a more crudely physical example: two isolated bodies in free fall in space, in mutually stationary circular orbit around each other, will remain in that orbit until they are disturbed from outside. Again, that will not tell us how they came to be in that orbit, but we can identify all sorts of things that could not have given rise to the situation.

So far so tedious, but important.

People often speak of those ways in which things happen, or apparently tend to happen, as “laws” of “science” and similar names. Such terms are harmless, as long as one does not confuse the patterns of events with laws in the sense of human laws. The two concepts have little to do with each other. One might as well speak of "laws of arithmetic" being something to do with human legal systems. In fact, if that use of the word "laws" means anything at all, it really means something more like: "the way things happen when one applies the various operations defined in that arithmetic".

The "laws" also have not much to do with science: science does not make rules, though scientific work might lead to the discovery of rules, patterns of events, and, at some level, to developing some idea of why and how some of those rules apply.

As it happens, there is yet another aspect to our real world, something that many people do not realise; in fact many would categorically reject the idea, although it is inescapable and very important:

Sometimes there is no “why” nor any “because”.

In other words, as I shall point out, some things do happen randomly in some respects — truly, inescapably, randomly.

As I use the term “randomly” in this connection, it means not just that we ourselves happen to lack the necessary information to guess why one particular thing should happen rather than another; it also means that no information exists anywhere in the universe that determines that that particular thing should happen rather than some other; at least not until after the event has happened, and often not even then.

Such missing information physically does not exist — not for you, not for me, not for Schrödinger's cat, not at all. Also, it means that if there is some such information, but not enough to determine the outcome absolutely, then such non‑definitive information favours some notionally possible outcomes rather than some others. This means that such information makes those outcomes more probable: “more probable” means that such information as does exist makes the more probable outcomes occur more frequently if that class of event is repeated indefinitely.

How far the principle of non‑existent information applies to the tossing of a fair coin or a die, such that it comes to rest more often on a face than on its edge, I cannot say, but it certainly would apply to the fact that there is a greater probability that any particular isolated atom of an isotope of uranium will spontaneously undergo alpha decay rather than that it will split into say, barium plus krypton plus a job lot of neutrons.

With due respect to Einstein, and due credit to Hawking: God most assuredly does play dice, and does so on a scale that numbs the mind. Of that more later, but for now we can ignore the point, though we shall have to return to it.

As I see it, those two types of questions, logical and physical, do not differ in essence, because I am of the opinion that mathematics is a branch of physics, and not vice versa. (Should I prove to have been wrong in this, the two types, logical and physical, still need not differ fundamentally, but let that go for now.) I accept and affirm that information is in at least some real, literal senses physical.

Mathematics, I repeat, is a branch of physics: without physics there can be no information, no implication (as opposed to determinism), no relationships, and therefore no logical operations. To eliminate all such things would at a stroke eliminate all forms of mathematics, whether applied or purely formal, whether meaningful or just meaningless noise. Relationships and logical consequences just could not apply, let alone exist in any sense, without at least the physics of information, and conversely, given physics, or any conceptually related system in any imaginary universe, the possible mathematics pops out as a consequence, an abstraction, of the way things are.

Accordingly, even the most purely formal mathematics is real and material, just as physical observations are real: it follows that there are no ideal points or lines in nature; and that identical bosons can share each other’s location, whereas identical fermions cannot.

These things follow from the nature of the objects in our universe, plus the operations that they can undergo.

A universe in which no events could happen, even in principle, would be hard to define, let alone imagine, and one in which events do not affect or determine each other, whether rigidly or not, would be no better. On the other hand, a universe in which events arise as plesiomorphically consistent interactions between entities, would necessarily imply causal behaviour: formal operations on objects, with consequences in line with their probable outcomes. Those operations and objects would simply be aspects of the physical algebra of that universe.

By way of naïve example, to imagine a non-trivial universe without those implications, would make as much sense as expecting water not to splash when poured; whatever happens in a universe is part of the consequence of the algebra of that universe. And the formal and material nature of the algebra and behaviour of the universe are implicitly inseparable; each is part of the same thing. To ask that three plus three not formally make six, and vice versa, is to ask that six eggs not make half a dozen eggs.

Go ahead and try to imagine such a universe; you will find yourself in difficulties because you are trying to do it with a brain and an algebra constructed and operating according to the algebra of our current universe.

Of course, some metaphysicists might deny that any physical "realities" "exist", but if that were correct, then metaphysicists could not exist either, so what could the maunderings of nonexistent metaphysicists matter?

Furthermore no mathematical or logical state or assertion can exist meaningfully and no operation, formal or otherwise, can be performed on any entity, without constraints imposed by information — information that in turn cannot exist without mass/energy/space/time and all the things that, in one form or another, in one way or another, in one combination or another, make up our basic existence as far as we can tell, and do so according to that algebra.

Consider: in principle any formal statement or proposition can be manifested mechanically or materially (for example, by writing it in ink, shouting it into a void, chiselling it into tablets of stone, typing it into a mechanical calculator, constructing a computer to model such a statement, or impressing it into a brain) and, in principle, any such mechanical representation in turn can be formally described at least at one further level of abstraction. Also therefore, in principle, any mechanical system can be mechanically abstracted or duplicated at least at one more level by some other mechanical system. In each case, some imperfect representation, some plesiomorphism, is involved. (I discuss the term plesiomorphism in a coming section; pending that, it will do to think of it as similar to "isomorphism".)

Conversely, nothing of any of those types can occur or persist or be stored or transmitted without physical media. Information generally is exponentially smaller than the material system of which it constrains the states, but without the material and its states, it is nothing. No material, no information, and without either: neither and nothing!

Purely formal operations, such as binary or other Boolean implication a→b (which holds precisely when a≤b on the truth values), should not be confused with logical consequence. Formal operation deals with truth values, not with the states, events, and objects that might have encoded or transmitted or embodied the relevant truth values or their processing; and the fact that two values are consistent with an implication relationship has nothing to do with cause or consequence.
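
A tiny truth-table check of that reading of implication (illustrative only, with False and True standing for 0 and 1):

    # Material implication a→b is false only when a is true and b is false,
    # which is exactly the condition a > b on the values 0 and 1.
    for a in (False, True):
        for b in (False, True):
            implies = (not a) or b
            print(a, b, implies, implies == (a <= b))    # last column: always True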

Operation à la mode

To deal with the relationships and nature of the objects on which the operations of an algebra may be performed, we need an extra concept; that of modal operations, modal logics, modal algebras, that can take the natures of the objects into account. The concept is enormously important, but too large for me to do more than to recognise in the context of this essay.

Without due consideration of modal concepts, equality of measures, such as cardinal numbers or truth values, does not mean identity of the properties of the populations subjected to the operations of comparison. That would be analogous to arguing that four apples are identical to the number 4, so that four bricks would be as good as four apples for making apple pie.

Similarly, an implies-and-is-implied-by relationship between truth values does mean quantitative equality of the truth values, but it does not mean identity of the things from which the values were derived: the number 1 applied to drops of water does not behave like 1 applied to probabilities or to ball bearings, and 1 ball bearing is not identical to 1 thousand events, or 1 event, or 1 probability.

Such modal considerations can be very profound or very obvious, but in either case are very important and not safe to neglect. They generally include assumptions that are context-dependent, such as being related to time, ethics, mathematics and many more.

As for what information fundamentally is: I regard information in any given form or context, as being any physical state that distinguishes alternatively possible, perhaps hypothetical, states or other entities from each other.

Pardon the vagueness of my terminology, but I am unsure that I have the vocabulary to put it more plainly. In fact I am not sure that humanity has yet defined such a vocabulary, let alone generally accepted and comprehended one.

The Stubbornness of Underdetermination.

A scientist seeks the truth, wherever that may lead. A believer already knows the truth,
and cannot be swayed no matter how compelling the evidence.
Anonymous

Wherever we cannot definitively and uniquely assign the origins and causes of observed states or events, we say that the origin or causes are underdetermined, meaning that more information would be necessary to distinguish between some alternatively possible causes or origins, or to associate observed states with whatever had led to them.

Suppose for example that I saw a coin tumbling down, a coin that I accept to be fair, and I watch it in due course settling flat on what I take to be a fair plane surface. Suppose that from my position I could tell the location of the coin, and that it landed flat, but that I could not see sufficient detail to decide whether it shows Heads or Tails, as seen from my side of the plane. If so, I still would need enough extra information to tell the toss.

In principle, one extra bit (binary digit) of information could suffice. I already have a great deal of information, such as that the coin did not settle on its edge, nor shatter on landing, nor land somewhere else, but my information still is not complete, and I propose that I never could have complete information about any material situation. All the same, such an extra bit of information already is enough to limit the item of usual interest either to Heads or to Tails. Had we tossed an octahedral die instead of a coin, three bits (equivalent to one octal digit) would have been necessary to say which face was up. And a fair hendecimal die would require one digit to the base 11 (nearly 3.4595 bits), to express which of its 11 faces had come up.

(Challenge: design such a die.)
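
All those bit counts come from one formula: with k equally likely outcomes, identifying the outcome takes log2(k) bits of information. A one-line check per case, for illustration:

    import math

    for name, outcomes in (("coin", 2), ("octahedral die", 8), ("hendecimal die", 11)):
        print(f"{name}: log2({outcomes}) = {math.log2(outcomes):.4f} bits")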

A more fundamental example of underdetermination is: you find an ordinary six‑sided die lying with one face up — how many alternative ways could it have got into that situation? Was it tossed? Dropped accidentally?  Carefully placed? Shaken in a cup and thrown? How many times did it bounce, and in which directions? Was it created supernaturally and left there in that position a moment before we looked? Formed from a scrap of meteoric material that happened to come down in that shape?

Not all those explanations and speculations are equally likely, but in principle all those and more are possible. And there are indefinitely many subtler variations on each possibility; suppose for example that you had good evidence that the die had been tossed from a cup — would that determine the colour of the cup or how many times the cup had been shaken or with the left hand or the right? And how relevant would each possible variation be, to which face was on top?

Those variables might or might not affect the position of the die, or the number appearing on top, but every one might be important in some other connection: elsewhere in this essay I illustrate that there is no logical limit to how small an event might have indefinitely large consequences in our universe.

From at least as early as the time of Newton, and as late as the time of Laplace, the dominant view of physicists and philosophers of science (irrespective of any religious or fatalistic view) was roughly that every physical situation was rigidly determined by what had gone before, and equally rigidly in turn determined its consequent events. That view we call determinism. It still is popular among people who have not studied the realities of the matter deeply enough. However, in principle it could have been faulted even at the height of its popularity, and when quantum theory became established in the early twentieth century, determinism was essentially dismissed as a principle of physical reality. This did not invalidate the concept of causality, but causality is something subtly, though importantly, different.

Anyway, such concepts concern underdetermination: something occurs or is observed with greater confidence than the confidence with which one can assign a specific cause or explanation. Note however, that underdetermination as a principle is not limited to past causes, but applies to future effects as well; as I point out in this essay, both quantum mechanics and classical physics imply underdetermination of predictions, as well as of past causes of events.

I suspect, but am uncertain, that underdetermination of future events and underdetermination of past events arise from similar principles. However, that does not imply, as Laplace suggested, that if we somehow magically turned time, or the course of events, back to front, like running a cinema film backwards, we would see everything running exactly in reverse; all sorts of things are unfriendly to that idea, and our universe is in many ways unlike a film show of deterministically successive frames that can run backwards as well as forwards.

For example imagine setting a vacuum cleaner to blow, and using the blower to clear a sprinkling of sand off an area of floor. Once you have blown a strip of sandy floor clear of sand, you stop, connect the nozzle to the suction end, and see whether the suction will bring the sand back to where it had been before.

It doesn’t even begin to work, does it? The distant sand doesn’t stir, and the closer sand beside the clear strip, vanishes up the nozzle.

This is one aspect of some very important principles that Ilya Prigogine clarified: principles that prevent time from running in reverse. Nor, if time were to run in reverse, whatever that might mean, would the world be indistinguishable from time running forwards; there would be no sudden ability to use sucking to reverse what blowing had done, nor for a toppled needle to stand erect where it had been balanced on its point, nor for the shards of a broken window to fly together again as the ball that broke it retraced its flight.

There are whole categories of effects that, even if they do not absolutely forbid the undoing of events by running all the particles involved in reverse, would require magic to do so. The vacuum cleaner is a good example: in blowing the sand, the air blows in something like a narrow stream, so that the force of its blast decreases not much more than linearly for a fair distance, so, held steady for long enough, it can blow the sand quite far. However, when it sucks, its suction swallows air from almost a complete sphere of air, but only near to the nozzle, so that the strength of its suck decreases roughly with the square of the distance from the nozzle. For practical purposes one could forget about reversing the effect of blowing, by changing to suction instead.
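
A deliberately crude numerical caricature of that asymmetry (my own toy model, not serious fluid dynamics): treat the blower as a narrow jet whose far-field speed falls off roughly as 1/r, and the suction as a point sink drawing from a near-sphere, falling off roughly as 1/r².

    # Relative air speeds at increasing distance r, in nozzle-lengths.
    # The exponents are the crude caricature assumed above, nothing more.
    for r in (1, 2, 5, 10, 20):
        jet = 1 / r            # narrow jet: roughly inverse-linear decay
        suction = 1 / r ** 2   # spherical sink: inverse-square decay
        print(f"r = {r:2d}: jet ~ {jet:.3f}, suction ~ {suction:.4f}")

At twenty nozzle-lengths the jet retains a twentieth of its strength, the suction a four-hundredth; hence the sand blown far away stays put when you switch over to suction.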

And pointing out that in either situation, the elementary particles involved would be obeying the same rules irrespective of their direction, cuts no mustard. Their immediate effect on each other is mainly short range — almost unimaginably short range when we think of a proton bouncing off a neutron, whereas a blast of air molecules or the turbulence of a flowing fluid takes effect over millions of millions of times greater distances.

There is a good deal more to that topic, but that will do for now. As I see it, in such terms information, or the lack of information, in its role in determining or underdetermining events, is about as close to fundamental as anything can be in real life; always assuming that real life really is real in some relevant sense.

Which in my real opinion it really is — in many ways at least.

Whether there is anything still more fundamental that determines the nature and history of our physical, empirically observed, universe, whether it all comes down to quantum entanglement or any similar principle, I have no idea and I cannot imagine how to discuss such a matter in non‑circular terms.

But be that as it may, the concept of what sort of answer to give to a "why" question is not always the same as the concept of “law of nature”.

"Why" also refers to questions of justification of actions, of how we base them on motivation and justification in terms of personal values, of opinions, of rationale. "Why did he do it?" or "Why should I do it?"

More trivially it can amount to temporisation, such as in: "Why, I think I can ...".

And more, according to taste. But the main point is that we must distinguish between those meanings, not just assume that the idea of "why" is obvious; otherwise we never have any coherent meaning for "why" questions or "because" answers. It follows that the question of why anything exists at all, reduces to confusion because the asker rarely has worked out what the question means, nor what it could mean, if anything, nor what sort of answer could settle such a question. And if he has worked it out, then he needs to express the question in answerable terms. As it stands it is not answerable.

In the 1950s Robert Sheckley wrote a brilliant short story called “Ask a Foolish Question”, a story that in my opinion every would-be philosopher of science should read with care. It is available online. I recommend the story to anyone who doubts my reservations on “Why is there Something Rather Than Nothing”.

Elsewhere I discuss such matters from other points of view, but for now I do little more than to note the point that it is conceivable that the concept of nothing, or a universe of nothing, a null‑universe, if you like, might prove to entail some internal inconsistency, so that the very idea of there being nothing, in that nothing exists, instead of something, would be meaningless.

But such fields are treacherous at best. Martin Gardner quoted Bas van Fraassen’s pretty quip: "The fool hath said in his heart that there is no null set. But if that were so, then the set of all such sets would be empty, and hence, it would be the null set. Q.E.D."

And, as Kipling said: "there was a great deal more in that than you would think."

Or possibly less.

Accordingly I ignore such questions as a rule, but some of their forms do arise in the following discussion, so I try at least to dispose of them first even if I cannot answer them meaningfully. And the first step is to establish the position from where we start. At times I myself use the word "why", but then I do try to make it clear what the sense is.

But for the present, I just accept that something other than nothing actually exists. A sort of lazy man’s axiom or, more properly, assumption. That question is not the intended topic of this essay.

But thinking about it, at least a little, can save a lot of head bumping.

So, if my assumption, lazy or not, is wrong, make the most of it.

That assumption entails some strong suspicions, even if it does not formally provide an answer to the question. In particular, it suggests that if something does actually exist, seemingly at least as part of a universe, then some subsets or component structures seem to exist within that universe. Without components, how could you have a universe with any content that is not the whole universe? And it suggests that for any component to exist as experienced by other components (components such as ourselves, or atoms, or stars) then their existence must mean that components in their various combinations cause events by limiting the forms of their actions or interactions.

For example, existing entities could interact by such principles as certain classes of entities not occupying all the same coordinates at once, or they could interact by attracting each other gravitationally; and when they do interact, there are outcomes that differ from what the outcomes would have been if there had been no interactions. That is one view of what existence means, if anything at all.

And such a meaning has crucial implications for the concepts of entities, events and causes.

Of which I might say more when we encounter them, from time to time.

 

Semiotics, Language, Meaning, Comprehension

Semiotics is in principle the discipline studying everything that can be used in order to lie.
If something cannot be used to tell a lie, it conversely cannot be used to tell the truth:
it cannot in fact be used ‘to tell’ at all.
I think that the definition of a ‘theory of the lie’ should be taken as
a pretty comprehensive program for a general semiotics.
Umberto Eco

If I had any sense I might have omitted this section, but it might help in justification of why I did not omit the whole document.

Semiotics is one of those simple terms that cover such wide fields of concepts that we cannot define them simply. The subject at one time was regarded as too recondite to be of interest, until in recent decades it was adopted by pretentious authors, critics of arts and politics, who collectively diluted it nearly to meaninglessness. I do not pretend even to define semiotics properly, but hope to put a few important items into perspective. For anyone unfamiliar with the field to get a proper understanding, I recommend books by genuine semioticians, such as Umberto Eco, who wrote the readable "A Theory of Semiotics". There also are valuable Internet articles in Wikipedia and the Stanford Encyclopedia of Philosophy.

Here however, I hardly more than illustrate a few concepts relevant to this text. Semiotics at its most essential, has to do with information, and information is fundamental to my discussion.

Semiotics in particular, deals with communication, the signals, signs, words, tokens or pictures, the ways they function or are used, and the ways in which they affect the users or subjects that play roles in communication.

That is a broad field, with more topics than most people realise, and here I deal mainly with three classes of subject that readers might do well to bear in mind in making sense of this document. Many books have been published on each of them:

  • Semantics: deals with the relationships between signs and their meanings. When people argue about the implications of attaching different meanings to the same word, or the same meaning to different words, the problems that arise are largely semantic. It accordingly is important to be sure that in any discussion the participants share the same semantics. For example, if someone who is thinking of his smallholding as a farm gets into a farming discussion with a rancher, and neither realises their differences, things may go badly wrong. This is such a common class of problem that innocents, especially political bigots, who have trapped themselves in logical blunders, blame "semantics", thinking that the word is simply a fancy way of saying "quibbles"; that itself is a semantic error, an error that reveals the perpetrator’s lack of education.

  • Syntax: deals with the relationship between signs in the same message. This takes many forms, both in similar messages of the same form and in different forms. In a given language there might be grammatical differences for the same word in case, voice, and the like, or in the sequence of words. Compare:
    • "Him she doesn't like." with "She doesn't like him." They have largely the same meaning, though the subtexts may differ.
    • "Bob likes Alice." with " Alice likes Bob." They do not necessarily contradict each other, but do not mean the same thing at all.
    • “Stand)?doggerel floats am” does not clearly mean anything because it is not obvious that any part of the message has any functional relationship with any other, unless you accept that the words are correctly spelt.

Such games can be elaborated indefinitely, but syntax in various media, such as in different languages, whether written, spoken, gestured, is important not only in understanding, but in efficiency and reliability. It overlaps semantics in more ways than are immediately obvious.

Syntax is intrinsic to mathematical notations, as much as in verbal languages; for example, in common infix notation, the expression:

    (a+b)×(c-d)

has about the same meaning as the postfix notation:

    ab+cd-×

The difference is essentially in the syntax.
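
To see that the two notations denote the same computation, here is a minimal postfix evaluator in Python (the names and the sample values are my own inventions):

    # Evaluate a postfix token list with a stack; operands are looked up in env.
    def eval_postfix(tokens, env):
        stack = []
        for t in tokens:
            if t in ("+", "-", "×"):
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if t == "+" else a - b if t == "-" else a * b)
            else:
                stack.append(env[t])
        return stack.pop()

    env = {"a": 7, "b": 3, "c": 10, "d": 4}
    infix = (env["a"] + env["b"]) * (env["c"] - env["d"])
    postfix = eval_postfix(["a", "b", "+", "c", "d", "-", "×"], env)
    print(infix, postfix)    # 60 60 : same value, different syntax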

As an example of the role of syntax in semiotics, consider the kerfuffle that appeared in the magazine “Popular Mechanics” on July 31st, 2019. It included a problem that they said drove their “entire staff insane”. I saw it only recently, and it is either very pretty or very ugly, depending on how you see it. The difficulty arose because at least the majority of those challenged did not recognise the fact that it was not a problem in mathematics, but of notation, and therefore of syntax. In essence the problem was to evaluate the expression:

                           8÷2(2+2).

If you have never seen the problem before, you might like to evaluate it yourself before reading on. Apparently they got answers ranging from one to sixteen.
Stop now to think it out if you are interested enough; this paragraph is in effect a

s
p
o
i
l
e
r

p
r
e
v
e
n
t
a
t
i
v
e.

  • If you have not yet peeped and wish to think it over, now is the time, else carry on.
    The fact is that the problem is one of semiotics rather than mathematics, and in particular a problem of notation; that is, in this case, of syntax.

    The syntax in question determines the sequence of operations and thereby the answer. But the syntax depends on the context: specifically the notation, the convention adopted by the interpreter. And that notation is not logically defined; in fact, in most contexts the input string is simply wrong, and as such, meaningless. It certainly is meaningless as reverse Polish notation, and would be bounced by any calculator I have seen and any computer language I have used, though it is in principle possible to write a forgiving compiler or interpreter that would interpret it consistently in the same sense as school arithmetic.
    Three of the most obvious answers, each of which could be based on a consistent syntax, would be 1, 10, and 16. Most compilers would simply baulk at the expression as given, and some would give different answers anyway.
    So, who would be right?

    Trivial.

    The right notation and interpretation would depend on the semantics. To argue that point would be like arguing whether someone speaking German, rather than someone speaking Polish, was speaking properly — when addressing someone whose language happened to be English.
    Most computer languages use infix notation with explicit operations; so, to get them to accept, and correctly interpret, your input, you would have to correct it to one of:
    • 8÷2*2+2 or
    • 8÷22+2 or
    • 8÷2*(2+2) or
    • 8÷(2*(2+2))
  • And each would give you a consistent answer, though not all give the same answer, because they are not the same instruction. None is wrong unless it fails the required notation, and none is right unless it is in whichever notation, in semiotic terms the syntax, the relevant system requires. The sketch below illustrates the point.
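As a sketch of that point, with Python standing in for "most computer languages": the two disambiguated forms evaluate consistently, and the ambiguous original is not interpreted at all.

    # The two defensible readings, made explicit:
    print(8 / 2 * (2 + 2))    # 16.0: strict left-to-right evaluation
    print(8 / (2 * (2 + 2)))  # 1.0: implied multiplication binding tighter

    # The original string is rejected outright: in Python "2(2 + 2)"
    # parses as an attempt to call the number 2 as a function, and
    # raises TypeError: 'int' object is not callable.
    # 8 / 2(2 + 2)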

 

  • Pragmatics: deals with the relationship between the message and its effects on the participants in the communication. Pragmatics might be affected by choice of words, of voice, of topic; think of ideas such as "damning with faint praise", of telling the wrong joke in a given company or on a given occasion. Think of tact; as Ernest Bramah pointed out: "Although there exist many thousand subjects for elegant conversation, there are persons who cannot meet a cripple without talking about feet."

    That would be a palpable blunder in pragmatics.

In these connections language is the system of signs and messages you use; it can be in many forms, such as spoken words, signalled words, coded words, gestured words, technical words, jargon words, and, always in suitable contexts, sentences, strings and structures.

In such senses language and notation can include symbolic statements such as:

"(a,b,c,i,j Î ℕ)&(a=ij & b=i2-j2 & c= i2 + j2)(a2+b2=c2)

That is equivalent to a verbal description of what determines a Pythagorean triple.
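For readers who prefer to see such a claim checked rather than merely stated, here is a quick spot check in Python (an empirical sketch over small i and j, not a proof):

    # a = 2ij, b = i*i - j*j, c = i*i + j*j gives Pythagorean triples
    # for natural numbers i > j, e.g. (4, 3, 5), (12, 5, 13), ...
    for i in range(2, 6):
        for j in range(1, i):      # j < i keeps b positive
            a, b, c = 2*i*j, i*i - j*j, i*i + j*j
            assert a*a + b*b == c*c
            print((a, b, c))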

The relationship between an algebra and a language is very close, as you may see later in this document.

Language is fundamental to most of what we commonly call communication, including what we might call selective communication, in which we intend the exclusion of some communicators from some of the messages. Examples of such exclusion include enemy communicators, "outsiders", and "not before the children".

Think of the two women who had come to visit a friend, who left them in the front room with her daughter while she went to prepare tea. The daughter, six or eight years old, was no beauty, and one of the visitors spelt out to the other: "Not very p-r-e-t-t-y!"

"No," said the little girl, "but very i-n-t-e-l-l-i-g-e-n-t."

I leave you to think of what such things have to do with pragmatics, and how they depend on meaning and comprehension.

And as for the meanings of "meaning" itself, they still are open to discussion, and there are whole books on the topic, even one on "The Meaning of Meaning". In this document I use the idea mainly in the semantic sense of the relationship between a symbol or statement and the entity that it refers to. There are other usages that are of little use to most people, but in case you wish to go into more detail, there are good articles online; a good place to start is in Wikipedia, under "Meaning (philosophy)".

Comprehension may be seen, in suitable contexts, as the relationship between the receiver of information and how that information affects the receiver.

So what is there about that entire topic that is relevant to us here?

Mostly, that every single item in it has to do with information: receiving it, formatting it, processing it, acting on it, and propagating it.

It might seem all terribly superficial, but don't bother to tell me about it before you have assimilated and comprehended the following parable; its origins are obscure, but it has been attributed apocryphally to Einstein:

A blind man and his friend were walking on a hot day, when the friend said:

"I wish I had a nice glass of cold milk!"

"Hm? What is that?  'Glass', I know; 'cold' I know, but 'milk'?"

"Milk? Surely you know milk? A white fluid!"

"'Fluid' I understand yes, but what is 'white'?"

"White? It is an attribute of the feathers of a swan."

"'Feathers' I know from pillows, but what is a 'swan'?"

"A swan is a bird with a long, curved neck."

"A 'neck' I know; I can feel my own, but what is 'curved'?"

"Curved? 'Curved' is like 'bent'; give me your hand, stretch it out, feel that: your arm is straight! Now I bend it; feel that: it is curved, like a swan's neck."

"Aaah! Now I know what 'milk' is!"

I have read that when that story was related at a particularly high-powered conference on conveying mathematical ideas, the audience sat silent for some time, till one of the biggest names present erupted with: "But what the expurgated imprecation does that mean?"

I cannot but sympathise, and recommend that readers stop here and think for a while about whether it means anything worth thinking about.

A different, but related, aspect of perception and interpretation, emerges from the oriental parable of the blind men and the elephant: one of them felt the tail and said that an elephant was like a brush; one felt the leg and said no, an elephant was like a tree; one felt the belly and said it was like a roof; and one felt the trunk and said an elephant was like a snake.

I first got those ideas from a school friend who had read them somewhere, at which time I glossed over the implications. Since then I have come to regret that I had not thought it over more seriously at the time.

What bothered me was not so much querying the partial understandings of the men examining aspects of the elephant, but trying to imagine what sort of understanding of milk the blind man could imagine from such vague and indirect analogies. Close your eyes and envisage milk if you can, in terms of necks, birds, elbows and feathers.

That reservation remains with me today, decades later, reminding me of more famous parables. Consider for example:

" ...without a parable spake he not unto them:. That it might be fulfilled which was spoken by the prophet, saying, I will open my mouth in parables; I will utter things which have been kept secret from the foundation of the world."

Why there should be virtue in avoiding clear speech when clear speech is possible, I cannot guess, but when no clear demonstration is available, we must make do with what we can. Consider our understanding of the world around us: some people say that they have too much common sense to believe anything but what they see with their own eyes; but, for the following reasons, they are no better off fundamentally than the blind friend with the abstract, attenuated conception of milk in terms of feathers and curves.

The logical parallel is uncomfortably disconcerting.

Firstly, such people clearly don't understand that vision itself is a complicated pathway, incompletely understood even now. The available light, the refraction of the medium, and the shapes and constitutions of several media in the eye itself all affect the image on the retina; the retina registers the image and begins the image processing; the neurons on the way to the brain cross over and pass the data on to the proper parts of the brain; and every one of those stages can be fooled into producing optical illusions.

And that is just the start. When we look at ever smaller items, we soon are unable to make out anything without lenses, and after that, microscopes. And by now, we no longer can trust light itself, but must massage it drastically to see more, having recourse to UV and X-rays, and after that to electrons.

Not far beyond that, we have resort to accelerators and advanced mathematics to understand sub-atomic realities.

So, how true to any underlying realities are our perceptions, let alone our comprehension, of apples in our hands, of cells in our bodies, of molecules, of atoms, electrons, protons, quarks, or neutrinos, when we have to see or even conceive them in terms of light passing through lenses in our eyes, of stresses in our tissues, of impulses in our neurons, of wires and springs in our instruments, or of marks on our rulers?

Long before we go as far as that, we are beyond our blind friend's attenuated view of the meaning of milk. It is not for nothing that Richard Feynman said: "If you think you understand quantum mechanics, you don't understand quantum mechanics".

And if you think you understand the world in terms of what you see with your own eyes, you don't understand the world or your own vision, any more than a believer in the flatness of the Earth does.

Or perhaps any more than the blind friend understands milk.

As I see it, it amounts to a more sophisticated, less laboured, version of Plato's cave, and I prefer it greatly.

We need not despair; it is better, or at least more worthy and more effective, to work on the hypotheses that we derive from the information available to us through our perceptions. They are not random guesses, as some ignorant anti-scientists claim: each new hypothesis must in the first place correspond closely enough to what we already have seen happen; it then must enable us to make better sense of, and better predictions about, what we still are trying to explain or discover.

At this point it may be good to remember that Edward Teller quote: "What is called understanding is often no more than a state where one has become familiar with what one does not understand" ...

And if you think we are in a bad way to understand our world, see how well the lion, the antelope, and the grass manage, with no frustrating thoughts of understanding swans and milk, but an impressive accommodation to the world as they see it.

But the more we progress, the narrower our scope for dealing with error. If you doubt this, compare your lifestyle to lifestyles of affluent people two centuries ago; then compare those with the lifestyles of one or two millennia ago. The latter two differ less than yours differs from either.

But be cautious in your comparisons. We all are prone to take things for granted. A colleague in our computer team related a conversation that arose from the news of the discovery of the wreck of the Titanic in 1985. Someone created a pause in the discussion by asking in horror: "Why didn't they send out Boeings to rescue the people?" After a dumbfounded silence, someone said: "That was in 1912!"

The response was: "So what is your point?"

Making sense of historical contexts can be challenging; failing to do so can be disastrous as well as ignominious.

As for our comprehension of our world, perhaps someday we will get to the point where we understand more of what at present we see as mysteries, or as working hypotheses. Examples include aspects of subjective consciousness, or of quantum mechanics and relativity — but perhaps we always will have to accept the swan's neck and feathers as representing the true nature of milk.

And if so, we must take whatever parable we have, whatever working model or hypothesis, however blind, as a valid analogy to fact. One does what one can with such information as one can get, whether about milk or about elephants, by such channels as we have access to. And one mark of a civilised education is that one's views change as one's information changes.

And all considered, it works amazingly well, increasingly well, lifetime after lifetime. We call that progress, though it is not clear what progress amounts to when education lags too far behind. Do not rely too unthinkingly on anyone sending Boeings to rescue you.

 

Fundamentals: Choose your turtle

"I've got a better theory," said the little old lady,
"We live on a crust of earth on the back of a giant turtle."

"If your theory is correct, madam, what does this turtle stand on?"

"The first turtle stands on the back of a second, far larger, turtle."

"But what does this second turtle stand on?"

The little old lady crowed triumphantly, "It's no use, Mr. James — 
 it's turtles all the way down!"
          after John Robert Ross, Constraints on Variables in Syntax

The reason that I try to deal with questions concerning the why and how of our existence is that there is a logical and practical difficulty that I cannot dismiss: in accounting for the nature, origin, and history of our universe or universes, no one has shown me how far down we need to extend our stack of turtles.

The question of ultimate causes and ultimate explanations amounts to part of what is necessary for establishing key primitive concepts, as I already have defined the idea of primitives. Religion offers no help, for obvious reasons: most religious answers amount to selecting a particular turtle in the stack (call him "Good Ol' Dick", if you like) and asserting that he doesn't need anything to rest on. All the turtles above him do though, because the very idea of a turtle with nothing to rest on is absurd.

Alternatively one could follow the stack all the way to solipsism, as I have described it: the assertion that your own mind is the only reality and that the apparent world is no more than your mind's dream: no turtles required. In effect, you accept your mind as the only turtle, and the stack as a dream; probably a fevered dream at that. No more primitive concept would be necessary, I think.

But, as I see it, such solipsism comes at too high a price and offers too little predictive or explanatory substance to be worth considering.

Personally I stop far short of solipsism. I do not try to find a final turtle, nor to assert any infinite regress of turtles all the way down. Instead I assume without proof but as an empirical basis for discussion, that our current state of sceptical, hypothetical, experimental, inductive, abductive, deductive science is a good practical start to learning indefinite amounts about the universe in which we find ourselves — or in which it seems to me that we find ourselves.

I say more about de-, in-, and abduction shortly; if the terms are unfamiliar, just think "common sense", and you will be pretty close. Or if you like to put it that way, I deal with guesses, evidence, and reason, so far as I can: guess, grope, gauge, and accommodate — or perhaps rationalise.

In short, I do not deal with the origins of origins, but with the most suggestive and persuasive aspects of empirical appearances — what I seem to see about me.

Which leaves a lot of scope for error, whether that error matters or not.

Such abductive guessing and groping may yield insights into basic questions, or may not, but it is better than floundering indefinitely for lack of the courage to think for yourself. Even wrong assumptions give us something to start from: a hypothesis that we can adjust as we learn more. Abductive approaches also are appropriate to this essay, which I try to make largely pragmatic. Not that I take anybody's theory of formal pragmatism for granted, but I do reject the impotence arising either from mysticism, or from demands that we start out from a formal demonstration of the empirical basis and nature of the universe.

 


Fundamentals: Axioms and Assumptions

The sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work.
                        John von Neumann


A popular ideal, common especially to many schools of philosophy that try to be at once materially sound and logically unassailable, is to establish a structure of the same form as axiomatic disciplines — formal mathematics in particular: pick your axioms, and base the entire structure of your discipline on that. As I point out elsewhere in this text, that makes some sense in formal disciplines: in those one has latitude to choose axioms almost as one pleases.

When dealing with material reality on the other hand, the so‑called "axioms" really are assumptions about the presumed primitives. As such, no matter how logical or ingenious or obvious they seem, their validity is no greater than the degree to which they can be shown to match the nature and behaviour of the presumed primitives.

Accordingly, it is not valid to work on the basis of assumptions in the same way as working on the basis of axioms.

In this essay I think of "formal axioms" as the arbitrary, unproven, possibly internally meaningless, bases of formal systems in which the theorems are ultimately derived from their respective axioms; the formal part might not appear within the axioms themselves, or might appear there only in degenerate form. What unavoidably must have form is the process of deriving theorems from those axioms or from previously established theorems.

In contrast to, but in analogy with, formal axiomatic systems, "material assumptions" or "material primitives" are the assumptions on which we base our reasoning about physical "laws", "behaviour", or conclusions of "fact" in the universe as we seem to see it. Right or wrong, they have meaning in terms of what we seem to observe.

That is to say, they refer to items in the algebra of whatever universe is in question: its objects or object types, plus the operations on objects in that algebra.

That is a big, big contrast, so pardon me for being captious about people who vaunt their "axioms" when what they really mean is "assumptions".

So what is there to see in our empirical world, and what is there to do about it? If it looks like a toad, say I, waddles like a toad, and croaks like a toad, and I have no hangover, then perhaps it exists as a toad. That is not proof, but it might be of value as a working hypothesis, pending anything better. My guess might be wrong, but to convince me, any rival diagnosis would need support at least a little stronger or more persuasive than my own impression and assumption. My most sensible choice is to judge from my notional toad's apparent waddling and croaking and swallowing of worms. After that I act according to my ability to predict and explain, or at least speculate on, its doing whatever toads usually seem to do.

And of course I might be wrong in any of many senses, and in many details and principles, but the sense I rely on is common sense; if anyone has an alternative suggestion he would be welcome to propound it, but he had better command impressive powers of persuasion. To be sure, when I do find the persuasion compelling, I am willing to amend my own assumptions accordingly; but not till then.

Some dominant schools of philosophy in classical Greece had the opposite idea: they insisted that the world was in some sense illusory, so that abstract logic trumped empirical evidence. That assertion might have had some persuasive power, if only their logical conclusions were consistent, but in fact, their various philosophers contradicted each other wholesale, and sometimes bloodily.

As Edmund Burke remarked: "The nature of things is, I admit, a sturdy adversary ..." And where formal conclusions conflict with empirical evidence, the evidence sturdily outfaces the logic of self-assessed philosophers. Formal or not, if an assertion based on proof, no matter how persuasive, leads to conclusions that conflict with empirical outcomes, then something is wrong with the assertions or the observations; and if it is shown that the fault lies with the evidence, that does not prove that the logic must be right. Both could be at fault, and in history both commonly were.

Consider the history of flat-Earth theories.

And the same goes for the rest of the visible part of the universe. Anyone denying the existence or the nature of the observable universe, must present rival support at least a little more persuasive than the empirical evidence — the impression we get from what we can see or otherwise examine.

In philosophy, agnosticism has its merits in suitable contexts, but agnosticism offers no intrinsically better justification for rejecting a position than for supporting one. Rationally, philosophical agnostics can demand no more than that a favoured theory either supplies cogent, and cogently supported, argument, or that alternative proposals be considered equally seriously, if not necessarily given equal weight.

In other words, to argue that my inability to prove my point of view compels me to accept your equally unproven point of view, is asking too, too much.

It is very difficult (I, for one, have never seen it done, nor nearly done) to come up with any coherent and cogent assumption of basic truth about our universe. Cogito ergo sum is no better than a militant gesture, and Cogito cogitare ergo cogito me esse, however witty, is not much more helpful.

Now, having no basic factual assumption that we can rely on as unconditionally true, may sound really terrible, but matters could be worse. We still can look about us and seem to seem to see what we do seem to seem to see.

Remember the conception of milk in terms of the neck of the swan! How much better are our conceptions of our world in terms of our sense organs and brains?

On an analogous principle we can base hypotheses and rationalisations. We have no need to despair in the face of Thomas Nagel's question: "What Is It Like to Be a Bat?" It is not a question that I ever have heard of a bat despairing over. The things that we experience in our various lives, whether we be bat or mole or hawk, are parts of the same universe. This suggests that there is likely to be at least some functional sameness to our functional perceptions and to our dreams, if any: a sameness of how the universe imposes information on existing entities.

Such a sameness I call a plesiomorphism, by which I mean that there is, for our purposes, sufficient resemblance between notional entities for us to regard mutually plesiomorphic entities or their aspects as being, near enough for our purposes, equal. I distinguish this from isomorphism, where isomorphism ideally would imply exact matching. For example, we might refer to a plesiomorphism between aspects of the reality we find ourselves in, and our calculations, or functional perceptions, or our dreams.

For my part, I see functional transmission of information between any aspect of reality, and any given entity, as fundamentally equivalent, whether the recipient is sentient, has a mind, or not.  Whether it is an anemone, a rock, a bat, or a mystic, transfer of information is the basis of physical cause and consequence. And it is plesiomorphic, not exact.

Apart from anything else, Quantum Mechanics does not permit the transfer to be exact.

But neither would the constraints of the nature of physical information in classical physics permit it to be exact.

Because we cannot be compellingly sure of any underlying Truth about which we speculate, or even sure of any meaning, we have scope for multiple possible hypotheses about anything, including our own existence. Wittgenstein for example remarked on never having seen his own brain.

Also, by our nature we are not in a position to comprehend everything at once, so we have to start somewhere if we are to start at all, and because there is no simple limit to arbitrary formal speculations on which we might base our world view, at least one practical option is to base it on our subjective empirical perceptions: what we seem to see, hear, touch, or otherwise perceive.

That sounds discouraging, reminiscent of the swan-to-milk Platonism; and yet, as I see it, that is better than the smugness of the classic Greek philosophers who dismissed empiric evidence as inferior to what they saw as rationality.

So if we have the time and tools and interest, we next can compare the most acceptable hypotheses and rationalisations, and select those that seem likeliest, most fertile, most rewarding, most consistent, and with the greatest capacity for progress to successive findings and rationale.

The choice we rate most highly at any stage becomes our working hypothesis. We then can make predictions to test the limits of our working hypotheses, and check the outcomes with observations that might affect our ranking of the relative strengths of some of our hypotheses. If the evidence changes, then we change our preferred hypotheses. As long as none is satisfactory, we adjust them, or try to think of totally new hypotheses, then back to the coalface for new reasoning or observations.

If no suitable observations are available, we actively try to design and execute experiments to yield helpful observations. If the desired experiments are beyond the resources at our disposal, we may be reduced to thought experiments: in effect we inspect our ideas to see how much sense their implications seem to make.

However, especially in an underdetermined universe, we never can tell whether our hypotheses about anything include any fully correct hypothesis, so the best we can do is to do our best to make do with our best and to better it as well as we can whenever we can.

Up to the present, unless your definition of science includes formal disciplines such as mathematics, practitioners of the disciplines of empirical science as we know it generally do not prove anything formally: they uncompromisingly work at establishing progressively more successful hypotheses about how various things work, or seem to work, or seem to seem to work (wake up down there, turtle number seven!)

According to their conclusions the practitioners of science (scientists, we hope!) establish explanations of what they have observed. If the explanations do not support their preferred hypotheses, they must adopt rival hypotheses to supplant them. If the explanations support no as‑yet‑considered hypotheses, then it is time to formulate new hypotheses. If the new hypotheses are worth anything, they must imply new predictions of observed or observable phenomena. Then, as far as they can, the scientists plan observations that could be expected to show which hypotheses have the greatest predictive strength and explanatory power.

Or something.

And the gold standard for evaluating any hypothesis is how well, how generally, and how powerfully, it predicts items that as yet are unknown, and how well it offers explanations of existing observations and suggests new hypotheses or even radically new topics or world views. A new Weltanschauung if you like.

This tedious reliance on ignominious blundering may sound very unimpressive, not to say unattractive, but in our last few centuries such scientific activity has achieved more than all of humanity had achieved in the last two, or six, or fifteen millennia, or longer — depending on who is counting. So, till something better emerges, science as she currently is spoke, deserves the respect due to success.

I freely and cheerfully, if somewhat wistfully, accept that there are things about the universe that I never will begin to understand and am not equipped to understand, and some things that possibly no one is equipped to understand or ever will be equipped to understand, but so what? Just a few centuries ago there were all sorts of things that we had taken for granted, accepted as primitives: they had been so accepted since before living memory, things that humanity not only did not know, but did not know that they did not know, and would have derided as nonsensical if they had been told of them. In fact, billions of people still deride everything they don’t understand, and rely instead on nonsensical fairy tales propounded by frauds whose scams they think they understand, and mythologies that have neither predictive nor explanatory power, either in planning or understanding anything.

Until science began to mature, engines wouldn’t drive vehicles, sun and wind and horses wouldn’t light a room at night, planes wouldn’t fly, vaccines wouldn’t prevent diseases, and antibiotics wouldn’t cure them.

No one who drives a car on a concrete or macadam road, flies in a plane, reads affordable books, crosses the ocean on a liner without making a will, washes with soap, wears a wristwatch, plays recorded music, watches television, writes on paper or wipes himself with it, benefits from hurricane forecasts and dental implants, girds at yellow fever, beriberi, and scurvy, or looks through a glass window — no such person is in any position to sneer at science. Tens of thousands of years of mysticism and ignorance and respect for seniority or tradition failed to give us the things that recent centuries have made so ordinary that Joe Public can afford them, or at least afford to use them, as a rule unthinkingly — in the affluent world anyway.

Things such as Boeings ...

To be sure, carpers criticise science, pointing out problems of health, wealth, and happiness arising from application, mis-application, and exploitation of science; they point out consequences of pollution, lifestyle, and destruction of resources, but the flaw in their rationality lies not in the nature of the problems, but in the fact that:

the science provided the power and the users provided the problems.

If they then refuse to use science to avoid or amend problems, the fault lies with them, not the science. One cannot rationally blame steel for being beaten into swords, instead of ploughshares, nor hammers for being used for murders instead of carpentry or blacksmithing.

The products of science in our intellects and industry include many that we have been so familiar with for so long, and have put to practical and intellectual use in so many ways, that most of us hardly ever notice them, do not realise that they are real or necessary, nor even understand them at all. In fact, many of these familiar wonders are replacements of previous major advances, marvels that in their turn have been superseded and largely forgotten. And the latest miracle will not generally be the last, if ever there is to be a last; we have to learn yet more radical things before we can start counting turtles.

We will have to continue our search for ever more primitive primitives: if you like, to search for the bottom turtle. Counting turtles comes later, if ever.

I hope in this essay to deal with some examples.

 

Brass Tacks

You cannot question an assumption you do not know you have made
                        Richard Buckminster Fuller

Preconceptions, Mathematical and Other

My desire and wish is that the things I start with should be so obvious that you wonder why I spend my time stating them. This is what I aim at because the point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.
Bertrand Russell, Philosophy of Logical Atomism

 

Nothing in this work is a claim to reveal anything new in mathematics nor, for that matter, in logic or physics, but there is a surprisingly large population of persons confident in their mathematical competence, who take either unexamined or ancient assumptions for granted. Some of these assumptions need clarification before I continue, because the context is critically important to the topics I examine.

Pure Contention

University President asks:
"Why is it that you physicists always require so much expensive equipment?
Now the Maths Department requires money only for paper, pencils, and erasers.
And the Philosophy Department is better still. It doesn't even ask for erasers."
Related by Isaac Asimov

The first assumption I examine is an item of semantics, which is a field in the discipline of semiotics, as I already have explained. However, semantics commonly is a troublesome item in discussions and conceptions, and I am interested in the topic of semiotics, so put up with it or pass on by. I regard semantics as important in itself, and critically important in topics in which fine distinctions make all the difference.

This first such topic is the distinction between "pure" and "applied" mathematics, and equally the distinction between other "pure" and "applied" formal disciplines, such as branches of logic and some branches of philosophy.

Personally I prefer the term "formal" to "pure". This may seem a pointless niggle, but the evaluative overtones of "pure" introduce judgemental cross purposes into many a discussion. In context, "pure" doesn't mean anything anyway.

One thing that sustains the confusion about this hardy perennial topic is the sheer confidence of many who deal with the assorted disagreements by slowly and loudly repeating their personal views to each other as proof of the obvious.

Well, good luck to them. Now then:

First the taxonomy, the classification if you like: the principles of defining, naming, and identifying classes of things, and of allocating particular things, particular entities, to particular classes.

No matter what the application, useful classification depends on intrinsic, relevant differences between entities. If such differences cannot be clarified, there is no point to arguing about distinctions. If ducks had big flapping ears and elephants had feathers, ducks and elephants would be that much harder to tell apart. As it happens, those two attributes or parameters: big ears and feathers, are innate and intrinsic to elephants and ducks, respectively, so they suffice to tell them apart without recourse to other differences: "In the dark I feel feathers, so this is unlikely to be an elephant; no need to panic ..."

But, among their other attributes, the tameness or wildness of ducks and elephants will not suffice for telling them apart; those attributes are neither intrinsic nor innate nor diagnostic: one gets tame ducks and wild, as well as tame elephants and wild, and plenty of other creatures that might be tame or wild, and an initially wild duck or elephant might be tamed after some time. And whether a creature happens to be tame or wild, that will not be because it is or is not a duck.

Now, in practice nearly all the arguments about classifying formal disciplines on the one hand, as distinct from applied disciplines on the other, are about subjective, contingent differences. Some people assert that there are two or more totally separate classes of such disciplines, others recognise just one, asserting that there are no substantial differences at all between pure and applied disciplines.

One extreme of classification is that of G. H. Hardy, who denied that formal and applied maths had anything whatsoever to do with each other; Martin Gardner on the other hand claimed that there was no divide whatsoever.

When great minds differ on abstract issues, it might be that they are at cross purposes. One reason for cross purposes could be that they hadn't worked out the relevant functional semantics, in which case both might well be wrong, whether they were genius or not. Without functional semantics even great minds cannot command functional distinctions — if any distinctions at all.

In this essay, the first clue to a functional distinction lies in how the two classes of disciplines resemble each other. And resemble they do. For example: Whether "abstract" ("pure" or "formal") or "applied", both disciplines share largely the same mental mechanisms, the same laws of inference, the same basis in axiomatic structures.

From this point of view, both formal and applied disciplines can be seen as examples of what I call Implicatory Activity, because they assume that derived, or discovered conclusions are constrained (though not necessarily determined) by the fact that they must be implied by the axioms or assumptions.

Please note: this does not imply that the axioms or assumptions of the formal and applied workers need be the same, nor for that matter necessarily different, just that the concept of deriving assertions from basic axioms or assumptions concerning the outcomes of accepted operations on entities (such as that A implies B) is basic to both classes of activity.

If they conflict, then, in the case of applied work, either the conclusions or the assumptions must be adjusted or discarded. Analogously if the work is formal, a new branch of the discipline must be based on new, elaborated, or modified axioms that lend themselves to the desired discoveries. Classic examples might include the recognition of negative or imaginary numbers, complex numbers, transfinite numbers, or non-Euclidean geometries.

And we may ignore the nonsensical idea that, because all the steps in deriving any theorem are essentially tautological, no derivation within an axiomatic structure achieves anything in essence. In practice the very nature of an algebra, whether mathematical or formal, is that it consists in a set of objects or object types, plus a set of operations on those objects. A derivation within such an algebra is a sequence of operations upon information. And operations upon information are intrinsically material, physical, irrespective of how trivial or complex or formal they might be.

If you doubt that, try proving the likes of the Pythagorean theorem, or the four-colour conjecture, or Fermat's last conjecture, without operations that involve entropy. Formal operations are physical, and without operations, there is no outcome, whether the outcome is to be a proof or not.
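A toy sketch in Python of that view of an algebra, a set of objects plus a set of operations, with a derivation as a sequence of operations performed on stored information (the names here are my own):

    # Objects: the Booleans. Operations: the usual connectives.
    objects = (True, False)
    operations = {
        'not': lambda p: not p,
        'and': lambda p, q: p and q,
        'or':  lambda p, q: p or q,
    }

    # A "derivation": each step is an operation on information, and each
    # step physically changes the state of whatever machine performs it.
    p, q = True, False
    step1 = operations['not'](q)           # True
    step2 = operations['and'](p, step1)    # True
    print(step2)

However trivial the example, each step takes time and dissipates energy in whatever performs it, which is the sense in which such operations are physical.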

And try to make sense of Gödel's incompleteness theorems in terms of physics.

Some people, Hardy for one, argue in essence that the pure and applied disciplines have nothing in common because they differ in their objectives: formal disciplines deal with the formal proof of abstract theorems, irrespective of whether those theorems have any meaning, while applied disciplines deal with valid derivation of assertions about objects of study: this implies that such assertions have meaning.

However, that argument about the objectives is not cogent; those objectives are intrinsic, not to the disciplines that we wish to distinguish, but to the practitioners. Accordingly, though those objectives, viewed as attributes, might affect the taxonomy of the practitioners, they do not affect the taxonomy of the disciplines, and it is the disciplines that we are trying to distinguish, rather than the practitioners.

To make that clearer by means of example, that operative difference is about as cogent as arguing that a hammer has nothing in common with a paperweight. Because of their extrinsic attributes, those that we apply as labels, hammers and paperweights certainly would not appear in the same category in the yellow pages or in a mail order catalogue, but to the user, their intrinsic attributes might well put them in the same category in a gale on a building site with the site plans threatening to blow away: a hefty hammer might then make a very good paperweight.   

In contradicting Hardy, Gardner argued that "recreation" is itself an application. In this his logic was no better than Hardy's, but anyway he concluded in effect that, since absolutely any formal discipline could be applied as recreation, all maths is by definition applied.

However, in systematics such an argument is futile because objectives such as recreation or purely formal mathematical activity plainly are intrinsic to the practitioners; they are not intrinsic to the disciplines. Permit me to stretch an earlier analogy: a tool such as my hammer can serve effectively as a stationary paperweight by virtue of its intrinsic heaviness; but its belonging to me is not intrinsic — so if someone stole it, it would cease to be a paperweight for me, but in its new extrinsic role as a possession of the thief, it would be as good a paperweight or hammer, or as valuable an asset to pawn, as if it had never been mine.

The point of that analogy was to illustrate that your objectives in performing mathematics do not affect the operations you perform in working it, whether in a proof, a derivation, or the calculation of a conclusion, and whether in recreation, in theory, or in engineering.

In formal mathematics the typical activities and their objectives are to design axioms, prove theorems, and so on. If instead you also use the same mathematics to deal with the description of the nature or activity of some object other than yourself and your mathematics, it need not follow that you use different mathematics. Whether applied or not, it is not the mathematics that differ intrinsically, but the practitioners or the problems.

So far, no hard distinction.

Where the intrinsic differences begin is that in "pure", "abstract", "formal" disciplines one may choose axiomatic structures at pleasure, as long as the axioms are internally consistent (or paraconsistent) and agreed upon. In practice one could go further; one also might demand that the axioms be mutually independent and intellectually fertile, in other words not negligible. It might be nice if they also were complete, parsimonious, elegant, mutually relevant and so on, but we must not be greedy. Traditionally axioms were chosen as “plainly, intrinsically, true”. (In using the inverted commas I do not represent those words as a literal quote, but demarcate them as a concept.)

For good reasons however, the truth of formal axioms is no longer generally accepted as relevant, nor even necessarily meaningful, let alone necessary.

In particular, in a purely formal discipline the very concept of "truth" is doubtful; the closest we can get to “truth” is to show that some particular conclusion follows from certain axioms, not that that conclusion is true or false or even meaningful in any other way. Ideally this means that to prove a proposition X we must be able to show that the statement of X amounts to the restatement of one or more of our axioms in some particular sequence, form, or context. In theory that is what formal proof amounts to; in practice of course, such a viewpoint tends to be too puristic, even for formal mathematics.

Anyway, within those limits, if I present you, as a "pure mathematician" or "formal logician", with an axiomatic structure, and you find my axioms inelegant or redundant, or offensive, or uninteresting: bad luck!  Your view may amount to valid criticism of my taste or mental limitations, but that is not the same as refuting my axioms, nor any theorems I validly derive from them. To refute an axiomatic structure on the basis of claiming that the axioms in isolation make no sense, is something that arises in applied mathematics, not purely formal mathematics.

To apply a formal discipline (typically a branch of mathematics or an axiomatic structure in mathematics) generally means that one uses that discipline to model some part of the behaviour of some distinguishable set of objects, typically items or a structure or process that might not necessarily be part of that same formal discipline itself. So, one might apply maths in studying the distribution of primes (applying maths to formal maths) or one might apply mathematics to studying the distribution of trees and pests (applying maths to ecology). One might apply formal logic in studying ethics (applying formal logic to philosophy) or apply ethical theory to the study of business (applying possibly formal philosophy to human affairs, economics, etc).

But such modelling demands that the logical structure of the part of the formal discipline involved, say maths or logic, is isomorphic or at least plesiomorphic to the relevant behaviour or nature of the subject, say mechanics or epidemiology, or indeed, mathematics.

Anyone trying to impose such a concept as axiomatic to an application of a formal subject would have to explain very carefully whose axioms they are, and what relationship they have to the physical universe. Until we understand, and can demonstrate, an underlying algebra of physics, we cannot meaningfully speak of any fundamental axioms of physics, only of axioms of physicists. And in dealing with empirical realities, assumptions of context-free, unconditional truth can hardly be realistic.

To call them axioms rather than assumptions is not cogent. The plesiomorphism of the application must be sufficiently close for us to describe or measure or predict relevant aspects of the object with sufficient reliability and precision to make our efforts worthwhile in the context in question. Precision need not in all cases be absolute, but it must be adequate in terms of our assumptions.

Isomorphism in this sense (the word is used in several senses in which some practitioners seem to assume that their own parochial definition is definitive and exclusive) means that in the mathematics or other formal discipline that we apply to the subject matter, there is a logical structure that matches the logical structure of the relevant part of the subject. So, if we apply a process correctly according to our axiomatic structure, we expect to get an answer that, sufficiently nearly correctly to meet our objectives, describes the entity or event we are calculating.

As I have mentioned, I have coined plesiomorphism to refer to application where "sufficiently nearly correctly" means that you don't expect to be absolutely correct, whereas isomorphism literally means that you expect to be notionally absolutely correct in every relevant respect.

In either case, it also means that the resemblance between the abstract logical structure, and the practical, applied structure is sufficiently close, though not necessarily absolute.

For example, if I use the common arithmetic of integers to calculate the number of apples in a regular row, counting should yield a precise number of apples, because the relationship between the apples and the row usually is simple and matches the relationship between integers and the cardinal numbers of elements in a set. On the other hand, counting will not yield me an exact mass or volume of apples, because apples vary in mass and volume, both from second to second (as the apples respire, or their moisture evaporates, or moisture condenses on them) and from apple to apple, because apples are not all identical; so the mass and volume deduced are not precise, though they might be satisfactory in practice.

This does not invalidate counting as a basis for the estimate of mass for routine purposes. Being informed that ten apples on a tray will weigh a kilogramme will not generally be correct, but it will commonly have a high probability of correctness within an order of magnitude, distinguishing the weight of ten apples from the weight of 100 apples, or of ten strawberries or elephants.
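The contrast can be sketched in a few lines of Python, with invented figures: the count is exact on every run, while the mass estimate is only plesiomorphic, close but never the same twice.

    import random

    # Ten apples of roughly 100 g each; the spread is invented for illustration.
    apples_kg = [random.gauss(0.100, 0.015) for _ in range(10)]

    count = len(apples_kg)   # exactly 10, every run: counting is near-isomorphic
    mass = sum(apples_kg)    # close to 1 kg, but different on every run
    print(count, round(mass, 3))   # e.g. 10 0.987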

Similarly most other practical physical applications of mathematics are inexact. Consider measuring length with a ruler, calculating a numeric value as represented by the length of its representation on a slide rule, estimating light intensity with a photo cell, deducing precipitation from the reading of a rain gauge, predicting the path of a meteoroid from sightings through telescopes, etc. All of these intrinsically differ from most formal mathematical considerations, even when exactly the same operations are performed on the same variables.

The subject might be material; we might use calculus and Newtonian laws to predict say, the flight of projectiles or volumes or surfaces of containers, but we need Einsteinian theory for yet more highly precise prediction of long‑range space trajectories.

On the other hand, the subject might be formal: we might use probabilistic arguments in dealing with the occurrence of prime pairs or Goldbach numbers or the distribution of digits in the decimal expansion of pi.

The isomorphism between model and subject might be precise, as in counting discrete events; or it might be plesiomorphic, that is to say rough, but precise enough to be useful in relevant contexts, for example in simplifications such as ignoring air resistance in dropping a cannonball from a tower; or it might be contingent, such as in using part of a mathematical curve that conveniently matches a totally different function over a limited relevant range. In formal Euclidean mathematics you cannot calculate the exact value of the diagonal of the unit square, but in application to the Euclidean construction it becomes a problem in physical measurement, and quite simple down to nearly molecular precision.

An example of plesiomorphism between maths and measurement.
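In the same spirit, a one-minute sketch in Python: the machine's floating-point value for the diagonal of the unit square is a serviceable plesiomorph of the square root of two, not the number itself.

    import math

    d = math.sqrt(2)
    print(d)           # 1.4142135623730951: a finite-precision stand-in
    print(d * d == 2)  # False: close enough for measurement, not isomorphic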

Except in one or two respects everything that we said about the purely formal discipline applies to the applied discipline; commonly it even might be the same discipline.

One exceptional respect is that in applied mathematics the choice of axioms is no longer free: there now is an added requirement.

It is a requirement so fundamental that one could argue in favour of at least limiting one's use of the term "axiom" in applied mathematics: it might be better to speak of assumptions. Those assumptions, no matter how ingenious, how old, or how new, must be sufficiently compatible with the structure, the behaviour, of the subject that your system is intended to model, describe, or predict.

Otherwise they are not usefully applicable; your plesiomorphism is inadequate.

In short, in applied fields we must add the concepts of sufficient truth of axioms or assumptions and sufficient truth of deductions or theorems. I am inclined to prefer the term "assumption" rather than "axiom" in applied mathematics or other applied disciplines, and "assumptive" rather than "axiomatic", whenever there is a material constraint on how the assumption is to be formulated or to be applied to the subject matter.

Such an assumptive structure, whether or not it meets all the demands of formal mathematics, would be wrong if it made inappropriate assumptions by failing the proper isomorphism or plesiomorphism.

One cannot for practical purposes substitute say, Cantorian set theory for partial differential equations in dealing with orbital mechanics, or for everyday arithmetic in bookkeeping; such examples violate the principle that the intrinsic attributes of the assumptions in applied mathematics must have the necessary isomorphisms to the objects and operations they refer to in their respective applications.

For example, addition of infinities is not at all the same as addition of real numbers.

Less essentially, more than one logical structure might be applicable to the same problem, though very often some such structures will be more profitable than others. For an artificial example, there is no fundamental reason why one might not use complex numbers, or even octonions, to count apples, but for reasons of convenience it is not common practice.

Again, if the only reason for the counting is to compare two sets of apples, even numeric counting might be overkill; matching the apples in one set with the apples in the other set might be adequate, so why introduce all the axioms or assumptions appropriate to the arithmetic of integers?

In short, we have added another, weaker distinction: feasibility or cost; parsimony, if you like. In formal mathematics we do not insist in all connections that a calculation need be practical or even physically possible; physical possibility might not be of mathematical interest at all. In applied mathematics on the other hand, it is necessary for calculations to be practical as well as for plesiomorphisms to be adequate.

It is possible to choose consistent and meaningful assumptions or related specifications in applied mathematics, specifications that definitely are wrong for the application. For example, to calculate the necessary working strength of a rope for slowly raising or suspending static loads, it usually is sufficient to add the weights of the object in any one load.

In contrast, I once read that some arts students, wishing to set some record or other, allegedly used elementary arithmetic to calculate the necessary strength of a rope required to support the static weight of a group of students. They allowed an arbitrary safety factor, as commonly is required in applied mathematics, then used that rope to support a swinging mass of students.

The ignorant students did not realise that there are differences between the logical structures of statics and dynamics ...

The rope broke, causing serious injuries — some fatal if I remember correctly.

Wrong assumptions (or axioms?)
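The blunder is easy to quantify with elementary mechanics; here is a sketch in Python with invented figures. For a mass swinging on a rope, released from rest at an angle theta0 from the vertical, energy conservation gives the peak tension at the bottom of the swing as T = m*g*(3 - 2*cos(theta0)), up to three times the static weight.

    import math

    m, g = 400.0, 9.81        # kg of students, m/s^2; figures invented
    static_load = m * g       # what the students designed for (~3.9 kN)

    theta0 = math.radians(90)                           # released from horizontal
    dynamic_peak = m * g * (3 - 2 * math.cos(theta0))   # ~11.8 kN at the bottom

    print(round(static_load), round(dynamic_peak))      # the rope sees ~3x the load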

Again, when calculating the strength of steel cables for lowering cages into very deep mines, it is common to neglect the weight of the cage and its contents, and instead calculate the weight of the cables.

Formally this is wrong.

Plesiomorphically it is quite acceptable in applied maths in engineering.
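A sketch with invented but plausible figures shows why the approximation is harmless in a deep enough shaft:

    depth_m = 3000.0          # a deep shaft
    cable_kg_per_m = 8.0      # a heavy hoisting rope
    cage_kg = 2000.0          # cage plus contents

    cable_kg = depth_m * cable_kg_per_m   # 24000 kg of cable
    print(cable_kg / cage_kg)             # the cable outweighs the cage ~12x

Neglecting the cage changes the design load by well under ten per cent, comfortably inside any sensible safety factor.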

Furthermore, in applied disciplines the concept of precision is likely to be relevant: precision must match or exceed the required precision of observation and prediction, but also must not exceed the required precision by too much, because that may be expensive, or may imply gratuitous, inaccurate assumptions about practical realities. To calculate a human's height to the nearest micrometre would suggest to a reader that such a measurement were possible, whereas even calculation to the nearest millimetre would hardly ever be practical or useful, or anything better than delusory.

For example, after sleeping overnight in bed, we are several millimetres taller than when we go to bed at night after a full day of working erect.

In formal disciplines the concept of precision might not even arise — precision could be absolute in theory: the same arithmetic rules reign in the googolth decimal place as in the first.

On the other hand, paradoxically, in some subjects the formal mathematician might scorn to contemplate precision at all; Otto Frisch related that Stanislaw Ulam complained that as a formal mathematician he was used to working with abstract symbols, but had sunk so low that his most recent paper in the fission bomb project not only contained numbers, but that some even had decimal points!

That was an example of applied mathematics as seen by the formal mathematician. In contrast the formal mathematician does formal work according to form (otherwise of course it is not formal).

How does science fit into this? That is not easy to resolve, because the definition of science is largely arbitrary. Mathematics used as a tool in science fairly clearly would be a category of applied maths, and notionally has about as much to do with formal maths as the arithmetic of the shop assistant counting apples into a bushel has to do with formal maths.

Commonly (though not universally) we do not count purely formal disciplines such as mathematics as science, because they do not necessarily have much to do with anything outside themselves. And procedures within the discipline are in effect the juggling of axioms and the theorems derived from them.

Personally, for several reasons I reject the idea that this distinguishes mathematics from "science"; for one thing, mathematics of absolutely any kind deals with information and amounts to physical manipulation of information, for instance by showing in effect that a given theorem comprises at least some of the same information as one or more other theorems or axioms, or their necessary implications; and those too comprise information.

And information I unapologetically regard as being part of the subject matter of physics.

So I see addition of 3+1=4 as a physical operation.

But suit yourself about that point — it hardly matters in this context.

There is at least one other aspect of the comparison between many forms of formal work, as opposed to applied work: fundamentally their objectives are almost opposites. Formal work, whether exploratory or striving towards an objective, derives everything from axioms that are unreservedly accepted as unassailable; if that does not lead to anything sufficiently constructive, then one necessarily adds or changes axioms, and this amounts to moving to a different axiomatic basis, not to correcting a wrong axiomatic structure.

For example, Cantorian infinity theory accepts certain axioms that conflict with basic arithmetic theory; the principle that aleph-m plus aleph-n simply equals whichever of the two is larger, has no counterpart in finite number theory. And the idea that there might be a smallest infinity contrasts sharply with traditional number theory, in which there is an x-1 for every value of x, in which sense there is no smallest number. And the question of whether there is any infinity lying strictly between the cardinality of the integers and that of the continuum was never settled before new axiomatic structures were proposed; it turned out to be independent of the standard axioms.

Note that this does not amount to showing that the earlier axiomatic structures were wrong, just that they had not been shown to be suited to the problems hitherto under consideration.

In applied or material disciplines the opposite is the case. Assumptions are as a rule proposed to be conveniently close to truth, at least until continual attacks show them to be unacceptably false, in which case it is necessary to modify them in whichever aspects they have been falsified. Often this research takes the form of comparing rival assumptions to find which ones stand up best to falsification.

And in theoretical branches of science, attempting such falsification is perennially under way. In applied science this is largely true as well, but as with any other applied activity (technology, if you like) earlier, notionally discredited, assumptions might widely be used as plesiomorphic tools of convenience: local maps may assume a flat Earth; Newtonian orbital mechanics are good enough for terrestrial navigation and major eclipses; elementary valency theory is good enough for routine chemistry; and so on.

But for exploratory science, seeking the essence of aspects of reality, the relentless attempt at falsifying assumptions is a major aspect of the vocation. The contrast with the likes of mathematics or logic is stark.

If such things do not interest you, it now is safe to open your eyes and read on. This section largely covers what I have to say about formal, as opposed to applied, mathematics and reasoning.

Or does it? Metaphorically I bite my tongue.

Science, evidence, and near‑proof

The more important fundamental laws and facts of physical science
have all been discovered, and these are now so firmly established that
the possibility of their ever being supplanted in consequence of new discoveries
is exceedingly remote.
Many instances might be cited, but these will suffice to justify the statement that
"our future discoveries must be looked for in the sixth place of decimals". 
Albert Abraham Michelson.  1903

In keeping with foregoing discussion, there are two classes of science: formal and empiric (also called "analytic" and "synthetic").

Empirical science deals with the world we seem to see ourselves in. In empirical science we have no unconditional axioms about that world — we can do no more than propose theories based on assumptions about our observations and the perceived behaviour of the world. For instance we generally assume that:

  • the world operates on principles consistent enough for us to generalise meaningfully when appropriate
  • such information as we can derive about the world from our sensory perceptions forms a practical basis for a mental image, a model that has relevant and practical plesiomorphisms to some sort of presumed underlying reality that has a meaningful relationship to that which is apparent to us
  • the theory of probability may for practical purposes be assumed to be isomorphic or plesiomorphic with relevant aspects of the behaviour of the perceived universe. This is the basis of the ubiquitous applicability of statistics as a practical and philosophical tool in science.

The current discussion is mainly about empirical science — formal disciplines have little to do with belief, since one can construct as many independent assumptive structures as one likes, and can design them to be compatible with practically any coherent belief one likes. These structures would not differ from each other in their "correctness" but only in their interest or usefulness and applicability. Whether such formal disciplines are relevant to anything material, is another matter.

In spite of the popularity of the phrase "scientific proof", empirical science has little to do with formal proof; because of the inherent uncertainty, empirical predictions and observations cannot formally prove anything, but they do permit us to compare the defensibility of rival hypotheses that imply observable phenomena. Observations that constitute confirming instances of predictions can serve as a basis for working hypotheses: they are a weak form of support, abductive or inductive, that can be assessed in terms of statistical theory.
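
As a minimal sketch of that statistical assessment (with invented numbers, and assuming, as underdetermination forbids us to prove, that only two rival hypotheses exist), consider how repeated confirming instances shift the balance of persuasion without ever amounting to proof:

    def update(p_a, p_b, like_a, like_b):
        """One Bayesian update over two rival hypotheses A and B."""
        wa, wb = p_a * like_a, p_b * like_b
        return wa / (wa + wb), wb / (wa + wb)

    # hypothesis A predicts the observation with probability 0.9; rival B with 0.3
    p_a, p_b = 0.5, 0.5                    # start with no preference
    for i in range(5):                     # five independent confirming instances
        p_a, p_b = update(p_a, p_b, 0.9, 0.3)
        print(f"after observation {i + 1}: P(A) = {p_a:.3f}")

P(A) climbs towards certainty but never reaches it; and the whole calculation silently assumes that A and B exhaust the possibilities.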

This is all on the assumption that the hypothesis has been suitably expressed for the procedure to be meaningful. Experiment design is a treacherous field because it is subject to the principle of GIGO: "garbage in, garbage out". Even modern scientific practice accommodates a great deal of garbage in, and thereby puts out a great deal of wasted research, outright delusion, or even bad faith.

The fundamental reason that much of such work is wasted, or at best expended for little reward, is that it is based on misconceptions or misformulations or simplistic guesswork; a fair number of peer‑reviewed works get published in spite of being based on just such research; after having missed a hidden conceptual flaw the researcher may perform the rest of the work coherently and competently, and then it might be hard for a reviewer to spot the relevant flaw. Even having spotted it, it might be a struggle to justify the view that the paper is ill‑founded. A major source of such disasters is not poor work, so much as experiments based on preconceptions or poorly constructed or inapplicable questions. Even flawless work on meaningless questions produces meaningless answers, and preconceptions often mask or rationalise the futility.

Whether experiments in good faith and good practice in science have been well designed or not, if the observations are too poorly consistent with the predictions, we discard the hypothesis, modify it, or try again with a totally new hypothesis. We never prove it. We never forbid anyone to doubt our work or re‑test the hypothesis or propose alternatives or extensions. We never demand that anyone accept a hypothesis. We only refuse, when anyone proposes an alternative, to accept such an alternative before we in turn have convinced ourselves of its merits.

It hardly matters whether this is because "we" as "scientists" are so virtuous, so liberal-minded, that we would never dream of imposing our diffident opinions; we know too well that if we did try to impose them it would have little effect. That is how the process of science works.

Science depends on conviction.

Conviction by compulsion certainly has worked very frequently and widely in history and in contemporary education, religion and politics, but as conviction goes, conviction by compulsion is transient. After a century, or a professional lifetime, or sometimes within a year, future generations, rightly or wrongly, will come to hold it in scorn.

It does not follow that because a hypothesis is untestable by any observation accessible to me, it cannot be investigated and falsified by some other subset of the scientific community, perhaps even by a single member. Members of such a subset may be perfectly scientific in their work, no matter how scientific or unscientific my work had been. Nothing in the nature of science guarantees that every proposition that is meaningful in terms of falsifiability to one worker, must be equally meaningful to every other. There might be differences in skills, in equipment, in resources, in chance observations. How is one to react to a scientific claim that one is not in a position to test personally? Is every such claim meaningless by definition, to everyone but the comprehending observer in person?

Not necessarily. It depends on our personal world view and intellectual taste, how high a level of confidence we demand before we are willing to accept a given assertion as a working hypothesis. The principles of science neither demand that we believe, nor that we disbelieve. The world is too large for everyone to investigate all of it personally in detail. And as I already have pointed out: we cannot delay elementary classes while each student personally verifies every individual assertion.

In discriminating between rival hypotheses, we need not consider only formal falsifiability by personal experiment; it is reasonable and in practice it also is necessary, to give appropriate weight to weaker evidence, such as:

  • a claim's consistency with our experience and opinions
  • the word of other observers
  • the opinions of persons whose skills we respect
  • its consistency with coherent and logical bodies of theory
  • other criteria than direct evidence, such as parsimony and explanatory richness.

None of these is proof either, but they are useful in practice and historically have been of enormous power and value.

Weak or indirect evidence still is evidence — evidence, I repeat, is every item of information that has weight in rationally influencing one's choice of particular hypotheses as being the most persuasive — or completely untenable. Strong evidence carries the most weight; weaker evidence carries correspondingly less. There is no generally cogent basis for assessing the weight to assign to any item of evidence in any particular case; its strength keeps changing according to context and in the light of new evidence, and in any case one's appraisal of context and weight necessarily is largely arbitrary.

Except in religion there is theoretically no such thing as absolute evidence, only a range of cogency that extends from an interesting speculation at one extreme, to repeated, independent, precise, practical observation, predictable, quantitative, and explicable, at the other.

There is yet another problem with the concept of formal proof in empirical science: because of the principle of underdetermination, we never can show formally that we have listed all possible meaningful hypotheses about something that is observable and falsifiable in principle. It accordingly is not even possible to prove that the correct hypothesis (the "god's‑eye‑view", or some simplification or representation thereof) either is the one that our observations support best, or even that any part of it is among the alternatives that have been considered.

We cannot even be sure in principle that our conception of the phenomenon is framed in terms that can meaningfully be related to the "god's‑eye‑view," the G‑E‑V.

To illustrate this very important point, consider someone at the level of technological sophistication of the typical hunter‑gatherer, who for instance had never seen or heard of magnetism or electric sparks or currents, and had no conception of electricity or magnetism: such a person would have great difficulty at several levels, formulating a meaningful theory about how a battery-operated fan works. Or imagine a remote islander who happens to have no knowledge of modern technology: he encounters a battery and a radio transmitter. He finds that if he puts the battery into a likely-looking slot, some lights go on. He soon recognises this as an emergent effect, and not one that he could have predicted. A radio technician could have predicted it, but the islander, no matter how intelligent, is not that kind of technician; he understandably assumes that producing the visible light is the function of the transmitter‑plus‑battery.

What he cannot see, and is not trained even to imagine, is that the light he sees is not the assembly's primary function, which is the invisible radiation of radio signals. Nor would he guess that a suitably matching distant receiver of the radio signals could, say, reproduce sounds detected by the transmitter’s microphone, steer a drone, or set off a bomb.

We in turn have no idea at present, how many levels and dimensions of sophistication we stand below the TOE of the G‑E‑V.

To be sure, we have some persuasive views about the current standard of our scientific world view, but so did Archimedes, Galileo, Newton, and any number of 19th-century geniuses. Until we have some better perspective on our own level of sophistication, it is not for us to sneer at that hunter-gatherer.

Guess, Grope, Gauge, Accommodate

He's not of none, nor worst, that seeks the best:
To adore, or scorn an image, or protest,
May all be bad. Doubt wisely, in strange way
To stand inquiring right, is not to stray;
To sleep or run wrong, is. On a huge hill,
Cragged and steep, Truth stands; and he, that will
Reach her, about must and about must go,
And what the hill's suddenness resists, win so.

John Donne Satire III

A major problem I encountered in composing this essay, was trying to sequence the topics. I find it hard to tell when to approach such material bottom‑up, and when top‑down.

This is consistent with what I regard as the most valuable lines John Donne ever wrote. The heuristic nature of scientific progress may demand alternating attacks, first one way, say top‑down, then a different one, very likely bottom‑up.

Or transversely?

To insist instead on imposing your preconceptions all the way through, usually top‑down, or just confused, tends to harden the mental view, and to rationalise or complicate ideas where rigidity would be a blunder at best.

 

De‑, In‑ and Abduction

What is called understanding is often no more than a state where
one has become familiar with what one does not understand.
Edward Teller

Deduction, induction, and abduction are loosely-defined, loosely‑used terms for some of our commonest forms of reasoning, especially reasoning in science. Various authors in various languages have defined their concepts variously and inconsistently for millennia rather than centuries. I do not undertake to deal with them coherently, partly because of the sheer volume of the existing published material, and partly because there is no item in the entire topic on which various authors have not contradicted each other or themselves, either in definition or in practice, in logic or semantics.

None of the more supportable versions of their views is purely right or wrong in itself, and they are not as cleanly distinguished as some of the smuggest authors suggest, but whole lists of fallacies violate their various principles one way or another. This chapter is a commonsense (or at any rate, informal) exploration of a few modes of thought; I do not claim to offer anything definitive myself.

So don't waste energy on pedantic criticism; you might like or reject these views, but my intention is not formal instruction: only to supply a basis for thought, or for gaining a perspective on some of the views I try to express in this essay.

Deduction

As a method of sending a missile to the higher, and even to the highest parts of the earth's atmospheric envelope, Professor Goddard's rocket is a practicable and therefore promising device.
It is when one considers the multiple-charge rocket as a traveler to the moon that one begins to doubt, for after the rocket quits our air and really starts on its journey, its flight would be neither accelerated nor maintained by the explosion of the charges it then might have left.
Professor Goddard, with his "chair" in Clark College and the countenancing of the Smithsonian Institution, does not know the relation of action to reaction, and of the need to have something better than a vacuum against which to react.
Of course he only seems to lack the knowledge ladled out daily in high schools.
New York Times Editorial, 1920

Let's first deal with deduction; it arguably is the least contentiously described. Superficially it seems to be the tightest form of reasoning, because it is the basis for formal proof. Popper's major works largely were directed at finding means for basing fundamental reasoning in research on deduction rather than induction. To my mind he failed dismally, largely for insufficient recognition of the difference between formal and applied reasoning, and their respective relevance. I also saw precious little sign of his appreciation of the significance of underdetermination.

As I see it Popper's falsification principle was a blunder; I suspect that in practice one might argue that deduction is not the most, but the least, valuable form of reasoning in science.

Mind you, deduction as a mode of deriving a conclusion, whether tentative or firm, is neither dispensable nor even unimportant, but much of what is intended or purported to be deduction is neither formally nor functionally deduction at all.

Commonly people take the word deduction to mean any logical line of thought that leads to solution of a puzzling problem, but that is not at all the precise technical meaning. Consider:

The basis of deductive reasoning is binary logical implication, in which:

If A implies B then:

if A is true, B is true;
if A is false then B could be either true or false.

For example:

If the clock is set correctly and the time is three o' clock,
the clock will strike three.

If not, it variously might strike three or any other number of times, or not at all, and it might do so or not, whether the time is three o'clock or not.
Probabilities might be relevant too, meaning that the logic need not be fully binary, but let that pass for now.

The name for this mode of reasoning is modus ponens, but I mention that only in case someone tries to impress you with the Latin.

The other most prominent mode of deduction is called modus tollens, and in research it is arguably more important than modus ponens:

If A implies B then:

If B is false, then:

either A is false, or A does not imply B.

This reasoning is the basis of the mode of research logic called falsification:
For example:

Diamond can scratch glass, so I can test whether a crystal is a diamond by trying to scratch glass with it. If it fails I can be sure it is not a diamond (at least if I can be sure that the glass is really glass; remember to keep underdetermination in mind!). We say that I have falsified the hypothesis that the crystal is diamond.

However, if the crystal does scratch the glass, that test can at most weakly verify or support the hypothesis; it cannot formally prove by deduction that the crystal really is diamond: for one thing, some other crystals, such as corundum and carborundum, can scratch glass as well. Our test was useless in some contexts, but not in all; we have at least eliminated most possible alternatives, such as that the "glass" is diamond, and the crystal is sugar.
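
For the doubtful, the two forms themselves can be checked mechanically; a trivial truth-table sweep in Python (purely illustrative) confirms that each is valid in every case:

    def implies(a, b):
        return (not a) or b                # material implication

    for A in (False, True):
        for B in (False, True):
            # modus ponens: from (A implies B) and A, conclude B
            if implies(A, B) and A:
                assert B
            # modus tollens: from (A implies B) and not B, conclude not A
            if implies(A, B) and not B:
                assert not A
    print("modus ponens and modus tollens hold in all four cases")

The sweep vouches for the forms only, not for the truth of any premise about diamonds or glass; that is where underdetermination bites.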

That is about as much as I can offer here, because the subject is too big. Still, that little hint might help you avoid some of the rawest failures in common sense. And in science.

What is special about deduction, properly applied, is that although, as a mode of reasoning, it cannot prove everything, nonetheless whatever it can prove, really is proof in formal subjects such as logic and mathematics.

But again, and always, and especially in applied logic, as opposed to formal, beware the treachery of underdetermination.

Such deductive power looks very tempting in science, and many thousands (millions?) of junior students have fallen for it, especially in the form of falsification, but deductive logic is no silver bullet. At first sight deduction seems almost infallible, but one must rely on having some standard assertions known to be true: some relevant "facts" from which to derive one's conclusions. The history of science is rife with confident conclusions derived from faulty observations, faulty assumptions, personal delusions, or received wisdom, and yet totally wrong in spite of having been taught by generations of authorities.

Formally we have no such thing as an empirical fact at all, and if anyone claims that you can derive formal truth from empirical data, smile politely and change the subject.

Deduction also is as close as we can get to firm proof in empirical science, that is to say, as a rule, in applied material studies. That sounds marvellous of course, but in fact, beyond formal studies, science has very little to do with formal proof at all. More frequently we work on construction of hypotheses and comparison of rival hypotheses to see which are most powerfully predictive, or even which ones, known to be formally false or falsified, are most useful or convenient as fictions or plesiomorphisms.

Such fictions commonly occur as plesiomorphisms; for example, although we know this to be incorrect, we commonly say that a thrown ball follows a parabolic trajectory; that is close enough for most everyday purposes such as cricket and shooting, and is a lot easier to calculate than ellipses and air resistance.
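
To see how serviceable the fiction is, here is a crude numerical sketch in Python (the throw and the drag constant are invented for illustration):

    import math

    g = 9.81
    v0, angle = 30.0, math.radians(40.0)   # an assumed throw
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

    # the drag-free parabola predicts range = v0^2 * sin(2*angle) / g
    ideal_range = v0 ** 2 * math.sin(2 * angle) / g

    # crude Euler integration with quadratic air drag (k assumed)
    k = 0.005                              # drag per unit mass, 1/m
    x = y = 0.0
    dt = 0.001
    while y >= 0.0:
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        y += vy * dt

    print(f"parabola predicts {ideal_range:.0f} m; with drag, roughly {x:.0f} m")

The parabola overestimates the range by a modest fraction: close enough for cricket, progressively more misleading as speeds rise.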

So, in practice, deduction in applied fields is a slippery tool, and imprecise at best. As I shall show, induction and abduction are more frequently useful in research, once we get past obviosities such as:

A camel that easily can bear a total burden of 100 kg, easily can bear a smaller burden, and a 10 kg burden is a smaller burden than 100 kg.

This camel easily bears 100 kg;

Therefore this camel easily could bear 10 kg.

We don't usually notice such obviosities, because we take them for granted, but we rely on them continually, so perhaps we should be more aware of them; if we uncritically drop our guard, fallacies creep in.

Such insights are not new. They are the basis of many puzzles. Ambrose Bierce satirised them more than a century ago, as follows:

The basic of logic is the syllogism, consisting of a major and a minor premise and a conclusion — thus:

Major Premise: Sixty men can do a piece of work sixty times as quickly as one man.

Minor Premise: One man can dig a posthole in sixty seconds; therefore —

Conclusion: Sixty men can dig a posthole in one second.

This may be called the syllogism arithmetical, in which, by combining logic and mathematics, we obtain a double certainty and are twice blessed.

A more familiar modern example is that you cannot produce a baby in one month by impregnating nine women at once.

Deductions from considerations such as whether or when to seek a vaccination seem to be beyond the capacity of many people.

Another problem, even more serious in my view, is that formal deduction, in spite of its obvious strengths, is very poor at discovery: at generating new insights, exploring new ideas and hypotheses, or seeking solutions to problems. We shall see more about this in considering thought experiments.

Induction

It is a well established and repeated observation in the practice of science, that
the greatest scientist is not necessarily the one who finds the best answers,
but very likely may be the one who frames the best questions.
Anonymous

Now let us consider induction.

A lot of our terminology in science, mathematics, and philosophy is inconsistent, often for historical reasons. This is logically trivial, but can be troublesome and confusing. The term "induction" is one such, and induction comes largely in two flavours: mathematical induction, and empirical induction (or inductive reasoning), though the actual terms vary wildly in their usage.

Both forms of “induction” are valuable in practice — insofar as one manages to use them intelligently and appropriately: Joe Average commonly does not even know about mathematical induction, and has no clue about how to use empirical induction validly.

The mathematical version of induction is pretty tight reasoning; in spite of the name, it really is deductive in nature. The use and form of the method varies slightly according to convention, but I do not urge any particular convention; suit yourself about the details. Fundamentally it applies wherever:

  • One identifies a set of objects and can show that there is a procedure (in this sense, an algorithm) for enumeration of the set, such that the enumeration is certain to include every member of the set exactly once,
    and:
  • One can show that a particular assertion is true of at least some first member of such a set under such a form of enumeration,
    and:
  • One can show that if the assertion is true for any member of the set, then it will be true for the next member in the enumeration, for as long as there is an as‑yet‑unenumerated member,

Then the assertion will be true for every member of the set.

For example, suppose that we can show that if we keep adding whole numbers (integers), starting from zero and going up one at a time (0+1+2+3+…+n+(n+1)), then each time we add the next integer, the sum we get will be half the product of the last integer and the next integer. This is easy to show, as Gauss demonstrated while he was still a child at school.

If we then can show also that it is true for any one example of two integers next to each other, then we know that it is true for all the following integers.

And that too is easy, because if we start with 0, we see that 0 plus 1 gives 1, which is the same as half of 2 times 1. And, unnecessarily for the purposes of proof: the next step, adding 2, gives 3, which is half of 3 times 2; then we add 3, giving 6, which is half of 4 times 3; etc.

If you like to play with such ideas, you can find any number of such examples of induction. Mathematical induction is enormously powerful and versatile; it crops up in all sorts of applications.

If on the other hand you don't like dealing with numbers, then you might prefer to think of something solid, like a chain. Think of a chain of links in an unbranched, single chain in a bucket. As long as you have found the first link, with another link to come, you also can see that every link is linked to exactly one link more. If there is no next link, you know that is the end of the chain. And you can change the assertion to deal with a closed loop of chain instead of a chain with two ends.

And so on. Mathematical induction is a way of proving things that are true about whole sets of certain categories, one member at a time, even if you do not do the exercise for each set in a category. For example, in summing numbers in the way I mentioned, I know, without having to do the addition, that if I solemnly were to add all the numbers up to say, a million, I should get 500000500000; a single multiplication and division will do the trick. Similarly, for the chain: I don't have to work my way along it, nor need to know how many links there are, to know what the outcome would be if I did.
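
The two obligations can be sketched for the conjecture that 0+1+...+n equals n(n+1)/2; the loop below merely illustrates numerically the step that the algebra proves in general:

    def S(n):
        return n * (n + 1) // 2            # the conjectured formula

    assert S(0) == 0                       # base case: true for n = 0

    # inductive step, spot-checked numerically here; the algebraic proof is
    # that n*(n+1)/2 + (n+1) == (n+1)*(n+2)/2 for every n
    for n in range(1000):
        assert S(n) + (n + 1) == S(n + 1)

    print(S(1_000_000))                    # 500000500000, with no summing at all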

Where people tend to come unstuck with mathematical induction is in one of two places: either they forget to prove that the statement holds for a suitable starting case, or they forget to show that it must hold for every following case. They tend to pick three or four cases, and as soon as they fancy they see a pattern, they assume that they have settled the case.

But it ain't necessarily so. Let's prove that 720720 is divisible by all smaller numbers.

Simple: divisible by 1, yes, 2, easy, 3, OK, 4, right. That proves it, yes? What? Not satisfied? Very well: 5, 6? Obvious, huh? Oh, you want to be difficult ...?

Well, we carry on a bit and soon it is so obvious that only a fool could be left in doubt: 720720 is divisible by every number smaller than itself.

But we have not begun by showing that divisibility by n implies divisibility by n+1; in other words we have not applied the mathematical induction proof. All we have is a conjecture. We might be able to prove it by some other test, but what we have done so far, whatever it looked like, was not mathematical induction, and proves nothing.

And sure enough, it turns out that 720720 is not divisible by 17, 19, 323, nor by most other numbers smaller than itself, not even most numbers smaller than its square root. Our sloppy attempt at mathematical induction had misled us.
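
A one-line check makes the point; 720720 is 2^4 × 3^2 × 5 × 7 × 11 × 13, so the first culprit is the first prime missing from that factorisation:

    n = 720720
    failures = [k for k in range(1, 30) if n % k != 0]
    print(failures)                        # [17, 19, 23, 25, 27, 29]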

Half‑doing mathematical induction or formal induction in general, is useless at best, and generally misleading as well.

Empirical inductive reasoning is a different matter.

Unlike mathematical induction, empirical inductive reasoning means trying to find whether a given guess about all members of a particular set is true, by going out to inspect some members of the set; if none of your examples proves the guess wrong, then you empirically assume the guess is true. The currently fashionable historical example is the guess that all swans are white. For thousands of years that is what Europeans believed, and only with the discovery of black swans in Australia was that particular example of empirical induction shown to be false.

The black swan debacle led to a lot of fuss at the time that black swans became known to biologists of the West, because some European biologists of the day thought the idea of black swans was absurd; in those days in fact, the very expression "a black swan" was used for something absurd, much as we might speak of "a mare's nest" or "hens' teeth"; and in some quarters the first black swan specimens brought back to Europe were met with accusations of fakery.

Wasn't that silly of the biologists of the day?

Well, maybe.

But what a lot of people have missed, is the fact that a lot of the strange animals that explorers of the day brought home really were fakes: monkey forequarters sewn onto fish tails were sold as mermaids, and so on. And of course, a duckbilled platypus was an obvious insult to anyone's intelligence.

So what?

And what is more…

When they found a black object, if they bet that it was not a swan, they would win nearly all the time. In fact, before Australia was discovered, they would win all the time.

This is an important principle in science and sense, and it needs to be taken into perspective in dealing with the real, the empirical, world.

So, when we apply the same style of thought to empirical reality, drawing superficial conclusions from a few convenient examples, it is hardly surprising that we can go badly wrong. We rarely find anything to assure us that if something happens one way once, then we know for a fact that the next time the outcome will be exactly the same: shake a stick at one dog and he will cower; the next dog might tear your throat out.

In empirical induction we generally work on the assumption that what we seem to see happening a few times with the same sort of outcome, is always the same thing, and is what always happens. So you see a plain brown snake and catch it: it bites you. OOOPS! But you come to not much harm. Ah, so brown snakes are safe to catch. Too bad if that was a mole snake (which usually are black, but not always) and the next one you catch happens not to be the same thing at all, but a cobra; cobras often are brown, but not always.

Suppose you know nothing about firearms and find a pistol. You pull the trigger and: "bang!" Gosh! Do it again: bang, bang! Hey! Here is an obvious pattern! And again bang, bang, bang! So by empirical induction we obviously have a general law here! Pistols go bang!

Sooner rather than later: "click".

Hm .... Maybe we should check on mare's nests and hens' teeth again.

There is no end to such examples. A particularly poignant one is the turkey fallacy: every day for months on end the farmer appears at the door at the same time, carrying food in his hand. The inductive turkey soon concludes that farmer bearing food is a natural law, and he strengthens his conclusion every day by successful predictions. Being a statistically sophisticated turkey, he calculates each day the increasing degree of confidence he could put into the next prediction. Then the day before Christmas, the farmer appears as usual, but carrying an axe…

Not the same thing at all ...

The turkey had omitted to begin by proving that bearing food on day X must imply bearing food on day X+1; and in the nature of things, no such proof was available.
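
One conventional way to formalise the turkey's swelling confidence is Laplace's rule of succession: after n mornings of food without exception, the estimated probability of food on morning n+1 is (n+1)/(n+2). A sketch:

    # ever more confident, and still doomed
    for n in (1, 10, 100, 364):
        print(f"after {n:3d} mornings: estimated P(food tomorrow) = "
              f"{(n + 1) / (n + 2):.4f}")

The estimate was never wrong as a summary of the data; what was wrong was the tacit assumption that the mechanism generating the data would stay fixed.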

Failure to understand the underlying mechanism or situation may be as fatal in empirical induction, as failure to follow the rules for mathematical induction.

It should be clear that in the examples of inductive reasoning so far, the players had naïvely taken the underlying mechanism for granted, much as a savage would accept unquestioningly that dropped stones fall down — it is a fundamental fact; that is what stones do; what is there to question?

Is there any alternative? Well, what was missing was any clear conception of causal mechanisms; not so much the way things have happened, as what makes them happen, and how. To refuse to acknowledge that what happens so regularly is in the nature of things, is commonly perverse, and leads to painful consequences. I remind you again of Burke's remark in a different connection: "The nature of things is, I admit, a sturdy adversary ..."

Another aspect of empirical induction that is not often taken into account, is that even when it is logically invalid to conclude inductively that what one has observed repeatedly must always happen, yet seeing it happen repeatedly does suggest that there is a probability that favours its happening. If we at first saw a certain coin always land on one of its faces, and inductively concluded that it always would do so, a rare observation might unexpectedly confirm that it is possible for a coin occasionally to land on its rim. 

And yet, for most coins, we would find that landing on its face is the way to bet.

Most people accept that it must always land on a face, but that is abuse of induction; naïvely, they could estimate the frequency by tossing the coin thousands of times, but that is seldom fully satisfactory: the frequency of rim landings could change with the lapse of time, say becoming rarer as the rim of the coin wears down and becomes more rounded. 
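
A toy simulation (the rim probability is an assumed round figure, purely for illustration) shows why modest samples can miss a rare outcome altogether, and why even a huge sample pins its frequency down only roughly:

    import random

    random.seed(1)                         # for a repeatable illustration
    p_rim = 1 / 6000                       # assumed chance of a rim landing
    for tosses in (100, 10_000, 1_000_000):
        rims = sum(random.random() < p_rim for _ in range(tosses))
        print(f"{tosses:>9} tosses: {rims} rim landings")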

Often in practice we accept the consequences of our reliance on that abuse of logic; we cannot spend all our lives chasing trivialities; but a major, major merit of induction is that it may lead us to make a study that reveals the causal reason why and how and how much the inductively described behaviour occurs in practice. 

This leaves us with a need to explain, not why empirical induction is so fallible, but why it so commonly is successful. The ways in which things might have happened are hardly limited in general, but many things, for many reasons, can only happen in a small number of certain ways, and, of that number, they more often happen in some ways than in others. So, having watched them happen a few times, we inductively infer that they behave in certain ways, even though we might lack any cogent proof of why they do that.

So for instance, we find that a cubical die, when tossed, soon settles onto one of its six faces, even though its first contact with the surface is usually by one corner. A coin however, especially if very thick, though it generally will settle on its face, may rarely settle on its rim, but never on the edge of its rim. This we can rapidly determine by empirical induction.

And similarly, when we toss an ordinary coin a few million times on an ordinary floor in an ordinary room, we wait in vain to see the falling coin turn into a die, or a soap bubble, or an ostrich, or float up to the ceiling, or in any way behave other than might inductively be expected of a coin, especially once we have come to understand the mechanics of the typical behaviour of tossed coins. This happens whether the observer is a modern physicist, or a naïve member of a rural tribe.

Even when the result is highly unexpected, this remains true. A pretty example is the so-called tippe top. The ideal tippe top is more or less mushroom-shaped, and on a level surface it commonly rests with the cap, the heavy side, beneath. However, once it is set spinning, it quickly overturns, lifting its heavier end against gravity, apparently reversing its axis of spin. It then settles into a stable attitude until its rate of spin decays too far to support it, after which it falls over.

 Without the actual experiment, there would have been the temptation to dismiss such a thing as impossible. After all, spontaneously reversing its spin clearly violates the principle of conservation of angular momentum and of energy. And yet, when we try the experiment in the expectation of more rational behaviour, induction rules OK!  At least until our patience or our toy wears out or our pistol runs out of cartridges or the turkey farmer brings out his axe ...

When something like that happens, we can be sure that we need to pay more careful attention to our theory, our interpretation, or our assumptions. We need to check for trickery, such as perpetual motion swindles or conjuring. If we can discount anything of that type, then it is time to reconsider one's preconceptions. Something has to be wrong somewhere, and if deduction fails to do the trick, then it is time to look to abduction and induction as bases for fresh inspiration.

And sure enough, in the case of the tippe top, more careful observation shows that although the direction of spin from the point of view of the top inverts, angular momentum is maintained because, as seen from outside, (most conveniently from above, but suit yourself) the direction of spin remains unchanged from when it started, either clockwise or anticlockwise. Also, the rate of spin slows down as the centre of mass rises, and it does so to match the loss of kinetic energy against the gain of potential energy.

Very pretty, even elegant, but so far it gives no reason to abandon applied mathematics in physics.

But the history of science is rife with discoveries that were contradictory to the received wisdom — think of Galileo seeing Jupiter’s Moons; Newton's universal gravitation; Schleiden and Schwann's cell theory; the germ theory of disease that was the fruit of the labours of many workers; galaxies beyond local space; atomic structure; quantum theory; special and general relativity; Griffin's discovery of sonar in bats; there are hundreds of examples. What our successors will make of the reconciliation of quantum and relativity theory, I would love to know, but I suspect it will be one of the great examples in future history.

All the same, more and more of our current scientific discoveries, however sophisticated, arise from a perspective sufficiently wide to ease the acceptance of new advances. The bits increasingly hang together.

The limits to the ways in which things can happen are uncompromising, and every inductive conjecture based on observation and abduction, constrains the possibilities. Even if the reasoning from observation is not formally valid, it would be grossly irrational to infer that the conclusion must be wrong; it accordingly would be unsound to insist that research ("science") should be, or could be, purely and formally deductive.

In fact, in the physical world, science and reality never are purely and formally deductive; our interpretations and conclusions are plesiomorphic at best. And the underlying reality itself arises from causes far more chaotic than anything we can afford to waste our time on studying.

And such a chaotic nature of reality need not be bad; certainly it often works well in natural selection. A wide variety of evolutionary strategies has turned out to be successful, sometimes for over a billion years. The best bet is generally that what you see once is the way things happen, and what you see most often is the way things most often happen, according to underlying causes.

In the early 20th century the likes of Karl Pearson and Sewall Wright began to popularise the concept that "Correlation need not imply causation". Note the "need not". They were by no means the first to stress the point; it had long been recognised; but the popular view was to the contrary: that, as Thoreau put it, “Some circumstantial evidence is very strong, as when you find a trout in the milk.”

The facile, formally invalid, and commonly illogical view that tends to value, even assent to, circumstantial evidence, amounts largely to trust in empirical induction. The converse view, that causation commonly implies correlation, also has been recognised for many years, even centuries, but Jack and Jill Average commonly fail to sort out the significance. I discuss some of the considerations under the heading of "causal webs".

Abduction

The most exciting phrase to hear in science, the one that heralds new discoveries,
is not 'Eureka!' but 'That's funny ...'
Isaac Asimov

 

The further we venture into such matters the deeper we stray into the field of abduction.

Abduction is a form of reasoning that goes from an observation or a speculation, to a hypothesis. To illustrate the concept, I find it entertaining to refer to a story: “Breaking a Spell”, in the book “Odd Craft” by W. W. Jacobs:

 ...'e went 'ome one day and found 'is wife in bed with a broken leg. She was standing on a broken chair to reach something down from the dresser when it 'appened, and it was pointed out to Joe Barlcomb that it was a thing anybody might ha' done without being bewitched; but he said 'e knew better, and that they'd kept that broken chair for standing on for years and years to save the others, and nothing had ever 'appened afore.

True, true; but abduction need not be superstitious or stupid; practically any new scientific hypothesis begins as abductive speculation. And so does every new line of thought that arises when encountering a mental obstacle in exploring an emerging field. Examples might reasonably include the early phases of:

  • Newton’s work on motion and gravity;
  • Mendel on genetics;
  • Darwin on natural selection; and
  • Periodic characterisation of chemical elements.

In fact it is hard to see how early exploratory work on almost anything could begin without abduction. And abduction, even in some of the greatest works of genius in our history, is invariably at least partly wrong. If things were otherwise, research could hardly be called research.

The various definitions and explanations of abduction tend towards incoherence and mutual inconsistency. I don't pretend that mine are compact or compelling but I wish to show at least how, as I see it, the very concept has several aspects, and that they need separate consideration. Whether anyone has assigned those aspects to separate categories with distinct definitions, I do not know, but, for our purposes here, I doubt it is necessary to do anything of the kind.

Part of the problem is that the different categories grade into each other. Pure inductive guessing has no compelling implication or basis to support its basic assumptions; pure deduction proceeds from premises assumed to be true.

As a concept separate from induction, abduction is comparatively recent; as far as I can tell, Charles Sanders Peirce coined the modern usage of the term, and as one of the best of our early modern philosophers of science, he began well. Popper for no obvious reason (I find it hard to believe that he never encountered the concept) never seems to have used the word, much less distinguished it from empirical induction, nor recognised its importance to research.

As a philosophical strategy, abduction bases its proposed conclusions on not necessarily perfect assumptions and not necessarily cogent derivations. It also takes at least three forms:

  • proposing causes to explain empirical information,
  • proposing consequences of assumed causes, and
  • proposing mechanisms by which assumed causes give rise to assumed outcomes.

This is just one point of view: it certainly looks vague and confusing, largely because it is vague and confusing.

So why even think about it?

Because that point of view is itself a good start to empirical thinking. Abduction is fundamental to investigation, whether scientific or not. Before Peirce ever described abduction and its proper use, human thinkers had been using abductive reasoning for thousands of years. Abduction is the basis of the starting points of most constructive or exploratory thought in science, management, common sense, and arguably also in creative work, whether artistic or technical.

Suppose, not being a turkey, you consider a visit to an orchard. You pick a fruit from a certain tree: a pear — ah, very good; pear trees bear pears! You go back for another and tell your friend how good it is; he asks where you got it from? First tree in that row. He comes back with a cooking apple, complaining about your misleading him. After increasingly heated recriminations, you both go back to the tree and sure enough, one branch bears apples and another bears pears.

 WHAT??? How ...?

Harvesting apples and pears from the same tree flies in the face of your sense of underlying biological mechanisms that you see as dictating that apple trees bear apples and pear trees bear pears. It differs from inductive generalisation from repeated observation, in that in the light of long history and research we also understand something of the nature of biology, in particular of heredity. We know very well why apple trees don't bear pears.

Then you discover that the tree was indeed an apple tree, but one that had had some pear wood grafted onto it.

Time to think again.

As one comes to understand more about the nature of the phenomena one studies, the nature of one's induction changes subtly, and mentally we develop into the field of abduction. Our turkey had been collecting data without comprehension; his conclusion was unthinking induction. Your assumption about pears and pear trees was at least based partly on some understanding of the underlying biology. What you knew, and thought you knew, formed the basis for a coherent hypothesis on which you could base deductions and predictions.

That is part of the essence of abduction.

The naïve commentator might call it guesswork.

And the following parable from the WWW is food for thought:

Researchers put several apes into a cage in which a banana hung above a ladder. An alert ape promptly went to the ladder to get the banana, but as soon as it touched a stair, they all were sprayed with nasty cold water. Every repeated attempt had the same result. Within a few days, each time an ape went near the stairs, the others would violently prevent it. The fuss gradually dissipated.

The researchers then replaced one of the apes with a naive one. The new ape saw the banana, and immediately tried to climb the steps. All the others attacked him. He soon learnt: forget that banana; stay away from the stairs, or get beaten up. The researchers then removed a second experienced ape and replaced it with another new ape. The newcomer in turn went to the stairs and got beaten up. The previously new ape, who had never seen the spray in action, but had been beaten, enthusiastically participated in the correction.

A third ape was replaced and the next novice learnt the same lesson from fellow‑apes in turn, two of whom had no idea why they must not permit anyone to get too close to the ladder. This was repeated till no ape was left who had ever experienced the water spray. Nevertheless, by then only novices would try to climb the stairs.

. . . One day a new young ape asks, "But Sir, why not?"

. . . "Because that's the way we do things around here, my boy."

One could say that the apes had encountered what Francis Bacon called the “Idols of the tribe”. They had to toe the line drawn by the community, whether it was comprehensible or not and whether it made sense or not.

The moral usually drawn from this parable is a sneer at the mindless way those apes did things round there, but on what basis might anyone suggest that humans would do better than apes in such a cage? And apes or humans, clever or stupid, then, unless the researchers had turned off the waterworks in the mean time, the first novice to buck the system would have reinforced Pope's lesson the hard way: "a little learning is a dangerous thing".

Common sense and logic.

If an animal does something they call it instinct.
If we do exactly the same thing for the same reason they call it intelligence.
I guess what they mean is that we all make mistakes,
but that intelligence enables us to do it on purpose.
Will Cuppy

As a matter of simple common sense we learn to distrust naïve empirical induction, let alone abduction, both of which certainly are common bases for fallacy; and yet empirical induction is the basis of most of our dealings with reality. Not only humans, but nearly all our sentient fellow‑species, work on the basis of learning what usually seems to happen as an apparent consequence of our actions, and of various types of events around us. A horse or dog that has experienced an electric fence a few times, may not understand electricity, but quickly learns to steer clear of that fence.

And it works! Naïve empirical induction works so reliably that it is the basis, not only of conscious learning and even apparently mindless reflexes and Pavlovian conditioned reflexes, but also of physiological adaptation to exercise or food supplies — even of evolutionary adaptation by mechanistic natural selection. By implication, all of them, mindless or not, "put their trust" in the consistency of events.

Induction is not mocked!

I'll discuss this under another heading, when dealing with the ideas of David Hume.

Meanwhile, the ape parable leads us further into the topic of abduction. Either as individuals or as a group, the apes were unequipped to develop theories about how or why touching those steps led to dousing or beating. Their induction was pretty nearly pure — it was one-dimensional. No blame to them; technologically unsophisticated humans would do no better, and dogs probably would not achieve even the preventive measure of punishing would‑be transgressors.

First‑world humans in a similar situation might however, investigate the set‑up for sensors designed to trip the showers, and for possible means of bypassing or inactivating them. Or they might look for missiles to knock down the banana. They might speculate on the motives of the creators and imagine ways to communicate with them, demanding: "Why are we here? Let us out or improve the comfort level! Stop maltreating us! Find some better means of communicating with us than squirting us!" They might brave the shower to get up the ladder in the hope of finding a way out.

Such reactions would require more advanced levels of abduction aimed at understanding the nature of their situation and of dealing with it. Less-educated humans might not do as well, or they might surprise us with unexpectedly more sophisticated reactions or mythology than first-worlders — I simply do not know.

The terminology is inconsistent, as I already have emphasised. The very word abduction in this sense is little known outside circles of professional philosophy.

In science, abduction is the major basis for initiating the generation of new explanatory hypotheses and for discriminating between them on the basis of available information or opinion. It is the basis too, for generating means of learning more about the subjects of speculation in an underdetermined world.

In technology abduction is the major basis for diagnosis of problems and developing solutions for them. Such abduction includes large categories of invention: recognition of the absence of something desired or presence of something unwelcome, and of options for improving the situation.

It is not proof in general.

It is not rationally presented as formal proof at all.

Conjectural lemmata

In science:
If you don’t make mistakes, you are doing it wrong
If you don’t correct those mistakes, you are doing it really wrong
If you don’t accept that you make mistakes you are not doing science at all.

Anonymous

Strictly speaking, a lemma is something proven as a step in the proof of something else. For instance one might prove a more general proposition before proving that one's main, more specific, proposition is a special case of it, and follows accordingly. Suppose that I needed to prove that there had been a noise in the forest. Suppose that I can show as a lemma that any tree makes a noise when falling. Then, as proof that there had been a noise, I can present the observation of a fallen tree in the forest, where there had been no fallen tree before.

Well, in this essay I do not try to prove much, but I do at times urge particular opinions. To this end I first offer conjectures that I propose as persuasive; I might firmly believe them, or consider them possible or desirable or interesting, but in any case I see them as illustrative or probable or stimulating. I might for example say: "Look, I cannot prove that every falling tree makes a noise, but I have investigated a lot of tree‑felling, and so far every falling tree I have seen was noisy, so I conjecture that every falling tree makes a sound, and if my conjecture is correct I may conclude that where I see a fallen tree, there will have been a sound whether I heard it or not."

An illustrative fable tells of two brothers, one a pessimist, one an optimist. One Christmas Santa gave each one an anonymous gift. The pessimist received a case of single-malt Scotch: his reaction was "Oh no! What a hangover I'll have!" The other got a sack of horse manure: "Oh goody! Someone's given me a horse!"

When I propose something of either of those types I might call it a conjectural lemma.

For one brother, the conjectural lemma was that he could not resist Scotch, that drinking Scotch leads to hangovers, and that he now had enough Scotch for a monumental hangover; for the other, the lemma was the received wisdom that where there is horse manure there must be a horse, and he now had the manure.

Their specific lines of reasoning might be valid or not, but as long as they suit the point I am urging in the context of induction and abduction, that is all we need. If anyone encounters such lemmata and their consequences, and sees fit to categorise them in context, fine. But my conjectural lemmata are no more than illustrations or proposals, not proofs.

 

Gedankenexperimente: Thought experiments

If we knew what it was we were doing, it would not be called research, would it?
attributed to Albert Einstein

The apparent simplicity of the idea of a thought experiment, like many apparently simple ideas I discuss here, could hardly be more treacherous; there are whole books on thought experiments, and treachery is the one point on which the authors agree with the least reserve. Almost any other point is up for debate.

And I agree with those authors because I derived the same view independently.

So don’t take my definitions and remarks too rigidly. A thought experiment, loosely speaking, is when one suggests or assumes situations or states that actually might or might not be possible, and one deduces certain conclusions from them, or from particular axioms. The experienced reader in this topic will recognise several examples that arise in this essay; in particular, I mention some in the section on magic.

However, such magic is not the only kind of thought experiment one gets. Even the most obsessive scientist does not carry out experiments to confirm that every assumption that occurs to him, or that he relies on, will correctly predict a given material outcome, or will give a precise result. Instead one often can ask oneself what the result would be if one changed the assumption to...

Something different.

And the fact that it is something different, amounts to another thought experiment in its turn.

And more often than not, you can be pretty sure that you have sufficient reason to accept your rough conclusion as a working hypothesis. Even if you are not justified in your conclusion, you might be satisfied enough to skip checking it. Then, whether you have convinced yourself or not, you might pass on to consider a more ambitious formal or material experimental programme instead.

But every scientific experiment is essentially heuristic: you begin with a question and see which answers suggest themselves. Will hydrolysis of vinyl chloride give you vinyl alcohol? If not, why not? Will spinning a prolate spheroid around its long axis give a stable spin? Given a spacecraft manned by a team of astronauts, will shooting it at the moon from a cannon be more effective than propelling the craft by rocket? On Earth, will clear weather permit a mountaineer with a telescope on Everest to see the Empire State Building, as a flat Earth would predict? Will a pedal-driven propeller above one's seat be a basis for a working helicopter? Will heavier-than-air flight ever work? Will universal education solve all social problems in an egalitarian society? Will rotary engines work better than piston engines? Will faith move mountains? Will elimination of private ownership lead to a stable, productive, unselfish, non-competitive, undespotic society? Will natural selection lead to the emergence of new species? Will injection of particles into the stratosphere mitigate global warming? Will electric cars prove better than internal combustion? Will flying too close to the sun with waxen wings cause the wax to melt…?

And on, and on…

Every new development depends on assumptions about what we think we know, or what certainly is not yet known, and it is never certain how many assumptions are implicit, and which of those assumptions will matter, yielding either frustration or serendipity.

Sometimes just exploring assumptions and their implications will lead at first to incredible advances in theory, such as, say, non-Euclidean geometry; some will cause gross upheaval of major fields of science, such as happened with relativity and parts of quantum theory and information theory. Some will build on partial understanding of reality, and provide advances and problems variously greater and less than expected, such as plastics, nuclear physics, and flight engineering.

In all of those examples, thought experiments played roles throughout.

In all of them, ignorance was a factor: ignorance of our facts, ignorance of our assumptions, ignorance of their combinatorial relationships; if ever there were no ignorance, there would be no need for experiment, whether in thought or in practice.

Thought experiments play their role in all intellectual advances, whether by deduction, induction, or abduction.

Deductive inference, valid or otherwise, is arguably the most implicit component of thought experimentation; there is always an element of: “Suppose this .... then that must follow .... ” The conception might be accurate, mistaken, or downright misleading, even meaningless, but the form is at least that of premises followed by conclusion. The reasoning might be reductio ad absurdum, or direct, but in either form it is formal deduction from assumptions.

Then there is abduction. You see something, and you base an idea on it (“Oh, a wooden ball falls more slowly than a stone ball ... I wonder…”), or an isolated thought occurs to you (“Oh, suppose that apple tree were taller ...”), or an apparent insight (“Practically every complex organism passes through different stages of differentiation, growth, feeding, competition, and reproduction in its life history, so each must undergo metamorphosis, more or less obvious, not just insects ...”), or (“Look at the way arbitrary shapes of rock on beaches or in potholes in streams are tumbled into beautifully precise spheroids; the principle must be that salient irregularities get ground down preferentially ...”).

As for induction, very similar principles apply, as you can see if you work your way through the examples given in the section on induction.

In every case you work your way from assumptions and ignorance to conclusions or hypotheses of various degrees of usefulness. They might be very crude, though useful, such as flat Earth (works fine for large-scale local maps) or precise, such as Euclidean geometry (excellent for carpentry) or Newtonian physics (fine for non-relativistic, non-quantum systems).

Even now, though in some respects we have achieved some very high degrees of predictive power, the one thing we can be most certain of is that we are nowhere near any finality.

And thought experiments are among the tools that underline our ignorance.

And do it most cheaply, and sometimes most quickly.

 

Infinity, Finity, and Cosmology.

The existing scientific concepts cover always only a very limited part of
reality, and the other part that has not yet been understood is infinite.
Whenever we proceed from the known into the unknown we may
hope to understand, but we may have to learn at the same time
a new meaning of the word ‘understanding’.
Werner Heisenberg

Entities and atomism and not much confidence.

You can't prove anything about the physical world by logic alone.
Anonymous

This is where things start getting messy.

If I had lived in the time of Democritus with his idea of atoms, I suspect that I would have been one of the sceptics who rejected his silly assumption. As far as I can make out, he said that there had to be a smallness beyond which it was impossible, even in principle, to divide an object into any smaller particles. The sceptics argued that on the contrary, whenever it was logically possible to split a big lump of something, it had to be logically possible to split a small lump. Consider apparently amorphous cheese, or water droplets, or cleavable salt crystals for example: obvious, isn't it?

Analogous to halving a line in Euclidean geometry: one can repeat the operation forever, getting new lines or droplets or crystals half as large each time.

I was grown up before I began to change my mind. I had long since accepted the concepts of molecules and atoms, but still rejected the reasoning of Democritus, or the lack of it, though what he actually said was more sophisticated than usually is mentioned in class.

However, the more I saw of how the world works, the more pervasive the concept of atomism became. Not just in contemplating the chemical elements, but in biology, physics, logic, and practically everything. Without going into detail, two modern ideas concerning atomism are of immediate interest. Neither of them deals with what we commonly call atoms (which strictly speaking are not "atoms", anyway — and certainly not in the sense intended by Democritus and some of his associates).

The first of the two modern ideas deals with ultimately indivisible particles. I suggest, as a matter of nonessential opinion and no more, that such particles might (or might not) correspond to at least some of our currently perceived elementary leptons, quarks, bosons, and the like.

I might be partly or completely wrong here, but, as I see it, the validity of the concept of ultimately indivisible particles does not greatly matter in our context: all I am concerned with is the idea that particles in our world are not turtles all the way down, that not every particle is a structure of sub‑particles and then of sub‑sub‑sub particles or fragments.

That denial is not a doctrinal assertion — it is no more than an opinion, with its implied assumptions about our world of perception and whatever passes for the reality underlying it. I quietly take it that there is a stage beyond which the behaviour of what we see as points or physical point‑like particles, goes no further. There is where Good Ol' Dick, my chosen bottom turtle, finds his level. Underlying realities reduce to whatever constants reign at that ultimate level and Nature itself goes no further.

I offer that view without proof: it is not formally axiomatic, but it is assumptive. I comfortably reject any opposing assumptions until their proponents can present them cogently, or at least with strong experimental support. After they achieve that, I shall be appropriately astonished and happy to acquiesce.

For example, last I heard, no one had yet managed to cleave an electron neutrino or an electron, not even in the likes of a double slit experiment. In contrast some discussion was under way, about whether quarks were compounded of sub‑particles. Whether they are or not, I shall assume that there are such things as particles that are as much indivisible and point‑like as might be possible in our world. That will do well enough for our purposes.

That is what I assume, in full awareness of concepts such as wave‑like and particle‑like behaviour, although those concepts are no longer respectable in their naïve form: atomicity is the concept I am referring to here, and atomicity is fundamentally unaffected by those considerations.

So, such particles amount to "atoms" — for our purposes: "indivisibles".

At this point I introduce a neologism: I started writing about splittable items as being “non‑atomic”, but the clumsiness of talking about what amounted to “non‑non‑splittable” items became irksome, so I have changed all those references to “tomic”. I cannot find that usage anywhere in physics textbooks, but the term is convenient, so I present it here. My apologies to anyone who hates the word, but feel free to choose your own terms when you are the author!

More relevantly however, there are at least two separate and distinct senses in which I use the term “atomic”, and they do not have much to do with each other:
. . . firstly: not being tomic in any sense, and
. . . secondly: being the fundamental particle of a chemical element.

So for example, an alpha particle is the nucleus of a helium atom, which does not mean that it cannot be split by suitable application of force.

This is where the second of the two modern concepts concerning tomicity arises. I will discuss it in more detail later, but let's introduce it here: some things that one can split physically none the less cannot be split without changing their nature. Our familiar atoms certainly are atomic in the sense of being atoms of elements, but physically they are tomic: not only do their nuclear structures consist of hadrons that can be separated or can clump or interact in particular circumstances, but the atoms can shed, shift, collect, or share outer electrons in chemical reactions or electric fields, and their nuclei can spall or split or grow when hit by suitably energetic particles of the appropriate nature, so they are decidedly messy structures.

But if you split a nucleus you do not get the same sort of thing that you started out with. Suppose you evenly split an atomic nucleus of sulphur (possible in principle, though hardly practicable): you do not get two smaller helpings of sulphur — you probably do get at least two atoms all right, but two atoms of oxygen, not smaller atoms of sulphur. One atom of sulphur is the smallest helping of sulphur you can get, much as half a pair of gloves is not a smaller pair of gloves.

The whole topic is very messy in fact, because, while some of the candidates for atomic status, some leptons in particular, might be indivisible for all I know, most of the particles of interest are compound, whether divisible or not.

Let's not go into that — not till later anyway.

Atoms in spaces.

"The fool hath said in his heart that there is no null set.
But if that were so, then the set of all such sets would be empty,
and hence, it would be the null set. Q.E.D."
Bas van Fraassen

Now I descend into hand waving, fables, and speculation. Imagine some empty space of indefinite extent: in effect a universe that has nothing within its event horizon, except space. Whether that space has a horizon, and whether the horizon is a yoctometre or a yottametre away, or whether it has any form of horizon at all, we do not yet specify. It is not even clear to me that distance, or even direction or time, could be meaningful concepts in such a universe, so I do not discuss them here.

It also is not at all clear that “space” is "nothing", but let that pass for now.

That imaginary feat demands more (or perhaps less) imagination than you might like to deal with; I am not even sure whether the whole idea in itself is meaningful or not — for example, would such a space accommodate vacuum fluctuations or not? And if it did, what sort of particles could one expect to fluctuate into and out of such a vacuum? Could that universe accommodate more than one fluctuation at a time? And if it could, could multiple fluctuations interfere and create complex structures that could no longer fluctuate? If you happen not to be familiar with vacuum fluctuations, don’t let that bother you — this is an abstract exercise, so bear with me.

In such a universe, outside observers are among the things that are excluded, and so are photons or other particles that could carry information, so we could never really see anything. After all, if we were there to watch, that universe would no longer be empty space, right? I rely on a magic God's‑Eye‑View (call it G‑E‑V), an ability to see things without needing to reach in and disturb the things we look at, and to do so without needing to transmit information. To the best of my belief, this is absolutely impossible even in principle, and I remind you that it is purely a thought experiment.

Now, to begin with, imagine that into this imaginary universe we release a population of identical notional particles, each of which behaves in ways consistent with Newtonian momentum: if a particle is moving in a given direction, it continues in that same direction at a constant velocity. That is very much as particles behave in free space in our universe, but in our thought experiment the particles are not subject to the effects of gravity or any other accelerating influence from each other. The only way in which the behaviour of these particles is unfamiliar to us is that they have absolutely no effect on each other. They don't bump, or exchange photons, and their paths do not curve in each other's electromagnetic or gravitational fields. If they have spin or anything like it, they ignore each other's spin. They certainly are not fermions, and it is not clear to me that they could be bosons.
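
For readers who like their fables executable, here is a minimal sketch of such a population in Python; the names and numbers are mine and purely illustrative, and nothing in it pretends to be physics:

    # Each particle keeps its velocity forever; none takes any notice of the others.
    particles = [
        {"pos": (0.0, 0.0), "vel": (1.0, 0.5)},
        {"pos": (3.0, -2.0), "vel": (-0.3, 0.0)},
    ]

    def step(particles, dt=1.0):
        for p in particles:
            x, y = p["pos"]
            vx, vy = p["vel"]
            # straight-line motion at constant velocity: the whole rule of this universe
            p["pos"] = (x + vx * dt, y + vy * dt)

    for _ in range(10):  # only our magic G-E-V sees these coordinates at all
        step(particles)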

In short, each particle behaves as though all the others do not exist at all — not even slightly. In fact, in such a notional universe none of those particles really does exist as far as the others could be concerned: they do not exchange information; no events result from their mutual interaction or existence. Only an observer with a suitable G‑E‑V could recognise that they all have coordinates inside the same space.

All the same, with our G‑E‑V, we can observe one important and impressive thing about those particles in that space: they do have intrinsically consistent behaviour.

It follows that their behaviour is constrained: there are things that they do, and things that they do not do.

In any universe, anything that happens, anything that is done, is an event, a change: a change in time, with some situation before and some different situation after. I could state as an article of faith, that the change in situation will imply a change in entropy and information, and I suspect that its very occurrence has to do with the definition or creation of the passing of time, of time's arrow. But I am so vague about the very meaning of the terms and arguments, that I shall not urge them. Instead I pass them on as not much better than suggestive hand waving.

But I do so without much apology. Key questions and key suggestions often are more important than key answers, which is an idea that I am not the first to suggest. Since I cannot know in advance which suggestions and questions are key to anything of value, I ask first and argue after, if at all.

Note that it does not follow, from the fact that these particles do not affect each other, that they must lack all other attributes that could affect different notional particles. For example, in principle we could then introduce some different kinds of particles that do affect the behaviour or trajectory of one or more of the inhabitants of that space, and in doing so are affected in turn. Such new particles might combine with the original particles to form structures that, unlike the original unstructured particles, do exist from each other's point of view, meaning that the compound structures affect each other, whereas the simple particles did not.

Notionally one could imagine particle types that jolt all the other particles into mutual recognition and interaction. Somewhat like fastening little magnets or scraps of camphor to floating corks that otherwise would ignore each other.

This last point, of particles that enable other particles to interact in particular ways, is not as fanciful as it sounds — if the idea interests you, you might like to read about, say, Higgs bosons and gravitational attraction, possibly in Wikipedia.

It would not be difficult to program cellular automata that model some such universes, but note this important reservation: one could program similar behaviour as resulting in different ways from different rules. Given such a possibility, any such universe might resemble an indefinite number of other model universes with similar behaviour, without really being the same. Each such universe, with its own set of rules, might look exactly like the others, but if one does not know the underlying rules, one could not be sure whether they are invisibly different. Accordingly we would remain indefinitely uncertain whether our prediction of the next move would be correct, or whether two automata suddenly would diverge from behaving identically.
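
To make the reservation concrete, here is a minimal sketch in Python, with rules of my own invention: two different underlying programs whose every observed step agrees, so that no record of past behaviour could distinguish them:

    def rule_a(cells):
        # always rotate the row of cells one place to the right
        return cells[-1:] + cells[:-1]

    def rule_b(cells, step):
        # identical to rule_a for the first 1000 steps, different ever after
        if step < 1000:
            return cells[-1:] + cells[:-1]
        return cells[1:] + cells[:1]  # rotate left instead

    world_a = world_b = (0, 0, 1, 0, 0)
    for step in range(1000):
        world_a = rule_a(world_a)
        world_b = rule_b(world_b, step)
        assert world_a == world_b  # indistinguishable, so far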

Just bear that in mind for now; I discuss the grue/bleen paradox later on.

Anyway, in such a universe one could not distinguish, constrain, or characterise the alternative possible underlying programs ("realities") by inspecting the behaviour of the individual independent entities. One would either have to experiment with them, interfering with their behaviour, say by introducing new classes of particles that interact differently with the different components, or by varying the entities' interactions; or one would have to observe a wide range of interactions, seeking effects that constrain the range of possible explanations or mechanisms.

When we cannot control the interactions, we are reduced to observation; that is the situation in cosmology and astrophysics, for example: we cannot go out there to, say, smash stars together, so we just have to watch more and more stars in the hope of seeing something that narrows the range of reasonable explanations.

Down here near Earth's surface, we are in a better position to interfere, and such interference for the sake of gaining information we call by names such as “measurement” and “experiment”.

By way of analogy, the behaviour of floating corks with suitably attached magnets might be difficult to distinguish from corks with suitably attached electric charges, and lights that flash so that they seem to jump about in the dark might be very difficult to distinguish from lights that do actually jump from place to place, unless one indeed undertook research by physical intervention (experiment) or by mathematical analysis.

Be that as it may, all such ideas are based on the assumption that some form of underlying reality supplies the constraints. However, all visible patterns of behaviour are underdetermined, meaning that there always are multiple distinct possible underlying realities or preceding events that notionally could in principle account for what we see. Effective research would involve varying the conditions so as to change the behaviour in ways that would exclude some of the possibilities, or reduce some probabilities, thereby reducing the underdetermination to some extent.

Another complication is the question of what counts as an underlying reality or origin of the system we see about us. There are whole classes of such concepts and suggestions. For one thing, the rules could be temporal, that is, time‑based: whatever exists, there must be something that existed before it, so either there never was a beginning (turtles all the way down), or it was Good Ol' Dick (turtle number 666666, with nothing below), or there was a Big Bang, before which there never was anything, not even time. Then there certainly could be no more turtles, so there never could have been a "before", any more than there could have been anything north of the North Pole.

And the same could be true of a West‑ or East pole. (Try working out why!)

Or an underlying reality could be topological, a looping history in which the universe reaches back to create (re‑create?) its own beginning. Like Ouroboros?

Or a Phoenix?

Why ask me? I don't know… 

You might find it amusing, perhaps even stimulating, to contemplate a concept of which I was not the author: Why did the chicken cross the road?

To get to the other side.

Very well, then why did the chicken cross the Moebius strip?

To get to the other. . . err. . .to stay on the same. . . Well, never mind!

Again, underlying realities could be rule based: the very concept of nothing existing might in fact prove not to be self‑consistent in practice, so that there always had to be some universe, no matter where, why, how, or what.

We might find that it is terribly difficult for nothing to happen, or nowhere to be — ever.

And also, the very idea of a universe with no rules, whether self‑generated or not, might in itself be self‑inconsistent. And the rules in question might simply follow from whatever it turned out to be that constituted that universe and those rules.

For example, consider our first little toy universe with its non‑interacting particles: with our G‑E‑V we could see all the particles travelling forever at the local equivalent of constant velocities in their geodesic paths: the logical equivalent of straight lines.

That sounds intuitively minimal, but in fact it takes for granted more assumptions than we might think of at first; for instance coordinates: for particles to move, they must change coordinates — changing coordinates is part of what moving means. And if they do not exist relative to each other or the magic outside observer, they don’t really have coordinates in their own universe. And it assumes continuity of the identity of at least some classes of entities ("that particle moving over there is the same one that we saw over here before its coordinates changed: it did not in itself change in nature or identity when it moved"). Shades of Zeno’s paradoxes!

This raises thoughts of Heraclitus with his: "No man ever steps in the same river twice". It assumes some aspects of conservation of inertia, some aspects of the flow of time, and certain constraints on the patterns and states of the behaviour of the particles (such as that the change of coordinates is continuous: a moving particle's next coordinates will be right next to the current ones, no matter how fast it moves).

Heraclitus also said: "panta rhei" (everything flows). Personally I suspect that to be an even more evocative line of thought.

But anyway, suppose that instead of the minimal attributes that we had assumed for our particles, we add a few more assumed attributes, such as individual masses, charges, rigidity, fields, radiation (electromagnetic, gravitational, and so on). Then all sorts of new things happen in the behaviour of the particles. They change their velocity (that is to say, they undergo acceleration, and may deviate from straight paths), they exhibit effects of forces and momentum. They attract or avoid or repel each other, collide and recoil or fuse, and generally begin to exhibit or participate in all sorts of events; they exhibit causal behaviour, doing things that, rightly or wrongly, but not necessarily unreasonably, we come to think of as the physical consequence of their respective natures, states, and coordinates.

These things happen because they begin to find things in their universe that for them had not existed before. Such constraints and events and manifestations give rise to concepts such as of causality and the consequences of the natures and individual circumstances of the entities.

In short, we find ourselves able in principle to predict following events (to extrapolate inductively, if you like) in the light of earlier behaviour: we observe, and can infer: cause and effect, or at least event and outcome, with precision limited by the amount of information and computation at our disposal or in existence in that universe. We find degrees of consistency of behaviour within circumstances, from which we can deduce, or at least conjecture on, some of the constraints on their behaviour, even if we cannot guess how far down the stack of turtles we need go to establish a full comprehension and explanation.

For all we know, we might not need any stacked turtles, just a very few self‑defining consequences of nothing.   

What does it take to make a dimension?

The important thing in science is not so much to obtain new facts
as to discover new ways of thinking about them.
Sir William Bragg

Now, think again of our magical empty universe that we have postulated, or possibly created: suppose that somewhere in that universe we magically release a solitary electron (as a convenient example of a presumably fundamentally atomic particle). With only that one particle in our toy universe, it still is not clear that direction or distance or time as such have any meaning at all. Without distance the very concept of a line, a one‑dimensional line, makes very little sense. Without any points of reference, we cannot say much about that electron, except perhaps that it is notionally immortal as far as we can tell. We cannot in principle say where the electron is nor even say meaningfully when it is, or what its momentum is, because in its space and time its only location is where it is; there is no other point of reference from which we can say: "That way!" or “Then!”, let alone give any coordinates that would amount to Euclidean points.

Whether "space" has any meaning in a universe without that first electron, I cannot say. I cannot even say whether it makes sense to speak of space at all where there is exactly one particle, never mind no particle. And even that is on the assumption that we, outside that universe, have some sort of G‑E‑V of our universe, a view that enables us to see our particle or particles wherever they are, without any observer effect or information creation to mess things up.

In other words magic.

In such a universe, I do not see how we even can say whether that lonely particle is moving or not, except that it is hard to imagine how it could be moving from where it is. I suspect that we cannot even say when it is. I am unsure how to give it any sense of time or vibration or anything in a universe in which no events can occur, but I leave such puzzles and definitions to the physicists, or perhaps to philosophers of physics. I am not even sure whether in such a universe concepts such as force or energy would be meaningful. Probably we would need extra concepts to specify them.

As you can tell, thought experiments are not necessarily as simple as one might expect.

If instead I had chosen a proton, things might not have been so simple (or possibly they would have been simpler), because a proton is complex: it contains at least three quarks and their associated gluons. Accordingly, a proton is not truly point‑like. But I chose not a proton but a notionally atomic lepton, so let that wait. And I am ignoring the spin of the electron, because particle spin always did leave me confused, even in our empirical universe.

It still does.

Given also that the wave‑like behaviour of an electron is hard to conceive in an empty universe, I ignore it similarly.

Now let us give that notional solitary electron a friend: another electron also not at any particular location or with any particular velocity, but so that each is well within the other's observational horizon. That means that they can interact, and will do so according to particular rules. Causal rules if you like. Perhaps those two electrons are parsecs apart, and perhaps microns apart, but they are not in the same place and in the same state at the same time. To simplify that concept, let’s assume that they have the same spin, which, according to the Pauli exclusion principle, will forbid them from being exactly in the same place.

In any case they will affect each other's trajectory and velocity, and will react appropriately to each other's spin and mass. Before there were two they did not even have anything like trajectory or velocity, or even history (whether spin and charge would have any meaning in isolation, I cannot even guess). But given two electrons, they must interact by gravity and electromagnetism and spin at least. Whether they do so by the intervention of Higgs bosons or other obscure particles, and whether those particles existed before there were two of our electrons, or whether just having a universe implies that they can pop in and out of existence by vacuum fluctuations, I cannot guess either, and will not pursue.

What does matter is that having more than one particle suddenly lends new dimensions to the universe.

New dimensions? Were there fewer dimensions before there were two particles?

In a universe of exactly zero particles, or exactly one particle, I am not sure how to make sense of the concepts of dimensions of up, down, sideways, forward and back, of rotation, pulsation, and passage of time, but with two particles there certainly is conceptual room for things that perhaps made no sense before there were two particles, particles that might love or loathe each other, might shove each other apart or draw each other together — effects that could result in accumulation, collision, repulsion, or annihilation. Distance now begins (only begins!) to gain meaning, in the sense of being represented by the line one notionally could draw between the particles.

And similarly time begins to gain meaning, whether it had had meaning before, or not. Momentum and energy might begin to make sense once there are two particles, even if the only momentum were nothing more than momentum towards or away from each other.

However, such terms still lack some of the meanings that they have for us in our richer universe. Even if you have not realised it yet, just these consequences of adding a second particle have changed that universe in ways more complex than I for one can assess. Whether one could argue that the universe began with the potentiality for such things, I cannot say, and it is not clear to me that anyone could say it with authority — certainly not without first stating some fundamental assumptions, assumptions that as far as I know, are not yet compelling — anywhere.

Magic is treacherous stuff.

I disclaim any sophisticated mastery of art or appreciation of art, but some of my favourites among artwork are those of Maurits Cornelis Escher. Many professional artists sneer, and jolly good luck to such in their naïveté, but I am unable to think of anyone else’s work that rivals his for sheer substance in various aspects that I personally value. And of these, the one I find most striking is the ability to suggest and capture dimensions and universes.   

Consider these two works:

[image: Escher's tessellation of angels and devils]

The first shows a tessellation of angels and devils, an art form of which Escher was a master. The striking thing about such tessellations is that, although the two populations occupy the same surface — the same space, as it were — when one looks at any individual, whether angel or devil, the other population fades into the background; it is hard to see both clearly at the same time.

One can imagine this as being like two universes interpenetrating each other, each being unaware of the other.

  * * * * *

[image: Escher's Rimpeling (Rippling)]

The Rimpeling (rippling) picture is more subtly creative, each addition contributing to new aspects. It begins with a disk of white on grey; this could represent almost anything. Then a tangle of black branches partly obscures the disk, suggesting the moon behind trees. But the trees are upside down: that suggests a reflection — well and good, except that there is nothing in the picture to represent a reflecting surface. Also, if it is a reflection of trees, those trees themselves are not in the picture; any trees the picture suggests must be beyond its top edge. But the reflections of some of the branches are distorted in ways that suggest that we see them in water, just after the surface was disturbed by waves created by two drops of water. But neither the drops nor the waves appear in the picture, any more than the water surface does.

All the effects we see in the picture are vivid and precise, but they emerge as indirect products of items that one tends to overlook. To me they seem analogous to the way that introducing entities into an empty, or at least sparse, universe, can create structures and processes not at first foreseen.

 

What Then, Is Physical Algebra?

 The best book on programming for the layman is "Alice in Wonderland";
but that's because it's the best book on anything for the layman.

Alan J. Perlis

Why should all this fuss, this groping after fundamentals, be worth the bother?

Because it puts us in a better position to think in terms of developing classes of algebras of physics. As I defined algebras before, whether mathematically or formally, an algebra is a set of objects or object types, plus a set of operations on those objects. One could regard games such as Chess, or Conway's Life, or Go, as algebras; the objects are the pieces, the boards and so on, and arguably the players; the operations are the rules that define the valid moves.
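
As a concrete toy, and no more than that, one might represent such an algebra in Python as a set of objects plus a dictionary of operations; the names here are mine and purely illustrative:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Algebra:
        objects: frozenset   # the pieces, states, or values
        operations: dict     # name -> rule: a function over the objects

    # A trivial formal example: small integers under addition and negation.
    toy = Algebra(
        objects=frozenset(range(-10, 11)),
        operations={
            "add": lambda a, b: a + b,
            "negate": lambda a: -a,
        },
    )
    assert toy.operations["add"](2, 3) == 5

    # In the same spirit, Conway's Life would be the algebra whose objects are
    # board states and whose single operation is the update rule.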

If we extend the idea of algebras to physical universes of objects and operations, or interactions that affect their behaviour, then we are in a better position to deal with some stubborn philosophical challenges from past centuries.

For an example of an algebra of physics, consider the universe of particles that I imagined as acting only according to their momenta. That would represent an algebra, though an impoverished one. A more substantial example would be the behaviour of matter according to Newtonian physics, such as we might find in an introductory textbook.

Note: the fact that we use a lot of numeric algebra in Newtonian physics is not the reason why I speak of an "algebra of physics": my reason is that Newton's physics involves items of matter and energy (sets of objects) and rules for how such items interact in particular types of events (operations on the objects).

And the question of whether Newtonian physics represents our universe perfectly or not, is irrelevant to whether it may be seen as an algebra, or whether it is useful to express it as an algebra.

To think in terms of forms such as algebras helps in dealing with concepts of: information, measure, logic, implication, hypothesis, truth, probability, numeric description of physical realities, or consequences of potential states and events. It also gives us a basis for reasoning inductively and abductively in science, instead of shackling our conceptions by restricting ourselves to deduction (and commonly invalidly at that).

All of those concepts are manifested or modelled in the states and disposition and nature of the physical objects, states and events: this is why I assert that mathematics, logic, philosophy, and information, are manifestations of physics and attributes of entities in physics — they are not the basis of physics. They are instead implied by the nature of events in physics. Even if we abstract them (meaning that we copy aspects of them, or something sufficiently close to those aspects) into isomorphisms or plesiomorphisms of the primitive entities, those morphisms need physical representations if they are to exist at all, ever, anywhere, anyhow, whether accurately or otherwise.

Is that important, do I hear you ask?

Think about it: for one thing, such a set of objects and operations puts us in a position to speak meaningfully in terms of what to expect from past events, and what to extrapolate into future events. It also permits us to estimate the relative probability of past events being the cause of present states. In other words it enables us to attribute causes and effects and implications and developments, instead of waffling ineffectually about statistically correlated observations.

It permits us to model situations, either symbolically or materially. It does not guarantee that our conceptions are exact, though it might suggest how far from exact they might be, and commonly it permits us to deduce to what degree our interpretations are incomplete, imprecise, or no better than convenient fictions.

And, as I shall show later, such algebras introduce the concepts of emergence, emergent behaviour, emergent phenomena, and similar terms.

Importantly though, a physical algebra need not be deterministic, even when it implies causal behaviour: even the most precise physical interactions between physical entities involve certain physically non-zero uncertainties, whether because of quantum mechanical considerations, or because of classical limits to the availability of the information necessary to determine the outcome. So, for example, in symmetry-breaking events, such as the toppling of a sharp needle that had been balanced on a smooth, hard surface, a typical physical algebra tells us that the needle will topple, but cannot tell us in which direction.

A sufficient reason why it cannot tell us is that neither the physical algebra nor any conceivably pre-existing information (that is to say, any parameters) can offer any implication; until the symmetry is broken, there is no future for that event — the reason that the future does not yet exist is that the information defining it does not yet exist. Whether it exists after the event is another matter, but underdetermination reigns there as well.
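
A sketch of the point, with a random draw standing in for the information that does not exist before the event; the numbers are arbitrary:

    import random

    def topple_balanced_needle():
        # That the needle falls is implied by the algebra; the direction is not:
        # no pre-existing parameter selects it, so a random draw stands in here.
        direction_degrees = random.uniform(0.0, 360.0)
        return "fallen", direction_degrees

    outcome, direction = topple_balanced_needle()
    # outcome is always "fallen", but no two runs need agree on the direction.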

As I see it, fundamentally non-determinate events, such as symmetry breaking and quantum-random events (that is to say, truly random events), make nonsense of any proposal, whether physical or philosophical, that space-time already determines any subjective future.

As for predictability in QM itself, a quantum particle in a superposition, contrary to common belief, is not really in two (or more) states at once. Rather, a superposition means that there is one state, with more than one possible outcome of a measurement, but that successive measurements of the same state have a very low probability of contradicting each other. That is one form of the consequence of the non-existence of the information prior to its creation by the passing of the event that eliminates the alternatives.
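
As a cartoon of that consequence, and no more than a cartoon (real quantum mechanics uses complex amplitudes and much subtler machinery), consider:

    import random

    a0, a1 = 0.6, 0.8        # illustrative real amplitudes; 0.6**2 + 0.8**2 == 1
    p0 = a0 ** 2             # Born rule: the probability of seeing outcome 0

    def measure(already_collapsed=None):
        if already_collapsed is None:
            # the first measurement creates the information
            return 0 if random.random() < p0 else 1
        return already_collapsed   # later measurements simply repeat it

    first = measure()
    assert all(measure(first) == first for _ in range(100))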

That all sounds very simple of course, but it may become a little confusing from the points of view of observers who exist many light years — or light-aeons — apart, but remain in contact by continual exchange of messages.

 

Cause, causality, and implication

Thus it seems Einstein was doubly wrong when he said, God does not play dice.
Not only does God definitely play dice, but He sometimes confuses us by
throwing them where they can't be seen.
Stephen Hawking

Cause, causality, and implication constitute a very vexed field. About all that one can be sure of is that whatever claims anyone makes about them, will cause some other people to contradict the claims. In fact the contradictions commonly will be so various and categorical that it is hard to characterise the field of discourse coherently. There is a growing field of study known by various terms such as "causal inference", but of course academic philosophy is slow in assimilating it.

So I won’t yet try to deal with the field definitively — not very hard anyway. And I certainly lay claim to very little originality in my discussions. The closest I come to originality is by not agreeing fully with anyone.

Including any of my own earlier conclusions.

The concept of cause

"Who art thou that weepest?"
"Man."
"Nay, thou art egotism. I am the scheme of the universe.
Study me and learn that nothing matters."
"Then how does it matter that I weep?"
Ambrose Bierce

 

Cause is such a common concept that it seems too obvious for definition. As a concept however, it is treacherous. Many philosophers argue, though not generally compellingly, that it is not compellingly definable at all.

Really serious arguments on the topic began more than 2000 years ago and some of those are not settled yet; in fact, very little about cause and causality is completely uncontroversial even today. I do not try to outline the field, let alone cover it; I just deal with aspects that occur to me here.

First let's discuss David Hume's argument of more than two centuries ago. He creatively questioned the views of major writers who preceded him. His own ideas developed during his career, which makes it hard to attribute any fixed views to him, and a generation gap of more than two centuries makes it tricky to interpret some of his terminology. For example, his use of the word "induction" seems to have been vaguer than I, for one, prefer.

Be that as it may, Hume argued that our only support for the concept of cause, in particular the concept of specific cause as we would see it today, is inductive: inductive along the lines of those apes in the cage. This was not all he said on the subject, and what he did say was a good deal more sophisticated than that, but it is the relevant aspect here. He rightly pointed out that, as we saw when banging away with our pistol, the fact that some particular thing appeared to happen in one particular way in the past, is not in itself any proof that it will happen in the same way in the future.

However, in his chapter: "Rules by which to judge of causes and effects" Hume at the same time asserted essentially the contrary, among other things: "The same cause always produces the same effect, and the same effect never arises but from the same cause." His wording in such topics was barely coherent in our current terminology however, and so was his reasoning.

Part of the problem is that a lot of physical and logical concepts that we now take for granted, were not well conceived in his day, so I am not sure how literally he meant that. After all, in his day such errors were at least partly reasonable, coming as they did, before subsequent advances in physics and mathematics; maybe in philosophy too. At that time they tended to take infinity for granted as a single concept, their concept of information was poorly defined, and entropy was not yet a word, let alone a posy of concepts.

In our day, when we assume an algebra of physics, or guess at such a thing, we can hardly be more than partly right at best, and perhaps dead wrong. To me, Hume's writings in reducing the concept of cause to something like mindless induction, seem to be opposed to the very idea of that sort of algebra. But even if his views had no merit it need not follow that rival theories of his day, or of our day for that matter, are closer to anything like reality; there always are more ways of being wrong than of being right.

And yet, it also seems to me that he was groping in the direction of what would have amounted to something very like an algebra of physics.

Not that I think he would have been happy with the concept of there being more than one type of algebra; he wrote his "Enquiries Concerning the Human Understanding" and related works roughly between 1748 and 1777. In those days the authoritative view of algebra was largely that of the first edition of Encyclopaedia Britannica, whose article opened with the following description (pretty well matching the concepts we were taught when I was in high school):

Algebra is a general method of computation by certain signs and symbols, which have been contrived for this purpose, and found convenient. It is called an Universal ARITHMETIC, and proceeds by operations and rules similar to those in common arithmetic, founded upon the same principles. But as a number of symbols are admitted into this science, being necessary for giving it that extent and generality which is its greatest excellence, the import of those symbols must be clearly stated.

In geometry, lines are represented by a line, triangles by a triangle, and other figures by a figure of the same kind: But, in algebra, quantities are represented by the same letters of the alphabet; and various signs have been imagined for representing their affections, relations, and dependencies ...

Not actually wrong as it stood, of course, but a far more limited concept than we apply today.

One way or another, Hume, a brilliant thinker in his day, rejected the view that was dominant in the mid‑eighteenth century: as he saw it, the concept of what I have called empirical induction was formally invalid.

And in his formal terms it certainly was invalid, but as I have pointed out, to apply formal principles in empirical science is hazardous at best — it tempts one into unsupported assumptions.

Note that Hume himself cautioned his readers: "And as the science of man is the only solid foundation for the other sciences, so the only solid foundation we can give to this science itself must be laid on experience and observation."

Let's think about how he blundered in his own key conclusions. If our formal assumptions, conclusions, and predictions are at odds with our observations, we cannot necessarily tell immediately which are wrong, but we can be sure that something needs adjustment.

Hume was thinking in terms of causality, the principle of cause and effect. He did not clearly distinguish between causality and determinism in the way that we do in our time, and it would have made little difference to his views if he had. I will speak of causality only as the idea that: if any of a certain class of things happens or a particular condition obtains, then a particular class of event, a particular type of outcome, is likely, as implied by the nature of the algebras of physics.

And furthermore, the outcome generally will differ from what would have been the case in the absence of the notionally causal event. It need not follow that every such outcome will be identical, and in practical circumstances it is not possible to define identical initial circumstances, so the question is largely academic, but we certainly can apply the principle effectively enough to live in a world that seems fairly comprehensible.

Make no mistake: it does not follow that every such correlation is causal, nor that correlation has to be 100% if it is indeed causal as we understand causality: when I breathe pepper it might not make me sneeze every time, but I still regard the, say, 90% correlation as causal, even though I may or may not sneeze whether or not I have recently breathed pepper; and I can support my opinion by studies of the stimulant effects of certain essential oils and other influences on the nerve endings in my nose.

In other words, I can adduce successful predictions in support of the idea that the pepper commonly makes me sneeze.
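
To make that concrete, here is a toy simulation in Python; the rates are pure invention, chosen only to illustrate an imperfect but strong correlation:

    import random

    def sneezes(after_pepper):
        p = 0.9 if after_pepper else 0.05   # invented rates, for illustration only
        return random.random() < p

    trials = 10_000
    with_pepper = sum(sneezes(True) for _ in range(trials)) / trials
    without_pepper = sum(sneezes(False) for _ in range(trials)) / trials
    # with_pepper comes out near 0.9 and without_pepper near 0.05: a strong but
    # imperfect correlation, which we may read causally once we know a mechanism.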

This might sound like ignorance of Popper's principle of falsification, but, because I reject the principle, as I shall justify in due course, that is not relevant here.

The first and most basic concepts of what I called algebras of physics are the modes of behaviour of sets of recognised object types, plus sets of recognised operations on those objects. The operations in this connection might amount to actions internal to an entity: the things it does by its own nature, or, on the other hand, we might be looking at interactions between entities: the things they do to each other.

Assume for example that we observe empirically that under given circumstances, particular particles interact in apparently more or less consistent ways, that is to say, with characteristic changes of state or interrelationships: throw potassium into water, and it fizzes, bangs, and flashes; hit a glass with a hammer and it breaks; permit an electron and a positron to collide, and they annihilate each other. No such example happens exactly the same way twice, even if we could repeat the action exactly in the same way, which in fact we cannot. But still, they do happen after their kind, time after time.

In principle we therefore can base an algebra on the assumption that such particles and their interactions represent some set of objects and operations that may constitute our relevant algebra.

Now, this is partly open to criticism on the grounds of terminology, or if you like, semantics. I shall not go into the matter in detail (there are whole books on the subject, books on semiotics and on the philosophy of science) but I discuss the main points superficially in the following paragraphs.

Notional examples of physical constants or theorems in the algebra of physics, variously defensible, might include the likes of "electrostatic like charges repel", "unlike charges attract", "gravitational masses always attract", “their strength of attraction is inversely proportional to the square of their separation”, “F=ma”, “rigid bodies cannot occupy the same space simultaneously”, and so on.
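
Two of those notional theorems, written as executable rules rather than prose (a minimal sketch in Python, using the usual textbook forms and SI units):

    G = 6.674e-11   # gravitational constant, N m^2 / kg^2

    def gravitational_attraction(m1, m2, r):
        # "their strength of attraction is inversely proportional to the
        # square of their separation"
        return G * m1 * m2 / r ** 2

    def acceleration(force, mass):
        # "F = ma", rearranged to give the acceleration
        return force / mass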

From this point of view, cause in an algebra of physics is the relationship between the objects operated upon or participating in an operation given their states at the start of the operations. The effect comprises the output states, the results, after the operations. In some cases, in refutation of Hume's pronouncement, the output states are not precisely predictable, even in principle, and then it might be better to speak of something like cause and outcome, rather than effect.

But it is not clear how much more useful that might be; not clear to me anyway.

But the fact that such an algebra might not be precisely predictive in all cases does not invalidate the concept of an algebra. There are indefinitely many examples, even in formal algebras, where outcomes of operations are not unique.

This resembles, or is analogous to, operations in formal mathematics, in which, say, integer division of any natural number by any natural divisor results in a quotient and a remainder, which might or might not be zero, depending on which numbers were chosen; or in which any of several tangents to a curve might be chosen by a given operation at a non-differentiable point.
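
The integer-division example, concretely:

    quotient, remainder = divmod(17, 5)   # (3, 2): a non-zero remainder this time
    assert divmod(15, 5) == (3, 0)        # and a zero remainder for these numbers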

Analogously, suspension of any suitable object, such as an apple, in a suitable gravitational field, such as we experience in an orchard, results in ("causes") the proverbial effect: the fall of the apple once its suspension fails. Failure of such suspension is a complex process or event, and is not precisely deterministic.

As I shall discuss, this view obviously is simplistic in various ways, quite apart from the naïveté of such convenient ideas as "falling down", rather than "accelerating along a segment of an elliptical trajectory" or something similar. Nor does it in everyday terms address concepts such as precision, unambiguity, and noise ("neglecting air friction" and the like). However, it is just an illustration, so I beg patience, if not pardon — there are more immediately relevant considerations.

There are all sorts of logical objections that conceivably could be raised against the idea of establishing, or even defining, cause, and indeed many have been raised from time to time, more or less independently, with various degrees of success, by many people, in many contexts. Examples of such objections include: what looks like cause might be coincidence, or delusion, or misinterpretation, or a transient or ill‑defined effect.

Such examples and counter‑examples teem in the literature. One impressive specimen is the grue‑bleen paradox, introduced by Nelson Goodman in the 1950s: suppose someone produces four objects, say glass balls. Two look blue and two look green and they are not otherwise distinguishable. The cause of their seeming to be of one colour or another, is the way they interact with ordinary white light and the way that light that has passed through or reflected from the balls interacts with our eyes and nervous systems.

However, we happen to be in error in our assumption about the colours of some of these balls in particular: in reality, though one ball is blue, and one green, the other pair are one bleen, and one grue.

A bleen object is one that looks blue till the first day of the year 2100, and thereafter looks green; a grue object is one that looks green till the first day of the year 2100, and thereafter looks blue. Each of those glass balls had been inspected daily for centuries and never given anyone any cause to see it as other than its apparent labelled colour. How can we tell which will change its apparent colour, except by waiting till the dawn of the 22nd century?

Equally, after the start of 2100, how can we tell which had changed colour? If we showed the balls to a bright young physicist born on the morning of the first day of the century, and when he was old enough, assured him that the apparently green one really was bleen, the resulting discussion might well be fraught with frustration.
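
One can even put the predicament into code; in this sketch (dates and names mine), no observation made before the changeover can separate the two pairs:

    from datetime import date

    CHANGEOVER = date(2100, 1, 1)

    def apparent_colour(real_colour, when):
        if real_colour == "bleen":
            return "blue" if when < CHANGEOVER else "green"
        if real_colour == "grue":
            return "green" if when < CHANGEOVER else "blue"
        return real_colour   # plain blue or green never changes

    today = date(2023, 4, 21)
    assert apparent_colour("blue", today) == apparent_colour("bleen", today)
    assert apparent_colour("green", today) == apparent_colour("grue", today)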

One thing on which we certainly could agree, whether we regard anything like that as plausible or not, is that if it could occur at all, it would upset our expectations of both causality and our algebra of physics — if it did not destroy them outright.

Another idea that would make nonsense of a standard view of consistent causality is the 19th century "Omphalos" hypothesis of Philip Henry Gosse: he pointed out, for reasons that have nothing to do with our topic here, that we have no way of knowing whether the world we see now was not created as a running concern, politicians and all, just a minute ago, complete with our mountains and geological strata, complete with light on its route through space, complete with our fossils, and the year rings in our trees and the memories in our brains, and histories in our books. If that were correct, then our impressions, both of our past and of causality in our world so far, however convincing when we examine them, would be illusory.

Again, suppose that a snooker player discovered that at the start of 2100, some of his balls started behaving peculiarly: some would change in density and elasticity. Some could change in inertia without any change in density; is that possible? Would it require any change in our algebra of physics? Think about that one...

Or yet again, our world could be totally chaotic, and our perception of even partial consistence, future or past, could accordingly be a delusion. If so, it is hard to see how even a delusion of consistence makes sense, or how anything matters at all, even temporarily. If, on the other hand, we do care, then that is why anything matters. As Bierce pointed out in the quotation in the epigraph: if it does not matter at all, then why should it matter whether it matters to us or anything else?

There are plenty of other examples of similar reasoning, some more charming than others, but, as well as we can, let's pass causally on for now: let us see how far Hume's criticism of the concept of cause is justified. We may at once grant his assertion that we depend on empirical inductive reasoning whenever we assume the reality of cause in the usual naïve sense: when we see:

  • that certain things keep happening in more or less the same way, and
  • we get a pretty good idea of why and how that happens in terms of a physical algebra, then:

we have some pretty good grounds to suspect that we have identified an example of what we reasonably might call causation.

As a matter of common sense however, we always need to remember how exposed we are to the risk of errors of various types. Remember the black swans: we seldom can tell whether we have examined enough examples, or examined the right population of examples, or are able to be sure that our sample is random or "fair", or in general non‑misleading.

What we really need, to justify any such categorical conclusion, is some sort of categorical implication of the nature of the set we are considering. And as a rule, in fact arguably invariably, such an implication is just what we cannot get; it might not even exist. Instead, we are reduced to concepts of induction, abduction, approximation, and probability.

Fortunately we have been able to achieve amazing results with such rickety tools lately.

Still, we cannot always tell whether we have examined our phenomena deeply enough; Newton's brilliant reasoning, abductive, inductive, and deductive, not only gave remarkable precision, but also gave us far greater power in dealing with our realities, than anything we had had before (and far and away, more than most things since). The tendency then was to assume that we had discovered what I would call a complete algebra of physics. When Einstein's work and the follow-up in relativity and quantum mechanics were developed, we found that, for some applications, we had been working on insufficiently precise and insufficiently sophisticated assumptions.

In a way, this is very like our erstwhile assumptions that Euclidean geometry was all there could be to geometry, whereas non-Euclidean geometries of various sorts hovered undiscovered all around us for thousands of years.

We do need to recognise the merit of Hume's view that formal reason alone cannot prove the reality of what we call efficient causality. However, he may have gone too far in appealing to custom and mental habit, asserting that all human knowledge derives solely from experience — from several points of view a questionable idea at best.

The first thing to bear in mind is that Hume's criticism amounts to pointing out the failure of the causal hypotheses to meet the standards of formal proof.

In this he was quite correct from several points of view.

However, he never proved the contrary: that efficient causality was necessarily a delusion. Let us grant that we cannot prove formally that there is such a thing as cause in the everyday sense — but that gives us no special reason to doubt the material reality of cause in general. The general power of abductive and inductive evidence strongly supports cause.

Although, as far as I could tell, Hume never mentioned Occam's razor by name, he implied the same principle: that we should not multiply essences unnecessarily. For example, he said in part: "And tho' we must endeavour to render all our principles as universal as possible, by tracing up our experiments to the utmost, and explaining all effects from the simplest and fewest causes, 'tis still certain we cannot go beyond experience ..."

And yet that very principle could be applied in the opposite direction, cutting the hand that wields it, so to say. Anyway, no such razor ever is proof in itself: it is a rule of thumb, not a formal justification, and a loose rule at that. And it leaves room to wonder whether all multiplications of essences are equally unacceptable: if, in the dark, I hear what sounds like hoofbeats, am I better off assuming as essences six black horses, three zebras, or one unicorn?

We cannot always tell which rival hypotheses multiply essences most economically. And if rival hypotheses notionally invoke equally many essences, they may be invoking different essences. And some essences, some concepts, are more believable, more powerful, or more productive, than others: which hypothesis is more valuable — the ancient assumption of four elements, or our current assumption of something like ninety, not counting synthetic elements?

Or, suppose we assume solipsistically that random essential concepts in our minds, or in my mind at least, create what seems to be a consistent, coherent causal world. Such creation could be argued to require a significantly larger multiplication of essences than to assume that what we seem to see necessarily reflects, within our capacity, something much like the actuality we fancy we see.

Or close enough for jazz anyway.

Yet, as I see it, even this is not Hume's fundamental false step. That lies deeper than denying our ability, by application of formal logic, to support our claims concerning cause. Worse was the assumption that in applied logic we can claim formal proof at all. Even our formal proofs of formal theorems commonly are impure: we implement them by physical, mechanical procedures, by mechanisms such as brains and calculators, and by models, visual symbols and constructions. Even if we grant for argument's sake that they were valid yesterday, it is arbitrarily inductive to assume that they will be valid again today.

We also must recognise that some of those anti‑inductive fingers point both ways: to be sure, formal proof cannot formally prove general assertions in applied logic, but formal proof cannot prove formal proof either. Formal proof depends directly or indirectly on derivation of conclusions from arbitrary axioms; it is possible for workers on such proofs to make mistakes; possible for them to propose inappropriate axioms, possible for the axioms themselves to conceal inconsistencies that render whole classes of proof invalid, or at least insufficient. For that matter, Hume's own assertions could hide errors.

Again, our inability to prove that an apparently (or even obscurely) causal correlation really is causal is no formal proof that it is non‑causal, any more than inductive reasoning can formally prove that it is in fact causal.

In short, even in mathematics, let alone in science, formal proof is at best a very, very tricky field, as Charles Dodgson pointed out in his brilliant sketch "What the Tortoise Said to Achilles".

Now what was it that established our conceptions of "rational" causality in the first place? It was a combination of empirical observation, with consistent patterns of events that made algebraic sense in terms of empirical observations and hypotheses. In our historic and classical past, we routinely invoked associations of events with "supernatural" causes such as those fabricated in superstitious belief in witchcraft or divinities.

It took millennia, and hosts of human sacrifices, tragedies, and false starts, for us to develop anything that begins to look like an empirically rational toolkit. Only then could we critically analyse and investigate causal hypotheses in anything like a commonsense manner, let alone rationally in the light of the results of scientific work, false starts, and progress. It was uphill work, because at every level we had to fight vested interests in quackery and delusion.

But, in the light of Hume's criticisms, in what way were such empirically rational tools any better than previous delusions?

Firstly they were no worse than Hume's arguments themselves: as I discussed earlier, Hume was applying his accusations to material systems, not formal systems, so that his axioms themselves really were no better founded than any other material assumptions. Reasoning about material systems implies empirical investigation and conclusion, and empirical work has little to do with formal proof. For one thing it is widely accepted, if not actually common cause, that empirical systems are underdetermined, so that ultimately one is limited to selection of hypotheses on a basis of principles that differ from formal proof, and assumptions that differ from formal axioms.

Curiously, given his rejection of naïve versions of "cause", Hume seems not to have thought in terms compatible with our current views on underdetermination; on the contrary, he said: " ...the same cause always produces the same effect, and the same effect never arises but from the same cause: this principle we derive from experience ..."

As I already have quoted Feynman as saying, some two centuries later: “A philosopher once said, ‘It is necessary for the very existence of science that the same conditions always produce the same results’. Well, they don’t!”

To some extent we deal with underdetermination through the disciplines of empirical science, generally along the lines of hypothesis, prediction, and verification or falsification. Such approaches intrinsically cannot banish underdetermination completely, so they cannot prove that our new ideas are correct, but they can eliminate large classes of invalid guesses.

In this way abduction, hypothesis, prediction, and associated expedients, have made the science of the last few centuries unprecedentedly successful in giving us power over our world.

In particular, empirical science does not even aim at formal proof, only at the generation, selection, and comparison of rival hypotheses concerning empirical observation. Proofs of Hume's type are not relevant in physical science; physical science is a field in which working hypotheses change continually, although, as far as we can manage it, progressively.

As a matter of historical context, the recent record of the working hypotheses of science, though spattered with instances of errors, blunders, bad faith, and downright stupidity, has been one of success unprecedented in our entire human past, sustained on that scale through our most recent two to four centuries, depending on who is counting; and there is no end in sight, mainly just continuing acceleration.

Causal chains and webs

You can’t proceed from the informal to the formal by formal means.
Alan J. Perlis

First, before trying to discuss “cause” in the philosophical sense, let’s clear up some confusion that arises from simplistic assumptions. People speak of some one thing “causing” some other thing, and thereby they immediately introduce difficulties. Commonly however they fail to recognise the difficulties that they themselves have created by their unspoken assumptions.

No matter what people assume, whole chains and webs of events must necessarily happen first or in parallel for practically anything to happen at all.

I already have discussed some aspects of correlation and causation, under the heading "Induction", but the topic has so many aspects that it inevitably remains treacherous. For one thing, causation itself is a slippery concept, and full of intellectual traps.

Practically any of the items I discuss is a “cause” in the sense that, if we had prevented that item, something different would have happened instead. We could say that whatever we did to prevent it then became the “cause” of what happened in its place. Sometimes it might be possible to rescue an outcome by adding other “causes”, but generally at the cost of greater changes elsewhere. This reflects the view that we cannot destroy information.

We can illustrate such principles beautifully with cellular automata such as variations on the game of “Life”; detailed description would take too long for our immediate purposes here, but I cannot offhand think of any sufficiently large, finite, non‑trivial pattern in the game of Life whose behaviour it is impossible to change by addition or removal of a suitably chosen single cell, though it sometimes would be possible to change some cells without affecting the outcome. It certainly is possible to find an “Eden” pattern, one that cannot be “caused” in the sense of arising from any preceding pattern, so that it must be specified in its entirety from outside; but that is a different matter. If such concepts of cellular automata are unfamiliar to you, you might find it helpful to begin by reading "Conway's Game of Life" in Wikipedia.
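
For readers who would like to see the point in miniature, here is a hedged sketch in Python, assuming the standard B3/S23 rules of Life; it removes one cell from a glider and shows that the two histories promptly diverge:

    from collections import Counter

    # One generation of Conway's Life (B3/S23);
    # a board is a set of live (x, y) cells.
    def step(board):
        counts = Counter((x + dx, y + dy)
                         for (x, y) in board
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in board)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    a, b = glider, glider - {(2, 1)}      # the same pattern, one cell removed
    for _ in range(8):
        a, b = step(a), step(b)
    print(a == b)   # False: the single removed cell has rippled onward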

So ignore cellular automata for now, and imagine a more familiar, proverbial example instead: a farrier's assistant loses his grip, causing his hammer to fall; the falling hammer distracts the farrier, causing the improper installation of a nail into a horseshoe; loss of the nail causes loss of the horseshoe, causes loss of the horse, causes loss of the rider, causes loss of the message, causes loss of the battle.

But what caused the loss of the nail? A generation before the falling of the hammer, the father of the assistant of the farrier met the assistant's mother‑to‑be by accident because her scarf blew away. If that had not happened the assistant never would have been born, and instead a different assistant farrier would have stood by, who probably would not have dropped the hammer at that exact moment and the shoe would not have been lost. So that gust of wind that blew the scarf perhaps twenty years earlier, was what really caused the loss of the battle, right?

And yet, any of an indefinite number of still other causes could have meant that there was a different rider, who lost the message without losing the horse, or lost the horseshoe without losing the horse. And furthermore, the non‑birth of that particular farrier's assistant would have an indefinite number of other consequences; for example, there might not have been a farrier or a battle in those exact places and at those times at all.

Or suppose that the hammer did fall and the horse was indeed lost, but the same accident sent another rider on an inferior horse after the fallen rider. He stopped to see what had happened to his predecessor, and offered his horse in exchange for the lamed, though otherwise better, horse and saddle, because they would be worth more after care and healing. So the message did get through in time to have the crucial effect, and caused the general to win the battle. Now what won the battle?

As you can see, assigning a unique and independent cause to any event rarely is practical, if ever, and the same is true for the assignment of a unique and independent effect to any cause. An indefinite number of other "causes" of such types could have lost or won the battle. Another similarly trivial accident could have caused a stray shot to kill the general, or the opposing general, just before the message arrived. In my personal opinion, it is flatly impossible for any noticeable event in real life to have only one cause, or for any one event to cause only one effect. For a simple causal event one would have to resort to a notional universe containing a very small number of particles. That is not us, and certainly not on a planet like Earth.

One could of course do some finicky redefinition of what a cause is, dismissing indirect causes: the dropped hammer only caused the noise, and the noise, not the dropped hammer, was what distracted the farrier; so a musket fired behind the fence could have caused a similar defect in the horseshoe — and so on.

We usually expect big effects to have big causes, but that is not always the case.

Examined in such terms the very concept of cause tends to fall apart. Remember Aesop’s fable of the mountain that laboured, only to bring forth a mouse? Analogously, we seldom think in terms of something large specifically causing something small; as a rule we are correct; something large might produce many small things — a major earthquake might destroy many tonnes of eggs in many places — or it might produce something large: a quake might collapse a mountain to destroy a city — or a mouse.

But a big quake causing nothing more significant than a broken egg?  I cannot imagine that.

Ambrose Bierce, with characteristically perverse penetration, wrote the following amended fable on that theme:

A Mountain was in labour, and the people of seven cities had assembled to watch its movements and hear its groans. While they waited in breathless expectancy out came a Mouse.
"Oh, what a baby!" they cried in derision.
"I may be a baby," said the Mouse, gravely, as he passed outward through the forest of shins, "but I know tolerably well how to diagnose a volcano."

What caused what? At a pinch, we might imagine a combination of minor events causing a big event, but as a rule a big event needs either a big event to cause it, or a large accumulated potential that a small event can trigger. For example, a shout can start an avalanche, but only if the snow has built up in advance. And pushing the snow back up would not unshout the shout.

Call that the trigger effect — and do not expect unpulling the trigger to undo the effect.

Some legal systems recognise associated difficulties in ascribing the operative cause among a combination of causes: two pedestrians collide and one of them staggers into the street; a passing car knocks him down and passes over him before the driver can stop. The car immediately behind doesn’t see him in time either and also goes over. The pedestrian is found to be dead. Was the fatal injury caused by the first car or the second? How does one allocate the blame? First car or second car, or the victim himself, or the other pedestrian in the collision? Or all? Or none? Various legal systems would differ in their interpretations and actions. But for each one, the resolution of the problem would be an arbitrary compromise at best.

For an example of an uncompromising resolution, consider certain aboriginal groups in Africa at least, in which, I understand, every death was asserted to have been caused by witchcraft, and the culprit had to be identified by the local witchdoctor, with varying penalties.

In his book “The Demon-Haunted World”, Carl Sagan pointed out that we should beware of too smug a view of such naïve savagery: witch hunts in our mediaeval world were hardly less uncivilised or unintelligent; many a victim of witch-hunts was burnt or hanged on no better evidence than that someone’s cow had died. After all, why else should it have happened?

Not all "civilised" legal systems are clean of such taints; compromise has its points — think of say, the abuses that belatedly led to "good Samaritan" laws in some countries.

For a more complex scenario, try this: Alph, Billy, Charlie, and Dennis crashed unhurt in an aircraft routed over the desert, landing among remote dunes.

Alph so bullied the others that each privately decided not to hold out for rescue while Alph lived.

Billy surreptitiously put a slow leak into Alph's water canteen. Charlie did not know of the leak, so he dissolved some cyanide in the water. Dennis did not know of the others’ attempts, so he dissolved some gelatine in the water while the others slept. This solidified the water once it had cooled, so that it could not leak, but also could not be poured out, giving the impression of a vessel containing no liquid; yet if it were dug out with a suitable rod and swallowed, the jelly could have kept Alph alive just as effectively as liquid water would have, had it not been for the cyanide.

Alph set out to walk towards the river, telling the others to wait by the wreck, or he would skin them when he returned, but as soon as he had gone they hurriedly left in another direction, taking the rest of the water from the wreck and abandoning him to his fate. Each of them, for his own reasons, expected Alph never to return, but each guiltily refrained from telling their mates what they had done. Alph was within walking distance of the river if only he had had enough drinking water in the canteen to get him so far, but not knowing of the cyanide or the gelatine, he died of thirst before he could reach the river, or reach the water that would have been in the wreck if the others had not removed it.

One way or another the story came out, and Billy, Charlie, and Dennis were charged with Alph's murder. Each pleaded not guilty, though some reserved pleas of attempted murder, or argued that nobody had done anything that killed Alph, so that in fact there was no murder at all.

Billy argued that his leak had done Alph no harm, because it was not the leak that had deprived him of water; Alph had had enough water with him all the time, and the leak would in fact have saved him from the cyanide that would have killed him had it not been for the gelatine.

Charlie argued that Alph had not died of cyanide, nor even known of its presence in the water; since he himself had done nothing but administer cyanide, he could not possibly have been guilty of harming Alph in any way.

Dennis argued that the jellied water could have kept Alph alive till he reached the river: his jelly had kept the water from leaking away, and would have spared Alph a quicker death from the cyanide that Dennis had not known about anyway — if only Alph had elected to be saved by it. Alph had died of his own ignorance, not of murder by cyanide or absence of water.

And yet, though Alph could not have survived any of those attempts on his life individually, he had not died of them collectively, but of unrelated thirst.

Still — only if the bungling killers all had refrained, could he have survived. But no individual attempt had succeeded more than any other and there had been neither collusion nor conspiracy. And what about the plea that some of the actions had prolonged Alph’s life, however briefly? Ultimately, which of the four were guilty of what, if of anything? Alph of suicide, or any of the others?

Arguably the moral thing to do would be to arrest the desert as the murderer. Or the mother of the mechanic whose negligence had caused the plane's failure: if she had not had the affair with the mechanic's biological father ...

To assign guilt of murder, you need to identify deliberate actions that directly caused the death. For a would‑be murderer simply to celebrate the death does not constitute murder, nor even attempted murder.

Now, there are several aspects to making sense of dilemmas of those various types. One such aspect is the difficulty of defining what a direct cause might be, given the problem of defining the string of causes within the web of causes. Another is working out what it is about one event or circumstance that, from the viewpoint of a particular observer, makes it amount to a cause in our everyday sense. A third is how we can define a cause in the sense required for scientific empirical investigation.

The first of those problems of definition is partly psychological and practical, even where the relationships are clear and understood. Our view of any situation is limited: we can see or predict only a small number of factors as relevant, so we cannot say how many things could reasonably affect any outcome.

So we see a particular action or event as a simple cause of a particular event: “I pulled the alarm cord to prevent the train smash.” or “His leaky fuel‑tank caused the forest fire that killed 97 people.” Or: “She stuck pins into the wax model, and that was the cause of his dying of prostate cancer.” And so on.

Such examples, which might not even be accurate, let alone reliable, deal with poorly defined events that affected multiple outcomes of poorly defined circumstances, most of which weren’t even suspected by the witnesses.

It is not that the events were not real, nor necessarily that the concepts of the causal connections were not real, but that they were poorly understood, and their causal connections were hardly ever more than part of an indefinitely complex web of causal interrelationships and confounding factors. And generally the outcomes were grossly unrecognised or minor effects in a major complex.

For example, if one of those pins in the wax model had pierced a finger, that might have been the unrecognised cause of the death of the spell weaver by septicaemia; the cancer might have had nothing to do with the pin, or even with the intended victim's death at all; unless there was an autopsy, possibly no one would have suspected any possibility of cancer. Or instead he might have died of a stroke when he was told that someone was sticking pins into a model of him.

Things become far more troublesome in empirical science, in which we teach the young researcher to use controls. The idea of a control is to reduce the differences between the treatment under investigation and its alternative, ideally to a single variable, or a coherent combination of variables. If we cannot reduce them so far, then we need more controls. The intention is to be able to assert with confidence that the chosen variables are what cause the predicted effect.

That is of course a hopeless oversimplification of experimental theory, but I am not trying to cover the entire field, so please be tolerant.

Unfortunately, even as it stands, that is not sufficient. There are so many variables that there are many ways of producing wrong predictions or even nonsensical interpretations. One of my personal favourites is the schoolbook demonstration of the lighted candle standing in a basin of water. Invert a glass cylinder over the candle and lower it to stand in the water. When the flame goes out, the surrounding water gets sucked up into the cylinder, "proving" that the oxygen absorbed by the candle flame had reduced the volume of the atmosphere in the cylinder.

That experiment, though charming to school children, is radically misleading. The oxygen consumed in combustion in the cylinder has hardly anything to do with the gas volume and pressure, because it is largely replaced by the same volume of carbon dioxide, but that uncontrolled experiment has remained a favourite for generations. The experiment ignores the heating of the air in the glass by the flame, followed by its cooling when the flame is extinguished; and it ignores the reduction of the volume of carbon dioxide by its solution in the water. That experiment could be put to better use as an example of the various misleading assumptions that one may encounter in science, and of why notionally strict controls are not generally possible and of why experimentation theory can become horrendously complex, demanding other means of analysis, such as sophisticated statistics.
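
A rough worked comparison makes the point; the figures below are assumed, round numbers, not measurements. Taking wax as roughly (CH2)n, each CH2 unit burns as CH2 + 1.5 O2 -> CO2 + H2O, with the water vapour condensing out, and a candle flame dies when the oxygen falls from about 21% to about 16% of the trapped air:

    # How much of the water's rise is thermal, and how much chemical?
    # All figures are illustrative assumptions.
    T_hot, T_cold = 333.0, 293.0            # K: air sealed warm, cooling to room
    thermal_shrink = 1 - T_cold / T_hot     # ~12% volume loss on cooling

    o2_burned = 0.21 - 0.16                 # fraction of the trapped gas
    co2_made = o2_burned / 1.5              # CH2 + 1.5 O2 -> CO2 + H2O
    chemical_shrink = o2_burned - co2_made  # ~1.7%, before any CO2 dissolves

    print(f"cooling: ~{thermal_shrink:.0%}; combustion: ~{chemical_shrink:.1%}")

On such assumptions the cooling accounts for most of the rise, and the celebrated oxygen for a percent or two at best.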

For the least arguable idea of cause, one might confine the discussion to a single event or situation, such that it reflects a given physical principle, say Newton's assertion that F=ma, that is to say: Force equals Mass times Acceleration. So, we could refer to that principle in considering say, the causal event of a club hitting a golf ball. We could apply a few concepts such as the ball's trajectory and derive the force that the club head exerted on the ball. The question of the force that the golfer's arms applied to the club we then would regard as a separate and different causal event, and only of indirect relevance to the stroke of the ball.
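
As a worked example under assumed figures (a regulation ball mass, a typical drive speed, and a guessed contact time), the average force follows directly from F=ma, with the acceleration taken as the change of velocity over the contact time:

    # Average force of club on ball, from F = m * (dv / dt).
    # The speed and contact time are assumed, illustrative values.
    m = 0.046     # kg, mass of a golf ball
    dv = 70.0     # m/s, ball speed off the club face
    dt = 0.0005   # s, assumed duration of club-ball contact
    F = m * dv / dt
    print(f"average force: about {F:.0f} N")   # roughly 6400 N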

But all the causal events leading up to the ball's trajectory would amount to a causal chain.

And all the events that affected the causal chain indirectly, so‑called confounding events, such as whether the golfer was distracted by his caddy's munching on an apple, would be parts of a causal web of events.

But we would not generally undertake to demonstrate the whole causal chain or web, only the last, most local, very few events or states.

Philosophically this approach greatly simplifies the concept of cause: to the extent that we could estimate or calculate the various items in the causal web, we can derive the outcome of interest meaningfully.

However, there are important principles involved in this view. We find that:

  • we can accept the concept of webs of observed or predicted events being causal; we need not assert it as being inarguable, let alone deterministic.
  • differences in parameters such as forces or coordinates cause differences in outcomes.
  • such differences commonly have quantitative aspects: big differences tend to cause bigger changes in outcomes sooner than little differences do.
  • the natures of such differences in the parameters amount to whatever information affects the outcomes.
  • in practice the information available to any observer, sentient or otherwise, usually is a small fraction of the physical forces that materially lead to the outcome of any event in a causal web.
  • in theory a great deal of variation is flatly unavoidable because quantum events making up reality involve intrinsic unpredictability; this implies non‑determinism because in the nature of things some of the information necessary to determine outcomes intrinsically never could exist at all, whether it would have been available to any observer or not.
  • even without quantum unpredictability, as I show elsewhere, limits to the information existing in classical physical systems, let alone to information accessible to measurement, imply limits to the precision of physical determination. This applies both in Newtonian and Einsteinian physics.
  • such effects do not forbid causality, but they inescapably affect such things as determinism, the creation of entropy, and the arrow of time.

Now, back to correlation and causation.

We must first bear in mind the foregoing limitations on the concept of causation in any sense. Having done that, if we find that a given preceding event occurs frequently, even invariably, before the consequent event, then we need to consider it strongly as a candidate cause. That alone is not proof; the pattern could also reflect a common cause. Say I buy some eggs every Thursday at the same shop as Ken, whom I happen not to know, and he buys some an hour later, and this happens repeatedly. One observer might conclude that I cause Ken's purchase. However, the primary cause is simply that we both buy on Thursdays because we both know independently that that is when fresh eggs are delivered to the shop.

We might not even know of each other's existence, but notionally a G‑E‑V could see the correlation and understand its cause, a cause that would not have much to do with one causing the other. It does not follow of course, that in a case such as this one, there was no mutual cause, or at least causal influence: for example, by behaving as regular customers, we might be encouraging the shop to stock fresh eggs on Thursdays.

At the same time, there could be all sorts of contributory minor causes that Ken and I were hardly aware of: our respective reasons for wanting eggs at all, fresh in particular. But our proximate reason would be the freshness of the Thursday eggs: one common cause for two independent effects.
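
The pattern is easy to mimic in a toy simulation; the probabilities below are invented, and neither simulated buyer ever consults the other:

    import random

    # Two buyers respond independently to the same delivery schedule
    # (the common cause); the probabilities are invented for illustration.
    random.seed(1)
    me_days, ken_days, both = 0, 0, 0
    for day in range(1000):
        fresh = (day % 7 == 3)                 # Thursday delivery
        me = fresh and random.random() < 0.9
        ken = fresh and random.random() < 0.9
        me_days += me
        ken_days += ken
        both += me and ken
    print(me_days, ken_days, both)   # strong overlap, no mutual influence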

Not all causes have multiple effects in that form; notionally some could act in a simple one-to-one sequence. This is strictly true only in principle, because it is arguably impossible for any material cause to have just one effect, but, to get the idea, imagine a series of toppling dominoes, each knocking over the next. Here we deal with what we might call transitive causation (sketched in code after the following list):

  • If domino A falls it knocks down domino B
  • If domino B falls it knocks down domino C
  • Therefore, transitively, knocking down A causes knocking down of C, and indeed knocking down of any following dominoes.
  • In the latter case, transitively knocking down B, and any other intervening dominoes, is a collateral effect, whether desired or not.
  • Though they are not generally taken into account in considering the transitive knocking down, there always will be other collateral effects, such as the noise made by the falling dominoes. And it is possible for one domino to knock down two dominoes, causing the line of fallen dominoes to split into two chains of falling dominoes.
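
A minimal sketch of the idea: treat the causal web as a directed graph with edges from cause to effect, so that transitive causation is simply reachability. The five-event web below is invented, with a split at C:

    # A toy causal web: edges point from cause to immediate effect.
    web = {"A": ["B"], "B": ["C"], "C": ["D", "E"], "D": [], "E": []}

    def effects(cause, graph):
        """All events transitively downstream of the given cause."""
        seen, stack = set(), [cause]
        while stack:
            for nxt in graph[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    print(effects("A", web))   # {'B', 'C', 'D', 'E'}: one push, four falls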

Yet again, some effects intrinsically require multiple simultaneous contributory causes. Think of the string of a musical instrument: to be of use it needs to be under equal tension from two ends at once. Pulling on just one will not do.

Those three types of component causes: linear, propagating, and combining, are the major contributors to the structures of what I call causal webs. I do not develop the theme much here, but the concepts seem to be of increasing importance in fields such as decision theory.

 

The Emergence of Emergence

We know very little, and yet it is astonishing that we know so much,
and still more astonishing that so little knowledge
can give us so much power.
Bertrand Russell

This might sound like a prudent time to get out of here, but really, it is just the beginning. One idea to deal with straight away is the very concept of entity. Another one is emergence.

Here again I exceed my own imaginative capacity. Whether “entity” had any meaning in the universe without that first particle, or even whether an empty universe necessarily contains itself in itself as an entity, I cannot say. I cannot even say whether it makes sense at all to speak of entities in a universe where there is exactly one entity, assuming that such a thing might be possible in principle. And even that is on the assumption that we have some sort of magic G‑E‑V that enables us to observe without defining any location or having any other observer effect that messes things up.

Pure magic again, of course. Until someone can demonstrate some such thing in principle at least, I do not believe in any real form of G‑E‑V. In practice as opposed to principle, I flatly disbelieve it.

There are confusing rules about speaking of different electrons in a universe where electrons are all identical as far as we can tell, but there are two kinds of plain vanilla electrons: negative electrons and positive (that is to say: positrons), and their charges and the fields of those charges are consistent enough to label them for our purposes, and their spin is a story in itself. Already we can recognise each of the two electrons as an entity, and we also can recognise the two electrons as a pair; from that very fact, that pair is an entity too. Aspects of their positions and trajectories already can be discussed as entities.

Entities essentially imply information. Exactly how that information can be conserved in a universe with only an indefinite space plus two electrons, I am unsure (much less with a single electron), so I will not follow that line of thought, but I suspect that just by starting with a universe with at least two particles, necessarily with different coordinates, we are breeding space at a huge rate, something to do with c, the speed of light. And I bet that in the process we are breeding energy and particles as well.

It seems to me that I am describing something like the Big Bang, which notionally had emerged from some sort of point‑like event.

But I am not sure of that either. And it is another subject, not vital to my theme, as far as I can tell, so let’s leave it there.

The main thing at this stage is that by adding a second particle we have created more than just two of one kind of particle in a universe. Things can happen in that new universe, things that could not happen in what previously was a less complex universe.

As soon as we have at least two entities, we get what I call emergent effects. For one thing, we get emergence itself; and besides that, a new kind of entity now exists: a pair of electrons. There will be much more to say later about emergence. You need not bother to interrupt to tell me that what I say contradicts many prominent philosophers who lay down the law about emergence; they contradict each other too, and there is plenty more that they are catastrophically wrong about as well, and I don't have the time to nursemaid them in everything.

Mind you, I myself am not sure about everything; for example: it strikes me that one might argue that even before adding the second electron, adding the first particle already produced emergent effects that were not possible before. But I do not immediately insist upon that. I still am unsure about what space itself might be, or what all its implications are.

Anyway, the effects that those two electrons have on each other may be slight (but how much is one to expect from a universe of just two particles anyway?) and it might be hard to put a finger on some of those effects, but if we add a third particle to our universe, then yet more, completely new, effects and entities emerge.

For one thing, the presence of three point-like particles that are not necessarily on the same straight line, introduces the concept of a plane as well as a line. And they bring the n‑body problem into existence, and an n‑body system is as far beyond the 2‑body system as having a second particle is beyond a single‑body universe.

Arguably it is vastly further beyond.

It is not clear to me that a three‑particle system can define space and matter as we experience them any more realistically than a two‑dimensional universe could. So it might be good to imagine a fourth particle as well. And suddenly we have a new emergent concept: three‑dimensional space as well as line and plane. I am of course omitting concepts such as Riemannian spaces and the like, because I cannot see that they are as yet of much relevance in the present limited context.

 

Nothing with no time, and no time with nothing

"The geometry, for instance, they taught you at school is founded on a misconception."
"Is not that rather a large thing to expect us to begin upon?" said Filby,
an argumentative person with red hair.
"...You know of course that a mathematical line, a line of thickness NIL,
has no real existence... Neither has a mathematical plane. These things are mere abstractions...
Nor, having only length, breadth, and thickness, can a cube have a real existence."
"There I object," said Filby. "Of course a solid body may exist. All real things —"
"So most people think. But wait a moment. Can an INSTANTANEOUS cube exist?"
"Don't follow you," said Filby.
"Can a cube that does not last for any time at all, have a real existence?"
Filby became pensive.

H.G.Wells.   The Time Machine

 

 Time replies on "Killing Time"
There's scarce a point whereon mankind agree
So well as in their boast of killing me;
I boast of nothing, but when I've a mind
I think I can be even with mankind.
Voltaire


 

In discussing any of these concepts of emergence, I have not been counting time as a dimension; this is not because I deny time the status of a dimension, but because I do not feel up to that topic here. Inclusion of time in the system would entail the concept of world lines (or time lines if you like). That is an interesting field in itself; in his brilliant novella “The Time Machine”, H. G. Wells anticipated world lines by introducing the concept of “an instantaneous cube”, and I commend it to your attention as one example of how far ahead of his time some of his ideas really were.

But let’s steer clear of such concepts for now, and take the time dimension for granted. If you omit the time dimension, then things get messy! We need time to bind all sorts of concepts together. I suspect that time too, is emergent from the existence of particles, or perhaps fields, but I cannot support that strongly, even as a concept, let alone a conjecture.

Some schools deny time except as a delusion, because most of the basic physical descriptions of events work without taking time into account, and most of the rest work equally well if you reverse the direction of the flow of time. However, there are concepts and classes of event that do inescapably work in one direction only, so, whatever time is and however it works or behaves, I do accept its existence. More on that later.

But this line of thought and its implications do not stop there. As I write, our model of three or four bodies by now comprises a few-body system, where "few" could have many meanings, but certainly would mean more than two bodies.

If not all our particles are the same particle, nor of the same particle type, but there are, say, of the order of a handful or a score of different types, and they can interact in different ways, then we commonly find complex structures emerging. Just as we can build cities from buildings (architecturally, anyway — real cities need humans and other components as well), and we can build buildings from bricks, and bricks from mineral particles, so quarks and gluons and the like can compose hadrons, and hadrons and leptons can compose atoms, and atoms can compose molecules.

And molecules can compose the mineral particles that compose bricks.

And if there are enough particles for gravity to overcome their separate momenta, the particles can combine into planets.

Or stars.

Or solar systems, nebulae, or galaxies.

And all those things are entities, as I defined entities earlier.

In most cases entities imply perceivers in various roles, but the perceiver need not be a conscious entity, any more than the tree falling in the forest needs a Berkeley or a god to perceive the fall or its consequences, nor than a supernova needs an astronomer a million light years away to observe it a million years after its explosion.

This denial of time is peculiar, not least because time is implicit in one of the most fundamental of Newton's equations: F=ma; force equals mass times acceleration, or equivalently, a=F/m. This tells us how to calculate acceleration, but acceleration itself is defined in terms of time and change of velocity.

Time then, can be expressed in suitable units as:

S = √(2d/a)

where S stands for time (seconds, if you like) and a and d for acceleration and distance; that is just the familiar d = ½aS² for uniform acceleration from rest, solved for S. The details don't matter: the point is that such basic concepts in physics define time directly, whether in Newtonian or Einsteinian terms.
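
A trivial numerical check of that relation, using free fall near the Earth's surface as the assumed case:

    import math

    # S = sqrt(2d/a): time for a body to cover distance d from rest
    # under uniform acceleration a. Free-fall figures for illustration.
    a = 9.8   # m/s^2
    d = 4.9   # m
    S = math.sqrt(2 * d / a)
    print(S)   # 1.0 second, as expected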

So we don't have to go far to find justification for time as a physical concept. The fact that in many concepts we can eliminate the time term is irrelevant; what matters is not that in some relationships time disappears, but that in some basic relationships time appears directly.

Another point is that c, the speed of light in vacuum, is taken as arguably the most fundamental constant in physics; accordingly time, determined in terms of the distance that light travels, can be defined unambiguously.

Killing time is a crime.

 

Media, Messages, Observations, & a Hint of Semiotics

But words are things, and a small drop of ink,
Falling like dew, upon a thought, produces
That which makes thousands, perhaps millions, think;
'T is strange, the shortest letter which man uses
Instead of speech, may form a lasting link
Of ages; to what straits old Time reduces
Frail man, when paper — even a rag like this,
Survives himself, his tomb, and all that 's his.
Lord Byron.  Don Juan

Though radio signals from a transmitter consist of photons, the transmitter is not made of photons: it is made of metals and other components. And the stone from a catapult is not a catapult. Such entities, such systems, differ in nature from what they emit, and differ from the behaviour of the humans that receive the stone many metres away, or the signal from possibly thousands of kilometres away.

The photons themselves are not the message either: the same photons could convey an arbitrarily different message, much as the ink on paper that expresses the private affection of a love letter, could equally convey a declaration of a war that kills millions. The effect of any such message might take the form of behaviour, say laughter, lunching, or launching a missile, and each of those effects is emergent in that none is a component of a source or emitter or radio set or a catapult or ink, nor is it even comprised of such things.

The cliché that the medium is not the message is largely unexceptionable, though not necessarily universally true in all respects; but the medium itself is not necessarily the materials composing it, nor limited to conveying only one single message, nor is the message limited to one medium. Different forms of emergence might produce different emergents, or produce functionally identical emergents in indefinitely different ways and contexts.

This has important implications for the nature of our universe, in that in a huge range of contexts, it is the basis for the concept of underdetermination.

So a lot of the dismissal of the concept of emergence (of which more later) arose from differences in the assumptions on which the definitions were based. And a lot of the assumptions were arbitrary, inconsistent, incoherent, unjustified, or flatly wrong. One argument against the concept of emergence is that it adds nothing to ordinary physical cause and effect. That however is breathtakingly unperceptive. "Ordinary physical cause and effect" transcends ordinary comprehension and imagination in so many ways that it leaves one, if not actually speechless, at least inadequately coherent. One might as well argue that art, architecture, science and technology are indistinguishable from the components that go to make up daubs and masterpieces, lumps and sculptures, rockpiles and hovels and palaces, illiterate scribbles and sublime literature, dead dogma and living science, and shards and atomic force microscopes.

More not only is different, but is different in so many different ways as to beggar my imagination at least ...

And such differences are the inescapable and essential basis of emergence from "ordinary physical cause and effect" by the universal interrelationships between entities.

Entities in emergence and reductionism

We realise thus that:
big whirls have little whirls that feed on their velocity,
and little whirls have lesser whirls and so on to viscosity
in the molecular sense.
Lewis Fry Richardson

 

So far we have been discussing the types of emergence that result from simple introduction of additional entities. And we encounter emergent effects or emergent entities wherever we put entities together; furthermore, we commonly get different emergent effects and entities if we put them together differently. Introducing more entities than there were before, or conversely separating entities from each other, whether physically or conceptually, intrinsically introduces emergence, or if you like, emergent entities.

For a familiar example consider the story of the old man who walks into the barber shop: "Haircut and styling please!"

"Certainly sir; please sit down and let me take your hat."

On removal of the hat, it emerges that the old man has just three hairs.

In a triangle on top of his head.

Each hair standing erect and separate.

"Errm, how do you want them cut, sir?"

"Flat top! All the same length!"

Snip!

"How's that Sir?"

"Fine. Now comb them; path on the left"

The barber gingerly applies the comb between two hairs, and Oh! One hair pops off the scalp. "Sorry, sir! It was very loose."

"Never mind; better part it in the middle."

Another attempt. Another hair lost.

"And how do you want me to comb it now, sir?"

"Oh, never mind combing it; just leave it all tousled."

That was an example of emergence in reverse. At each loss, scope for other classes of possible relationships either emerges or vanishes. With fewer than three hairs, parting on the left is hardly meaningful — indistinguishable from parting in the middle; with fewer than two hairs, parting is hardly meaningful at all. With just a single hair or none, tousled hair is hardly meaningful either. So we get the emergence of a class of hairstyles that cannot tousle.

Or vice versa.

Emergent effects, as I use and define the term are precisely those that we could not get without one or more of the following:

  • putting entities together or separating them, or considering them together or separately
  • in general changing the interrelationships of entities in ways that produce effects other than the effects to be expected from the entities in isolation, or
  • changing the numbers of entities in any relationship.

What is more, we commonly get differently emergent effects by combining or splitting or rearranging identical sets of entities in different ways. In short the concept of emergence also deals with aspects of:

  • the mutual relationships of entities and their combined effects in those relationships, and
  • the information that determines the relationships between the component entities; for example new entities may emerge without addition or subtraction of a single elementary component, just from rearrangement of component entities that are themselves all emergent entities. One illustration might be the J. M. Barrie play ("The Admirable Crichton") in which different community structures emerge, change, and re-emerge successively, while retaining the same community members.

Nor, in general, conversely, could we make such changes without emergent effects.

It is important to recognise that emergent effects are themselves entities. A river is more than just a lot of water molecules plus a channel in bedrock; freeze that water to make a glacier, and it changes many things, including the form of that channel in the bedrock. A glacier channel is not a river channel, and once the water and ice are gone, the two dry channels are easy to tell apart.

And a galley is more than just a lot of planks fastened together.

These are nearly fundamental concepts; some might be primitives for all I know.

If you think that I am over‑stating the case, reducing the concept to meaninglessness through over-application, then think carefully about this: the very concept of thermodynamics depends on emergent effects — without large numbers of particles, generally gigantic numbers, such things as entropy and temperature have little meaning at best; the concepts become degenerate, like the arithmetic sign of sin(1/X) as X approaches 0.
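
To see that degeneracy concretely, a two-line sketch: the sign of sin(1/X) flips faster and faster as X shrinks, so that "the sign near zero" stops being a meaningful question, much as temperature stops being a meaningful property of a handful of particles:

    import math

    # The sign of sin(1/x) oscillates ever more wildly as x approaches 0.
    for x in (0.1, 0.01, 0.001, 0.0001):
        print(x, math.copysign(1, math.sin(1 / x)))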

Or like parting the two hairs on the left, perhaps?

Another bee in the bonnet of many of the early writers on emergence was that to be emergent, an effect must be unexpected, in fact unpredictable in principle. That idea however, is incoherent to the point of meaninglessness; for one thing it introduces the concept of predictability, and does so without considering context in its definition. Either it takes the observer into account in defining the phenomenon, whereas the observer commonly is extrinsic to the phenomenon, or it assumes that the unpredictability is part of the phenomenon, which would imply that the phenomenon is non-causal, which it clearly need not be. For example, asserting the impossibility of predicting oceans from a study of the water molecule is a favourite gambit of supporters of that assertion, but we find that whenever enough water molecules get together in a basin large enough, we get an ocean.

Plainly the molecules know well enough in advance how to make an ocean, even if the philosophers find the prediction beyond them; such emergent outcomes are causal.

And the same is true for most other emergent effects, such as crowds, clouds, crystals, and nuclear explosions; the details differ, but the generic patterns remain true to form, even though the details vary more or less randomly according to type.

If the participating elements can work it out, it plainly is not impossible in principle to work it out, even if it is not humanly feasible.

When quantum randomness plays a part, then that affects matters of determinism, but patterns of causality remain in force.

From such points of view we find ourselves steered into concepts of semiotics. Whether we find it easier to regard semiotics as a branch of physicalism or the other way round, physicalism as a branch of semiotics, might be regarded as a matter of perspective, but either way, the connection is unavoidably intimate.

Here I am not concerned with the versions of semiotics that have become popular topics, often superficial or vacuous, in art, literature, sociology, and similar soft fields. I deal with the basic concepts of physicalism, causality, information, syntax, semantics, and other intimately interrelated topics.

For example, a single A4 sheet of blank paper plus some milligrams of ink from a pen, offer scope for wide ranges of emergent effects. The ink might be distributed at random, or in no readable language or recognisable picture, with no effect other than a waste of ink and paper. But that same ink could be distributed onto the paper in such a pattern as to express an indefinitely vital message. Or it could produce distinct, different, even opposite, effects if the same ink is rearranged to express different messages or pictures. Depending on the ink's distribution, alternative messages could present more information, or less, but no message or picture would be possible without the paper and ink, nor, as a rule, without the pen and the writer and the readers.

 As for what the messages might be, that is itself open to definition and interpretation. Consider these examples:

These three patterns all have the same number of visible characters and spaces and "ink"; do they all convey the same meaning? Do they all embody the same amount of information?

[the three patterns, presented graphically in the original, are not reproduced here]

The next three also have the same amount of "ink" in their characters on about the same amount of "paper", and their characters are in the same arrangement as in the former example; do they all convey the same meaning? Do they all embody the same amount of information?

[the second set of three patterns is likewise not reproduced here]

The main point remains constant: information embodied in mutual relationships between entities includes aspects that could never have existed without those mutual relationships.
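
One narrow aspect of that question can even be made quantitative. Character-frequency (unigram) Shannon entropy is blind to arrangement: any two texts built from the same characters score identically, however differently they read. The strings below are my own invented examples:

    from collections import Counter
    from math import log2

    def unigram_entropy(text):
        """Shannon entropy (bits per character) of character frequencies alone."""
        counts = Counter(text)
        total = len(text)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    a = "dog bites man"
    b = "man bites dog"
    print(unigram_entropy(a) == unigram_entropy(b))   # True: same characters,
                                                      # different meaning

Whatever distinguishes the two messages lives entirely in the arrangement, exactly the information that mutual relationships embody.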

That is the essence of emergence as I see it and as I use the concept in this essay.

By this definition it sounds stupidly simple, but, as I shall show, and as the previous arrangements of characters hint, in practice the concept is potentially almost indefinitely complex, and in principle very, very important, especially in identification of entities. Hardly anything in material existence happens without emergence of many sorts. Emergence may be seen as a class or aspect of consequence or implication in causality, yet emergence is not generally deterministic, nor yet fully determined in its consequence or implication.

In the course of writing this document I necessarily did some peripheral reading, and found in particular a great deal of argument about what emergence is or is not. The poverty, confusion and question‑begging of much of such discussion was so great that the very term “emergence” became widely derided. Some authors still dismiss the concept as meaningless or at best useless.

Such authors grossly misunderstand and misrepresent concepts of emergence from information and entropy. What emerges cannot come about without the subvenient, and sometimes supervenient, relationships; therefore the class of emergent effects is inescapably real and important.

As you may see, a large part of what I have come up with amounts to an independent formulation of a version of physicalism: its metaphysical basis is that everything is physical or depends on ("supervenes on") the physical; however, I regard even that very dependence as physical, so to mention it separately might be seen as redundant.

Notice that this version of physicalism neither asserts nor denies that emergence from a physical system implies that what emerges is of the same nature as the generating system. I reject, as unjustified and incoherent, the demand that the product of the emergence must be "novel". I reject too, as an incoherent and unnecessary denial of a negative, the view that for anything to be classed as newly emergent, it must be unpredictable by any analysis focused exclusively upon the component parts. Some aspects of some examples might or might not be thus predictable other than in retrospect, but such unpredictability neither need apply to all emergence, nor need it be fundamental to the general concept of emergence.

For one thing, that idea fails to distinguish between predictability to the viewer, and predictability to the system itself. Perhaps a human without foreknowledge could not be expected to deduce and predict the various emergent aspects of water molecules that lead to the emergence of oceans, but the molecules seem to have no such constraints; they simply do what water molecules do, which amounts to calculating the nature of oceans on the fly. The same applies to any computationally challenging examples of emergence. The algebra of physics is  not much given to compromise.

Another claim, made by some very prominent parties, is that there is no such thing as emergence at all, because everything can be represented as interactions between quantum mechanical particles. This however is about as sensible as saying that a pylon is no different from a cable because both may be made of steel, so that if you know enough about rigid steel pylons, you accordingly know all about flexible steel cables.

Those same steels are made from largely similar molecules of similar atoms made from hadrons and leptons that comprise all our elements; to claim that therefore there is no difference between them might have merit in a neutron star's quark soup, but not on the surface of our planet, and the fact that at the quantum level all of them are products of the same kinds of causal events, while true, does not affect the concept that, not only is it true that "More is Different", but that "different arrangements are different".

Or on similar assumptions, you might argue that because poetry and other art and literature cannot exist without the appropriate media in appropriate relationships, and all such media are comprised of the same ultimate particles and quanta in their appropriate relationships in turn, therefore we know all about art if we understand quantum mechanics. By similar reasoning, a bold quantum theorist could undertake to play unbeatable go or chess, or pentominoes, or tennis, on the basis of quantum mechanical realities.

That simply is not how it works.

Many principles are involved here, but one of the most salient is that it would be a fallacy of composition to assume that because the ultimate components of an entity are elementary, the entity too, is elementary and consists in nothing but simple components. Not only can one see complex structures built of simple, or notionally simple, items, such as bricks, Lego blocks, or atoms, instead of leptons and quarks, but one also can see complex sub‑assemblies such as prefabricated walls, windows, or engines, built into yet more complex entities such as factories in which one performs operations different from the operations that are appropriate to assembling the factories.

And as for breaking eggs to make omelettes, or, if one is a hen, assembling eggs to make chicks .... Try mentally tracing the nature of the operations back to the original atoms, and the atoms back to the QM elementary particles. One repeatedly needs to change mode at different levels throughout the hierarchy.

Many examples are possible at many levels. If one builds a house of bricks, the bricks in the structure are not emergent from the house; the “house‑ness” of the house is emergent. In fact, we could build functionally identical houses from widely different bricks. As long as the bricks do not undergo changes in the building process (say by being fused together) they remain bricks, but the house still remains the house, irrespective of the nature of the bricks or other components as long as they are adequate. And there are all sorts of considerations if one replaces, adds, moves or removes say, just one brick: the house‑ness is hardly affected, but the house itself is no longer all the same in every possible sense.

 

Putting together timelines to please Theseus

Two years' research can often save you ten minutes in a library
Anonymous

Similar considerations arise from the parable of the Ship of Theseus. For clarity I retail it here in my own version: Theseus commissioned a ship to take him and his companions on the expedition to deal with the Minotaur. On taking delivery, he took the ship for a trial run. His verdict was that the ship was fine, except for one of the trunnels that needed replacing. The shipwright privately thought it was nonsense, but it is ill arguing with formidable, short-tempered princes, so he extracted the trunnel and put in an identical one. At this point no reasonable person would deny that the ship had been changed just a little, or, contrariwise, argue that therefore the ship was not the same ship as ever.

After another trial, Theseus wanted a spar replaced. The shipwright prudently replaced it rather than annoy the prince, and the process was repeated on one component after another till not a single component remained that had been part of the original ship.

Now is it still the same ship?

If not, when did it stop being the same ship? When the plank was replaced that marked the point at which more than half the original ship's material was no longer the same, perhaps? Are we then to say that that plank made the difference between one ship and another? That plank looked just like most of the others, including in particular the plank that it had replaced, so that sounds like special pleading. Once again, any reasonable person would say that there never had been a point at which the ship stopped being the same ship; all the replacements were just like those that might have been made in routine maintenance; we don't find ourselves with a new ship every time a rope abrades a few microns of wood off a block.

Well then, it would follow that the ship is unchanged. It looked no different and at no point stopped being the same ship.

Irrespective of its components, its shipness had at no point changed.

That ship was an emergent entity, and remained the same emergent entity until there came some definitive change: say, someone erasing the name (say “Argo”? Tradition is not clear on the point) from the stern and writing a new name, say “Ariadne”. Whether that makes it a new ship or not is a matter of preference in context; some ships are repeatedly renamed for one reason or another. In fact, the name might not have been written on the ship at all; in the pre-classical past ship names were not always physically written on the vessel, in which case any change in name would have been only in the minds and mouths of the people; it would not directly have changed the ship at all.

Meanwhile, as Theseus finally sailed off, the disgruntled shipwright reflected that there really had been nothing wrong with any of the parts, so, as he had stored them, he reassembled them exactly as they had been, economically getting the exact ship he had started out with, with the same materials in the same places as before.

Which is not strictly true of the vessel in which Theseus had left.

But before the reassembly, no one would have called the pile of parts on shore a ship at all: its material components were unchanged, but its shipness had vanished; its emergent nature would have been a pile of scrap, not a ship.

And at no time had the replacement ship stopped being the same ship. At no point had it forfeited the relationships that had constituted its shipness. In fact, not one of the changes would have been different from changes that could have been applied during routine maintenance.

Suppose the shipwright had surreptitiously marked each component of the original ship with an inconspicuous identification, and put an equally inconspicuous distinguishing mark on the replacement part as he installed it; then for anyone properly informed, but a newcomer to the scene, it would be possible to identify the reassembled ship as the first ship and the second one as a different ship. But any observer who had recorded the whole history of the process, would have said that the ship that had sailed off was the same ship as in the first place, and the reassembled parts now emerged as a new ship.

Suppose again that after a while the alpha ship, the one that had originally been assembled, and whose parts Theseus had dismissively left behind, was no longer wanted as a ship, so its components were disassembled and reassembled as a house. And later, the house might be dismantled and reassembled as a bridge over a narrow chasm.

Still the same physical materials, but in neither case would Theseus, or even the shipwright, confuse the new construction with anything that had preceded it, whether in its form as a new ship, or as a rival for recognition as any ship at all.

By this time we are tempted to assert that, if we adhere to standard concepts, the question of the identity of objects is one of semantics rather than of physical fact, and that the semantics are context-sensitive. That is a valid example of one semantic view, but it is not the exclusive essence of the matter; in fact it is not the operative concept in this ship's history. In each case the difference would have been the effect of emergence, and of the continuity of the interrelationships between the components participating in the emergent effect.

Many writers in this field (I do not assert that all of them are philosophers) either dismiss the Ship of Theseus problem, or make heavy, practically mystical, going of it. And yet, pretty problem though it is, they generally fail to recognise its significance.

The problem is the failure to recognise that many of the entities that we recognise in our world are emergent, and, in their emergence, are distinct from their parts. A novel is not a jumble of letters. A book containing the novel is not the novel either, and the letters are not the book. If I erase a suitably chosen “E” from the text of a copy of “The War of the Worlds” it will not cease to be that novel, and if I then replace the “E” it does not suddenly turn into a new novel.

Entities that are comprised of arrangements of other entities, and that we recognise as particular entities, exist in their own right (in case you have forgotten my definition of existence, better return to re-evaluate it). That means that while the cloud in the fixed position over the mountain remains, it is the same cloud even if the water inside it is no longer the same as it was just a few minutes ago, and now forms another cloud downwind. The ship still has the same shipness, the same identity, if a plank has been replaced, and even if a new mast has been added.

Such emergent essences as the shipness, houseness, cloudness, or other attributes of compound entities have their own world lines, and the world line of a compound entity changes in its three-dimensional cross section or silhouette as its history proceeds. The Sequoia tree that now towers over the surrounding forest, having grown from a seed that germinated possibly thousands of years ago, never discontinued its treeness, its Sequoia-ness, in all that time, but if we could look at photographs taken of that tree at five-hundred-year intervals, we easily might think them to have represented different trees. It started as a seed with a mass of a few milligrammes, and now masses more than a hundred billion times as much.

 

Prediction of the Knowable and Unknowable

You taught me language; and my profit on't
Is, I know how to curse. The red plague rid you
For learning me your language!

William Shakespeare: The Tempest

In this essay I use the term “emergence” in a small number of limited and largely mechanistic senses that each time suffice for the then current context. For my purposes, largely contemptuously, I ignore other senses of the word. I also ignore arguments on the point, as being irrelevant, whether sound or unsound. In fact, I had intended to ignore the controversies altogether, but some are so widespread and so unsound that I found I had at least to concede them the discourtesy of explaining why I dismissed them and accordingly why they are irrelevant here.

Among the commonest delusions about emergence is that, to be emergent, effects must be intrinsically unpredictable by deduction from the nature of the component entities of any system that produces an emergent effect. This is mental confusion with the basic concept, which is that an emergent effect cannot emerge from any system simpler than the minimum necessary to generate it, whether the emergence is regarded as predictable or not.

Furthermore, predictability is not a well-defined term in this context. Roughly speaking, it is a relationship between the subject (the predictor) and the object (the emergent effect). Just because one predictor cannot see a world in a grain of sand, does not mean that no one can; or that no one could inspect a hydrogen atom and an oxygen atom, and predict an ocean.

Consider our paper and ink for example: omit either, and no writing emerges, and without writing, no message emerges. Extra paper plus extra ink permit greater ranges of messages to emerge. When the shipwright was putting the alpha vessel together, it would not have occurred to him either to predict or wonder whether he was in fact working on the components of a house or a bridge.

Extra confusion could arise from the fact that some emergent effects might be real and distinguishable all right, but not be observable or explicable to all recipients without special abilities or equipment.

We already have considered the remote islander who is unequipped to imagine the function of a radio transmitter, of which he can see nothing more functional than some dim indicator lights; and he has no idea even that the lights are there for information, not illumination.

How much better-off are we in our turn? We observe subjective consciousness as an emergent effect of the human brain. No one has yet produced a cogent and coherent explanation of the nature or function (if any) of that subjective consciousness, but we routinely demonstrate in multiple ways, that any physical effect on the physical function of the brain, such as from drugs, sounds, darkness, education, or violence, affects the subjective consciousness. In that respect we are not much superior to our islander who lacks comprehension of the radio transmitter.

One popular example of the claimed impossibility of predicting the nature of an emergent effect is the denial that anything one could observe about an isolated water molecule could enable anyone to predict that having huge numbers of such molecules could give rise to liquid, to turbulence, to solid ice, oceans, flow, waterfalls and all the emergent effects we observe in the amazing behaviour of one of the simplest molecules in nature.

Assertions of such impossibility or possibility I see as not only arbitrary and intrinsically nonsensical, but also irrelevant to the question of emergence as I view it. The denials are not logically compelling, either as a matter of principle or as a matter of fact, or even of non-trivial interest. Such things cannot be established by bare assertion.

They also are not useful distinctions between emergence and non‑emergence. No matter how complex the emergence, whether intrinsically predictable or not, emergence never has been demonstrated to occur in any way other than by physical and causal relationships and interactions. Accordingly the predictability of emergence is limited only by:

  • the knowledge of the nature of the interactions,
  • the information available, and
  • available computing power.

Given those, the interactions are necessarily as predictable or observable as anything else in physics, and the available information limits nothing but the precision of the outcome. The fact, for example, that water molecules behave consistently in all circumstances implies that they behave according to their operations in the algebra of physics, and that our inability to predict all the consequences of their behaviour reflects our own limitations of information or of computing competence; the molecules, and the systems in which they participate, are not similarly limited, so the unpredictability of the emergence is not intrinsic.

My rejection of the assertion that prediction or explanation of emergence is impossible in principle leaves me with the need to clarify how far we can in fact predict emergence, and explain emergent events where they occur. Commonly such prediction and explanation are qualitative rather than precise, but that limitation applies to other physical predictions and explanations as well.

For example, in principle a rainstorm as an emergent event can be predicted from sufficient knowledge of the physics of the atmosphere, including the physics of water, and that knowledge explains the event as well. But in practice rainstorms are not predictable in fine detail, such as where each drop will fall, though we nowadays can predict weather with accuracy that would have been incredible in my own youth. However, whether we, as humans, are able to predict or explain rainstorms does not define the principle of their emergence.

Analogously, consider a simpler system: we can predict that, given a chessboard and 32 dominoes, each of a size to cover two adjacent squares, it is possible to cover the board with them. However, we cannot predict exactly which pattern any particular person will choose to cover the board with; the number of possible patterns is very large. We know that if we remove certain combinations of squares from the board, then we can cover the remaining squares with fewer dominoes, with every domino on the board fully covering exactly two squares, but that removing certain other combinations of squares will render impossible any pattern that exactly covers every square and no more. However, of those combinations that do permit perfect covering by dominoes, by far the most will permit many patterns of dominoes.

Most also will require a certain amount of experimentation or insight to exclude, though all will depend on the relationships between squares, the outcomes being emergent from those relationships. Consistently with my criteria, changing the relationships changes the problem, but in all cases, with sufficient thought, one can tell which arrangements would meet the challenge.
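For anyone inclined to experiment, the covering question is mechanically checkable. Here is a minimal backtracking sketch in Python (my own illustration, nothing canonical), representing the board as a set of (row, column) squares; for speed of the exhaustive search it assumes a four-by-four board rather than the full chessboard:

    # Can the remaining squares be exactly covered by 2x1 dominoes?
    def can_tile(squares):
        if not squares:
            return True                           # nothing left: a perfect cover
        r, c = min(squares)                       # first uncovered square
        for partner in ((r, c + 1), (r + 1, c)):  # lay a domino rightwards or down
            if partner in squares:
                if can_tile(squares - {(r, c), partner}):
                    return True
        return False                              # no placement covers (r, c)

    board = {(r, c) for r in range(4) for c in range(4)}
    print(can_tile(board))                        # True: eight dominoes suffice
    print(can_tile(board - {(0, 0), (3, 3)}))     # False: removing two opposite
                                                  # corners (both the same colour)
                                                  # defeats every possible pattern

Change which squares are removed, and you change the relationships between squares, and with them the answer; which is the point at issue.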

Another form of emergence can arise from situations in which it really is not in principle possible to predict certain aspects of the outcome, because determinism is not involved. We can predict confidently that a sharp, symmetrical chromium pin balanced vertically on the tip of a similar pin in a vacuum will tumble sooner rather than later, and that it will in fact tumble rather than falling smoothly and vertically, but we can predict the direction neither in practice nor in principle.

We would have difficulty enough merely balancing the pin in the first place.

This pin problem is an example of symmetry breaking, a very general class of problems in physics and philosophy. Commonly their precise outcome is unpredictable either in practice, or in principle; this is because the information necessary for the prediction simply does not exist, not because existing information just happens to be unavailable.

Furthermore, even when we can manage some partial prediction and explanation, it does not follow that we can do so completely. We cannot generally predict the exact extent and time of a rainstorm, nor how many drops shall fall in each yard. But that inability, much like our inability to predict the details of the falling of our chromium pin, does not deny our ability to predict certain aspects of emergent effects.

Another question is that of "downward causation": whether an entity whose existence is the consequence of the interaction of more elementary entities can affect the nature or behaviour of its own component entities. This question has paralysed and fascinated generations of philosophers, causing them to waste much ink and bandwidth. They tend in particular to be obsessed with “mind” as an emergent effect (which I do not dispute in several senses at least, though I do not undertake to define mind, and though I do not accept that any concept of mind that I have as yet seen is either definitive or fully explanatory).

The relevant point here, however, is that mind, whatever it might be, in some sense affects the behaviour of body. In other words mind and body physically affect the nature and actions of their own components, and those effects include examples of downward causation. One observable difference in behaviour resulting from downward causation is that the harbouring of a conscious mind (or not, as the case might be) could affect the choice of reply, by the body that might host the mind, to a question such as “are you aware of any subjective impression that you have a conscious mind?”

Whether the entity questioned comprehends the question, and whether its reply is honest, amount to a different matter; what is relevant is that if an entity has such a conscious impression, and can process the question in terms appropriate to a reply, it can answer "yes", and if not, it can answer "no". It then would be reacting according to its own emergent subjectivity, or its lack of any such thing. That would be another example of downward causation.

Let us ask ourselves: is downward causation, or feedback, surprising? The principles of feedback, both positive and negative, are very familiar. Let’s not waste time on the emergence and downward causation of mind as such — mind and its causation being too poorly defined, and possibly too difficult, for us to deal with at present; we can however model downward causation with simpler emergent systems.

And, surprise! In practice we find plenty of examples of both downward and upward causation of different levels, intensities and kinds. In the epigraph I quoted Lewis Fry Richardson’s remark from a century before my time of writing, concerning turbulence in the atmosphere. Turbulence of all kinds is a typical example of emergence: you cannot get turbulence from isolated particles. But when you have a fluid of huge numbers of particles in turbulent motion, that turbulence consists in numbers of vortices, where each vortex is an entity, an element, that emerges from the relative motion of large numbers of molecules. Each vortex affects the behaviour, not only of elements at higher levels, but of elements at lower levels in the turbulence as well. Their interactions cause effects of momentum, charge transfer, density, refractive index, temperature, state, and more.

In short, feedback in every possible direction. That is hardly surprising, in the light of the fact that arguably every interaction in nature affects both entities in the interaction.

Now, some argue that if we have upward as well as downward causation, then that will imply circular causation, but that is only true if one is slovenly in the definition of entities and instances of entities, not to mention circularity. I say more about entities elsewhere, atomic entities such as electrons and humans, and highly tomic entities, such as crowds and clouds, but let that stand for now.

Meanwhile, when critics attribute circular driving of events, or circular causation, to emergent systems that entail both upward and downward causation, they overlook the question of the sequential nature of causation. Although some people deny the concept of cause‑and‑effect entirely, no one I know of denies that whatever we might reasonably describe as cause, does precede effect, and that each effect in turn is a new cause that will have new effects. But when the new effect is of the same nature as the original effect, does that imply circular causation?

Hardly — the effect might indeed be of the same kind as the cause, but it is not the same instance as the cause. So such a system is not circular, just a sequence of events in a cyclically driven system: a different concept — stop driving the system, and it will eventually stop or diverge from its cyclic pattern of behaviour. For example, consider eddies in a stably turbulent stream, or the behaviour of a piston in an engine: stop the stream or stop the fuel supply, and the sequence of events, of emergent effects, stops as well.
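The point is easily modelled. A toy sketch in Python (assuming nothing about real eddies or engines, just a damped state fed by a periodic drive) shows a system that cycles for exactly as long as it is driven:

    import math

    state = 0.0
    for step in range(200):
        drive = math.sin(step * 0.3) if step < 100 else 0.0  # drive cut at step 100
        state = 0.9 * state + drive     # each new state is caused by the previous
        if step % 25 == 0:              # state plus the drive: the same kind of
            print(f"step {step:3d}: state = {state:+.3f}")   # event, never the same instance

While the drive persists, the state oscillates indefinitely; once the drive stops, the damping factor shrinks it towards zero. Nothing in the loop is circular; it is simply one caused event after another.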

There is nothing circular about such a cyclic nature.

Emergence and epiphenomena

So, naturalists observe, a flea
Hath smaller fleas that on him prey;
And these have smaller fleas to bite 'em,
And so proceed ad infinitum.
Thus every poet, in his kind,
 Is bit by him that comes behind.

Jonathan Swift

The relationship between emergence and epiphenomena is another popular topic. As with most concepts in this field, definitions of epiphenomena vary, but roughly speaking, an epiphenomenon is something caused by a system, but that does not affect the system that caused it. Again, some definitions of emergence exclude epiphenomena from emergent effect status, but this too I reject categorically, as being irrelevant as well as questionable. A phenomenon either emerges from (that is to say, as I see it, is emergent from) its system, or it does not. Whether it has any functional or causal connection to the system from which it emerges is neither here nor there. It is in any case difficult to demonstrate that something is absolutely epiphenomenal in the sense of being functionally or causally independent or irrelevant.

For example, consider an early example of an epiphenomenon, namely T. H. Huxley’s steam whistle on a locomotive. It is a favourite example attributed to him, though he never used the term "epiphenomenon"; in his day it was not much used in that sense as far as I know, usually being used as a medical term for irrelevant signs or symptoms. But to be sure, Huxley used the whistle in the sense of an epiphenomenal item: he asserted that the whistle does not affect the function of the engine, though anyone unfamiliar with the concepts might think it more essential to the train than the far less striking and dramatic internal bolts and pistons.

Well, the whistle certainly is not necessary for the propulsion of the locomotive, and in fact you would notice no effect on the propulsion of the train if you removed the whistle, except perhaps an increase of delays due to collisions. But if the steam jet out of the whistle were directed either forward or back, it could in principle have some propulsive effect, either positive or negative; and in practice the whistle’s consumption of steam has a greater effect on the propulsion of the locomotive than the sound of its whistling has.

And neither the whistle nor the steam propulsion would have occurred at all if there were no subvenient system such as the steam locomotive. The subvenient system is the system on which the supervenient system depends: without the locomotive and its steam supply, the whistle cannot whistle. The example is trivial, I accept, but to claim that to be emergent, an effect must neither be epiphenomenal nor involve downward causality, is pure mental confusion, non‑cogent and unsupported.

Nor may we be too confidently dismissive of such notionally epiphenomenal effects. Some of the early small steamships actually could exhaust their steam supplies by sounding their horns too freely, and remain helpless till their steam pressure built up again.

Levels of emergence

The only thing to prevent what's past is to put a stop to it before it happens
Boyle Roche

More: in contradiction to another class of claim, there certainly can be multiple levels of emergence. Consider water molecules once more: isolated molecules behave fairly simply according to the local environment of radiation, gravity and the like. Add enough molecules, and there can be collision, attraction, and repulsion effects, and more. When enough molecules are close enough to each other, then, as the concentration of molecules increases, gaseous behaviour emerges, including convection, transmission of sound, and turbulence. With sufficiently increased concentration of molecules the emergent effects include condensed matter in various forms of liquid or solid: droplets or crystals of one type or another. We also could elaborate the discussion into consideration of solution or chemical reaction.

Droplets have their own emergent behaviour in several forms that differ from the emergent behaviour of individual molecules. The emergent behaviour types of the droplets also differ from each other as they grow in size and as their environment changes. At first surface tension and cohesion dominate gravity, momentum, volatility and various resonances. As the number of molecules increases, the other effects begin to dominate the surface tension and we pass through other effects that, among other things show both upward and downward causation, though still with little effect on the nature of the molecules, and still less on the nature of the atoms and their sub‑atomic particles. First we get larger drops, then masses of water that will flow through sieves and tubes, and form puddles, lakes, and oceans with waves and surf.

Eventually the sheer gravitational effect leads to downward causation through pressure. Given enough pressure, the very molecular structure begins to break down; at its centre, the behaviour of an Earth mass of water would be very different from its behaviour near the surface. In a Jupiter mass the behaviour would differ even more, and a solar mass of water would turn into a star, in which the deeper molecules will generally break down into states other than water molecules, and in fact other than hydrogen and oxygen atoms.

Again, although at any level of emergence, one can trace the limiting or defining effect of the ultimately atomic components, more complex systems require more complex components, even when those components have the same ultimate components. We could use the same cellulose to make a rope or a piece of paper, but if you asked for paper and I gave you rope, you would not be grateful.

Yet again, the same million tonnes of water in a few thousand cubic kilometres of air has drastically different effects on the local weather when it is present as vapour, from when it forms a raincloud. The same biochemical and physiological materials, when present as a gorilla, or a man, or a woman, or a different person, cannot in each social, political, physical, or legal situation, fulfil interchangeable roles. There are entity classes in which a woman or a gorilla might play equally appropriate component roles, but in most everyday entities, where the required component is "woman", "gorilla" will not do.

In short, at every level of the combination of the elements of the system, or the combination of emergent entities in a system, new classes of emergence are possible and occur commonly, whether they result in downward causation or not; new levels of emergence quite commonly run counter to previous levels by negative feedback, destroying the very emergent effects that created them. For example, a flame might consume the supply of fuel that led to its own ignition, or its products could smother the reactions that support the combustion.

Those items include examples of emergence by both upward and downward causation.

And yet there are certain classes of effect that do not appreciably emerge in most conditions that we can easily imagine. To be sure, in compression effects, electrons are squeezed closer together and may be accelerated, released, or confined, but their individual charge and mass are not affected, probably not even in a neutron star. The fact that there is some downward causation does not mean that it is all downward causation. Some of the emergent effects affect in one direction and some in the opposite direction; some no doubt in both. But in either case, denial of the possibility of predicting any particular effect is something that cannot in general be justified in principle by arbitrary assertion.

Now, given an adequate description of any entity or class of entities, it is possible in principle to predict certain aspects of its emergent behaviour. For example, we can predict that regular polygons, when fitted together according to certain rules, can form only certain classes of regular polyhedra. We also can predict of certain classes of irregular or partly irregular polyhedra that they cannot form in two or three dimensions. Such exercises are at first simple, but related exercises can become very tricky: one example is the tessellation of planes with irregular polygons such as pentagons or Penrose darts and kites. At the time I write this, the classification of the convex pentagons that tile the plane has only recently been claimed complete, and then only by way of a computer-assisted proof.
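The polyhedron case makes the point concretely. A few lines of Python (a sketch of my own, not any standard routine) enumerate which regular polygons can meet, three or more to a vertex, with a total angle of less than 360 degrees, the condition for folding flat faces into a closed solid; exactly five combinations pass:

    for n in range(3, 7):                    # triangle, square, pentagon, hexagon
        interior = (n - 2) * 180 / n         # interior angle of a regular n-gon
        for k in range(3, 7):                # faces meeting at each vertex
            if k * interior < 360:           # must leave an angular deficit
                print(f"{k} regular {n}-gons per vertex:"
                      f" possible, total angle {k * interior:.0f} degrees")

The five lines printed correspond to the tetrahedron, octahedron, icosahedron, cube, and dodecahedron; hexagons and everything flatter fail at the first hurdle.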

Generalised polyominoes too produce combinatorial problems of rapidly growing complexity, and so does the adoption of shapes by organic molecules, especially by very complex organic molecules.

However, except in their computational complexity, none of those presents any obstacle in principle. Though their effects are undebatably challenging to our currently standard computational tools, we can construct or imagine tools or toys that model such emergence. Accordingly, the problem is by no means incomputable or unpredictable in principle. Bouncing marbles can compute sphere packing to some degree of precision, and soap bubbles can produce shapes that are killingly difficult to model mathematically or algorithmically.

The fact that some of them are too complex for humans to compute, or that it would be too expensive, is a separate issue. Consider NP‑complete problems such as the travelling salesman problem: we generally can solve them by some algorithm or other, but as the problem grows larger, the resources required, especially the time needed, explode; to provide and prove a perfect solution by exhaustive search, for a tour of even a few dozen towns, would take more than the estimated age of the universe. In other words it is to a good first approximation impossible.

But it is simply mathematically possible.

We can improve our efficiency by improving our techniques, with devices such as simulated annealing or genetic algorithms; we can combine astronomic numbers of processors to work in teams to solve the problem in a fraction of the time; but given a surprisingly small problem, exhaustion still would need billions of years to prove the theoretically optimal answer.

But none of that changes the fact that all finite NP problems are computable in principle even if not in practice.
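The arithmetic is easily checked. A sketch in Python: the count of distinct round tours of n towns, (n-1)!/2, is standard; the checking rate of a billion tours per second is my own generous assumption:

    import math

    RATE = 1e9                     # assumed tours checked per second
    AGE_OF_UNIVERSE = 4.3e17       # in seconds, roughly

    for n in (10, 15, 20, 25, 30):
        tours = math.factorial(n - 1) // 2        # distinct round tours of n towns
        seconds = tours / RATE
        print(f"{n:2d} towns: {tours:.2e} tours,"
              f" {seconds / AGE_OF_UNIVERSE:.2e} ages of the universe")

By thirty towns the exhaustive count already demands thousands of ages of the universe; long before a few hundred towns, the numbers cease to mean anything at all.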

However, there remain other aspects to emergence — to at least some classes of emergence.

 

Generic and Specific Emergence

How hard is it to solve problems if you only care about
getting the right answer most of the time?
For everyday problems, answers that work most of the time
are often good enough.
We plan our commutes for typical traffic patterns, for instance,
 not for worst-case scenarios.
Anonymous

 

To clarify the practical significance of at least one aspect, it is necessary to distinguish between generic and specific events, effects, or entities.

Perhaps surprisingly, the difference between generic and specific effects is arbitrary: in a nondeterministic, underdetermined world such as ours, every apparently specific effect really is a whole class of indistinguishable outcomes. To a high degree of precision we know the shape of a pile of sand formed by grains trickling through a tiny hole, and the shape of the splash when a falling drop of milk strikes the surface of still water, and the trajectory of a steel ball bouncing on a concave steel surface, but to regard their apparent precision as perfect would be delusional: on a sufficiently microscopic scale each such event is effectively unique.

None the less, the effects are close enough to deterministic to satisfy most everyday needs. When the billiard ball goes into the pocket, we do not much concern ourselves with details of impacts or vibrations, with how it bounced on the cushion on the way in, or which way up or touching which particular fibres it came to rest.

Generic effects might include the gravitational, geometric and electromagnetic attributes of water molecules leading to their attracting each other and forming liquids with surface tension, and droplets and oceans, and crystals and so on.

But which droplet, ocean, or crystal in particular? In quite modest systems we have more options than any realistic computational power could predict or model, even ignoring that many such outcomes may be ephemeral, lasting only fractions of a second before settling into stable emergence.

Certain classes of emergent effect might not in principle be predictable from the source entities one at a time; in fact, some philosophers shear a lot of hogs in addressing that assertion, and those particular philosophers furthermore assert that an effect is emergent only if it could not have been predicted from the nature of its components. To be emergent, it must be "novel" in nature, they say.

I, of course, deny that any such stricture is logically necessary or even necessarily coherent. As I see it, if the impossibility of prediction of every form of emergence did necessarily follow, then to establish that impossibility would be no easier than establishing many another negative assertion.

I accept that a particular outcome might not be predictable in advance, especially in chaotic systems: one might not have predicted the emergence of life on the surface of a particular partly watery planet or, granting that one did foretell its emergence, one might fail to predict which life forms would emerge, or how long the emergence would take; but that follows from the scale of the number of forms that are reasonably possible, not from the principle of emergence. One also could not precisely predict the form of an accumulating pile of sand, but still be pretty confident of its plesiomorphic form.

The impossibility of predicting the emergence of something like life on a given young planet does not follow cogently from the general nature of planetary accretion. By way of analogy, the possibility of computer viruses was predicted before they emerged, and before most people outside the field could comprehend the concept, and in fact some programmers even denied the possibility for some time after the first viruses were “released into the wild”.

But the very fact that computer viruses were predicted, and that those actual predictions turned out to be a major factor leading to the very creation of the viruses, shows that they could in principle be predictable, even though the predictors could not have predicted the particular code or objectives of the various viruses that would emerge.

And that is enough for my purposes. Their predictability in principle, given their possibility, is not generally an attribute of the emergent events themselves, but of the predictors. Furthermore, the term “prediction” is ambiguous: it could refer to a specific event, such as an unstable atomic nucleus decaying in a particular way at a particular time, or a generic event, such as sand forming a mound as it lands in the lower bulb of an hourglass, or a liquid forming a flat surface or a meniscus, depending on the nature of the liquid and other factors.

The very concept of predicting emergence of types of behaviour is fuzzy, but what most matters here is how well we can predict the nature of events, not the details. We might be able to predict the resting position of a marble rolling into a hollow, but not all predictions are equally easy. If from the nature of isolated entities of particular units, such as molecules, one can predict the nature of their interaction in producing effects such as liquid, gaseous and solid behaviour, then the details, whether chaotic or contingent, matter less; in predicting splashing, we do not undertake to predict the trajectory of each droplet.

Given that one knows enough about the mechanisms involved, one might well predict that some items are not possible to predict. For example we might indeed know enough to predict some events or “effects” (generic events, classes of events, or “trends”) emergent from the reaction between atoms and molecules. We might indeed know enough to predict how putting them together could produce surface tension and gases, liquids, or solids. A pile of marbles does certain types of things that a single marble or a scattering of separate marbles will not do.

The same is true of a pile of bricks, but different piles of bricks can have more complex emergent effects than piles of marbles can. Looking at a pile of bricks could put one into a better position to imagine a cathedral, or propose its building, than looking at a pile of marbles could. And we can predict that the friction of gently tumbling a pile of sound bricks for long enough will first erode the bricks into prolate ellipsoids or spheroids plus dust, depending on the shapes and constitution of individual bricks, whereas continued tumbling could produce accurate spheres.

For practical purposes, no matter how predictable or unpredictable from first principles they might be, all such effects can be viewed as variously emergent, either specifically or generically. And what emerges can differ in indefinitely various and marvellous ways.   

Let us consider some classes of those emergent effects.

A scattering of Scrabble letters on tiles does not have the same effect as the same letters arranged into a legible statement. A classical example is Hooke’s anagram of his announcement of his discovery of his law of elasticity: “ceiiinosssttuv” for “ut tensio, sic vis”. We might equally well reassemble the letters as “sosiitctuseivn” or “cute visionists”, or in Laputan: “Isi etsti sco vun”, each with its own significance in a suitable context.

Whether to regard such outcomes of arrangements of the same components as emergent or not is largely a semantic question. I insist on calling them emergent because they would not be possible with fewer or different components. Furthermore, if the set of components is large enough, it is combinatorially infeasible to predict every possible outcome.

For illustration consider the following block of letters:

AAAAAAAAAAAAAAAAAAAAAAAAAAABBBBCCDDDDDDDD
DDDEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEFFFFFG
GGGGHHHHHHHHHHHHHHHHHHHHHHHHHHHIIIIIIIIIIIIIIIIII
IIIKKKKKLLLLLLLLLLLMMMMMMMMMMNNNNNNNNNNNO
OOOOOOOOOOOOOOOOOOOPRRRRRRRRRRRRRRRSSSSSS
SSSSSSTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT
TTTUUUUVVVVWWWWWW

Let’s ignore punctuation, formatting, and similar complications as needless distractions. They could be included without affecting the generality of the question, but can those letters be rearranged to make a sequence of words? Could they be English words? Could the words be arranged to make sense? Could the arrangement be anything memorable? Would it be an emergent effect if it turned out that it could?

Well, they certainly could; as it happens, those are the letters in an Authorised Version Old Testament passage from Habakkuk, for convenience omitting spaces and punctuation:

WHAT PROFITETH THE GRAVEN IMAGE THAT THE MAKER THEREOF HATH
GRAVEN IT; THE MOLTEN IMAGE, AND A TEACHER OF LIES, THAT THE
MAKER OF HIS WORK TRUSTETH THEREIN, TO MAKE DUMB IDOLS? WOE
UNTO HIM THAT SAITH TO THE WOOD, AWAKE; TO THE DUMB STONE,
ARISE, IT SHALL TEACH! BEHOLD, IT IS LAID OVER WITH GOLD AND
SILVER, AND THERE IS NO BREATH AT ALL IN THE MIDST OF IT.

Good luck to anyone who would seek alternative arrangements that make sense or convey message or passion. To argue that that passage of invective was not an emergent effect flies in the face of reason.

Conversely, it is easy to see that the same letters could not be rearranged to produce, say, the following; for one thing, it demands letters, such as Y, of which our stock has none:

Power tends to corrupt and absolute power corrupts absolutely. Great men are almost
always bad men, even when they exercise influence and not authority: still more
when you superadd the tendency or the certainty of corruption by authority.
There is no worse heresy than that the office sanctifies the holder of it.

However, the same ink and paper certainly could be arranged to say either one passage or the other, or any arbitrary message of an appropriate size.
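Such claims about letter stocks are mechanically checkable. A hedged sketch in Python, using collections.Counter; the long strings are elided here for space, but pasting in the full block and the full passages verifies each claim:

    from collections import Counter

    def letters(text):
        # count alphabetic characters only, ignoring case, spaces and punctuation
        return Counter(ch for ch in text.upper() if ch.isalpha())

    print(letters("ceiiinosssttuv") == letters("ut tensio, sic vis"))   # True

    stock    = letters("AAAAAAAAAAAAAAAAAAAAAAAAAAABBBB...")  # the block, in full
    habakkuk = letters("WHAT PROFITETH THE GRAVEN IMAGE...")  # the passage, in full
    acton    = letters("Power tends to corrupt...")           # the passage, in full

    print(stock == habakkuk)   # True once the full strings are pasted in
    print(acton - stock)       # the letters Acton needs that the stock lacks

The same ink, the same letters, the same counts; what differs, and what emerges, is the arrangement.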

Some philosophical schools sniff at the concept of emergent effects, rejecting the idea as too vague, but I unapologetically assert that the foregoing principles adequately justify the term and the concept. The very vagueness they refer to reflects the thinking of the critics as strongly as that of the emergentists.

The first thing that it is helpful to understand is what it is that emerges in or from an emergent event: what emerges is an entity, one that had not been there before, or that had had fewer components, and probably lower complexity.

 

Emergence, Scope, Scale, and Type

There is nothing that living things do that cannot be understood from the point of view
 that they are made of atoms acting according to the laws of physics.
                        Richard Feynman

Furthermore, the nature of what emerges may change rapidly with increasing scale.

The concept of scope emerges. To begin with, think again of "few". What does “few” mean: ten particles? Or ten to the ten? Ten to the ten may sound like a lot: ten thousand million, but in a cubic metre of space, ten to the ten particles such as hydrogen molecules is very small. A fairly good vacuum in fact. Keep pumping particles in, however, and all sorts of new entities emerge progressively.

Interestingly, the first aspects of gas‑like behaviour, as opposed to the behaviour of independent particles, begin to appear when there are something like a dozen particles in close proximity. By then the behaviour is not critically affected by the addition of an extra particle; we already are dealing with a "substance" rather than just a set of identifiable particles. This is different from when we changed successively to a two‑, three‑, or four‑body ensemble, and each new addition changed the rules. True, every addition does affect the potential effects, but with each addition the relative magnitude and effect of the change grows less dramatic.

Let us first consider everyday gases. In a vessel of modest size we might have, say, ten to the 25th gas molecules.
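The contrast with the earlier "fairly good vacuum" is easily put into numbers. A rough sketch using the ideal gas law, p = NkT/V, with room temperature and a cubic metre assumed for convenience:

    K_BOLTZMANN = 1.380649e-23     # joules per kelvin
    T = 300.0                      # kelvin, roughly room temperature
    V = 1.0                        # cubic metres

    for n_molecules in (1e10, 1e25):
        pressure = n_molecules * K_BOLTZMANN * T / V
        print(f"{n_molecules:.0e} molecules per cubic metre: {pressure:.1e} Pa")

Ten to the ten molecules give about 4e-11 pascal, emptier than almost any vacuum a laboratory can pull; ten to the 25th give tens of thousands of pascals, a substantial fraction of atmospheric pressure.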

And gases in modest‑sized containers imply pressures, and gases under pressure can support convection, currents, laminar flow — winds, if you like. And sufficiently intense currents imply turbulence. You cannot get winds or turbulence where there are not relatively many particles close together. Nor, similarly, can you get condensation that creates phase changes to liquids or to solids. Nor crystallisation.

Conversely, where there are enough particles, you can get gravitational attraction, and planets, and suns, and solar systems, and galaxies ...

All those are emergent effects that are very different from what we would expect from the same numbers of particles in a space in which the individual particles have no special effects on each other, or are on average far enough apart to behave effectively independently.

Furthermore, the nature of entities emergent from given sources may differ in various ways from that of the entities from which they were derived, and from each other. The parent entities might remain unchanged or not: for example, in emergently forming a tetrahedron, ten stacked balls remain obvious as balls. Dry sand poured from a leaky sack might form a pile with a normal bell‑curve profile, but it visibly still is sand. A house of bricks (ignoring various practicalities) still may consist of typically visible bricks, undeniable though their houseness may be. Scrabble tiles placed randomly form a mere surface but, placed deliberately, they can also form an arbitrary message. And that message can be copied in ways that have nothing in particular to do with Scrabble or tiles.

However, in their intrinsic nature some emergent effects have little in common with the parent entities. Carbon, hydrogen, nitrogen, and oxygen can be combined in various ways to form various compounds, most of them unremarkable, but in a particular form, suitably supplied with free energy, they can combine to form TNT. TNT is a substance from which one can form bricks that do not look special, but a structure formed from TNT, suitably ignited or detonated, can produce an event such as a fire or an explosion. Neither a fire nor an explosion is TNT; in fact, without proper information, one might not be able to tell whether the transient emergent event that comprises the explosion or fire did emerge from TNT or from any one of many other compounds or mixtures.

In contrast, from those same carbon, hydrogen, nitrogen, and oxygen atoms, one could produce any of the amino acids that the human genome encodes, except the two that also contain sulfur. None of them bears much resemblance to TNT, none being as toxic or as explosive for example, and in one form or another we need them, or certain of their precursors or compounds, in our food.

Different yet again, but still consisting of carbon, hydrogen, nitrogen, and oxygen atoms, are the various types of nylon polymers. None of them is a human food, nor explosive.

It is all in the attributes that emerge from combination of even identical elements.

There are even more abstract examples. Consider a massive accumulation of electric charge in the air, say a mass of electrons. Beyond certain limits the system breaks down and the charges escape through the air, causing a flash of light and a peal of thunder. Neither the light nor the sound resembles electrons in any particular way.

Note that, among the types of emergence, we get entities with limits of various sorts: boundaries in various dimensions. As a rule the individual atomic particles that we originally considered would be notionally or potentially immortal, but consider the things that happen when enough particles get together to form a gas cloud.

In a gas cloud one commonly might get vortices if the conditions are right. But a vortex is not generally immortal, and occurs only when enough (potentially immortal) molecules with enough of the right individual momentum jostle each other suitably. And a vortex lasts only as long as the right energy is applied in the right way to maintain it. This means that among the limits to the dimensions of some entities, one common limit is temporal, a limit in the time dimension, even if the material entities that compose the dynamic entity are immortal.

In short, entities in combination beget, not only new entities, but new classes of classes of entities. First one gets relationships between entities and later one gets relationships between relationships. Think of all the so‑called "abstract" entities one gets in maths and logic. Operations, values, variables, functions, relations, theorems — the list goes on.

Note that the concept of boundaries does not imply that a boundary is necessarily sharp or simple, nor that every example of a boundary is of the same type as every other boundary. For example, a boundary might be in location: go beyond that boundary and you are in another yard, another state, another country, in water instead of on shore. Go beyond a boundary in time, and you are old instead of (relatively) young. Go beyond a boundary in temperature and time, and you have burnt your toast or melted your ice cream or frozen your feet. Go beyond the limits of your neighbours’ toleration, and you are in a state of conflict.

One might employ fuzzy theory to decide where or how strongly to apply the idea of a boundary. All it comes down to is that in a given set of dimensions, as you pass further from one coordinate to another, you are more definitely on one side of a given boundary than on the other.
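A minimal sketch of such a graded boundary, assuming nothing more than a logistic membership function of my own choosing:

    import math

    def membership(x, boundary=0.0, sharpness=1.0):
        # degree, from 0 to 1, to which position x lies beyond the boundary
        return 1.0 / (1.0 + math.exp(-sharpness * (x - boundary)))

    for x in (-3, -1, 0, 1, 3):
        print(f"x = {x:+d}: {membership(x):.2f} beyond the boundary")

At the nominal boundary the membership is 0.5; pass further in either direction and you are ever more definitely on the one side or the other, which is all the fuzziness amounts to here.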

Now consider again the concept of splitting. It is of course related in some ways to the concept of boundaries, because every split involves some part ending up on one side of a boundary, and the rest on the other. But I am not in this part of the discussion emphasising that correspondence in most examples.

I suggested earlier that there are presumably fundamentally unsplittable entities (electrons, neutrinos, etc.). If we take a lump of lithium for example, and halve it, and cut one of those halves in two again, then after perhaps some 70 or 80 halvings, depending on the size of your lump, you are left with a lump of two lithium atoms. No conceptual problem.

Halve that, and you get two separate atoms. So far still so good, but what now?

Well, you can cut one of those atoms again, but this time you don’t get neat halves: on one side one, two or three electrons will go shooting off, and on the other side you would get a lithium ion with a net charge that depends on how many electrons you have removed.

But you have to ask what you mean by calling that splitting. Certainly you have done some separation, just as you had with the previous splittings, but each of the previous splittings gave you a lump of lithium on either side, each lump consisting of lithium atoms. Now, with this last splitting, you no longer have a lithium atom on either side. A lithium atom has six or seven (depending on the isotope) hadrons in its nucleus, plus three electrons in orbitals around that nucleus. Split off any of the electrons, and from some points of view you no longer have a lithium atom, though any freely passing electrons could quickly repair it by latching onto the nucleus.

What you have is an ion. And ions behave differently from atoms.

Some people would regard that as a quibble in most contexts, but rather than waste time on further quibbles, suppose you next split the nucleus. The nature of the nucleus (specifically, its number of protons) is what makes it lithium instead of any other kind of atom, much as having four corners distinguishes a square from a triangle or a pentagon.

Splitting a nucleus, such as that of lithium, is physically possible. For example, you can strike it with a sufficiently energetic proton or neutron. You wind up with two smaller atomic nuclei, say one deuterium and one helium. Yes, you have split that kind of atomic nucleus, and what you have got is two atomic nuclei.

But these two new nuclei emphatically are not lithium nuclei, let alone lithium atoms.

In this sense the lithium atoms were indivisible. Like a soap bubble. Try splitting a soap bubble, and unless you are very skilful, you generally wind up with no bubble at all, just some droplets or flakes.

Try another example. Suppose you take a nation like Britain. You split it into two populations of some 32 million or so. Then split one of those in two again, leaving 16 million.

You see where this is going.

After some 26 splits you are left with one man or woman. Splittable?

Perhaps, in a sense, but previously, every time you split a population you got two populations, each with about half as many British citizens as before you did the splitting. This time, if you continue to split into less than one citizen, all you get is no citizens plus some pieces of carcase.

In that sense a human is atomic. Splitting a human does not generally give you two humans. Even when a woman gives birth you do not wind up with two of what you started out with.

We could define an atomic entity as one that cannot be split in any sense (like a neutrino perhaps), or that changes its character when split at all, or when split in other than particular ways, depending on the sense.
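The halving counts above are easy to check. A back-of-envelope sketch in Python, the lump masses and the population being my own assumptions for illustration:

    import math

    AVOGADRO = 6.02214076e23
    MOLAR_MASS_LI = 6.94                     # grammes per mole

    for grammes in (7.0, 1000.0):
        atoms = grammes / MOLAR_MASS_LI * AVOGADRO
        print(f"{grammes:6.0f} g of lithium: about"
              f" {math.log2(atoms) - 1:.0f} halvings leave two atoms")

    print(f"a nation of 64 million: about {math.log2(64e6):.0f} halvings leave one person")

A seven-gramme lump, about a mole, needs some 78 halvings to come down to two atoms; a kilogramme needs about 85; and a nation of 64 million souls comes down to one person in about 26.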

So what then?

Tomic Entities, their Origins and Fates.

I believe in calling a spade a spade, but that is no reason
for calling everything a spade
Anonymous

So here we run into some really messy fields of concepts.

As I use the word, an entity is something one can refer to meaningfully.

As I see it, a tomic entity could be a mass of substance or of multiple parts, of such a nature that if one splits it, each part is another entity of the same type in the same context, such as the lump of lithium I mentioned before. The same with a lump of glass: break it and you get smaller lumps of glass. But if that glass had been a windowpane or a goblet? The glassy substance would be tomic: you could split it into smaller pieces of glass. But the goblet would not split into two goblets.

And by the time you get to a single molecule, you no longer have a glass; the glassy state depends on your having multiple molecules adhering to each other in suitable ways; it in no way demands any particular change to any particular molecule.

Again, consider a cloud above a mountain peak. It is effable: one can refer to it verbally: “Look at that cloud.”

“Which one?”

“The one over the rounded peak on the left, not the little one over the sharp peak.”

Such a cloud is something one can refer to without obvious difficulty, and the listener can identify it easily — for a while anyway.

Similarly: “Look at that Manhattan crowd coming down Seventh on the way to Second”

“What about the crowd coming up Second avenue on the way to Seventh?”

We can tell that both speakers agree that there are two crowds. The two crowds seem likely not to know about each other and are moving at right angles to each other. Each crowd might be shedding some members on the way, and gaining other members, probably without affecting the situation very drastically. In this they resemble the clouds passing over mountains. But what happens when the two crowds meet at Second and Seventh? There are several possibilities:

  • The crowds might merge and continue as one.
  • They might interpenetrate and continue on their independent paths. While they were together, moving largely independently, would they be one crowd or two, and which observers would decide one way or the other?
  • Assuming that they do interpenetrate and then separate to pass on their ways, it is possible that not all of the members of each crowd stay with their original companions; each crowd might lose some members and gain some. Now how many crowds do we recognise? And are any of those crowds the same as they were before they met?
  • The crowds, or their members, might notice that they were carrying rival banners, then fight and in the fighting disperse into many smaller crowds on largely independent paths. Some member or small group of one crowd might find himself or themselves isolated in the fight, and run away. Do we call each such isolated individual a crowd too?
  • They might on the other hand find that they were carrying the same banners, and all join into one larger crowd.

To save anyone the trouble of raising the topics of fuzzy logic and sorites, yes, I do understand them; let that go! Theirs are not the current points at issue.

Now, a local, isolated cloud over a peak commonly is created by a wind blowing up one side and cooling as it rises, till its water vapour content condenses into the component droplets of the cloud. That is what makes the cloud visible: a single droplet is not much of a cloud and in most ways does not behave like a typical cloud at all.

Coming out on the other side of the peak, the leading edge of the cloud generally drops down, and, as a result, it heats up, so that the droplets evaporate again, and the altitude at which they vanish is the downwind limit of the cloud. As a rule, the life of any droplet in such a small cloud with the wind blowing through is just a few seconds, and within a minute or so, not a single one of the original droplets still is in the cloud.

Is it still the same cloud?

That question is related to classic questions such as George Washington’s axe and the Ship of Theseus, but to pursue the theme here would be too much of a digression.

Such a cloud might grow, till two neighbouring clouds’ boundaries collided and merged, much as our Manhattan crowds merged.

Both those clouds and crowds definitely did not always exist. They might disperse, or evaporate or otherwise become unavailable to our observation.

And yet their behaviour patterns, which resemble each other so temptingly, involve some radically different principles as well as some essentially nearly identical ones.

One thing that such clouds and crowds have in common is that they have boundaries in time and boundaries in space: wait long enough and they pass; run far enough and fast enough, and you leave them behind.

But still, their boundaries are imprecise and the identities of their components commonly are imprecise. The droplets in the cloud might look sharply defined, but they are growing or shrinking, oscillating or attracting solutes all the time. No man’s regard ever has contemplated the same droplet twice, as Heraclitus might have said, but never did say. Droplets or people near the boundaries of clouds or crowds might be seen as members by some observers, but other observers might exclude them. A droplet or a member leaving a cloud or crowd could be seen as a splitting event, or as dispersal.

As for their origin and dispersal, in the clouds and crowds, those resemble not only each other, but the origin, merging, and extermination of species or nations. Such populations begin to collect or grow from particular times and places, they expand and move through time and space; seconds or minutes or years, or millions of years later, they tail off or die, gradually or catastrophically.

 

So What?

When we try to pick out anything by itself, we find
 it hitched to everything else in the universe.
John Muir

 So what all those types of entities have in common is:

  • they have mutual relationships between themselves and their generating circumstances, if any, and
  • they have mutual relationships between their various parts, if any, and
  • they have particular relationships to the rest of the universe, and
  • they have particular relationships to their observers, which might be counted as any part of the universe, not just conscious observers. The tree that falls in the forest has implications for the creepers and mushrooms, whether or not anyone heard the crash, or mistook the crash for thunder. The atom that decayed in the box tripped the hammer that broke the cyanide vial whether the cat was there or not. And the independent viewer looking through the glass back of the box would see what happened even if the front of the box prevented other viewers from seeing anything.

And relationships are fundamentally dependent on and comprise information. One even could define relationships as being information themselves, but I’ll not get into arguing about that.

Generalisation, Reductionism, Reductionistic fallacies

Everything should be made as simple as possible, but not simpler.
Albert Einstein, as paraphrased by Roger Sessions

In some circles "reductionism", with its twelve letters, is regarded as three times as bad as a four‑letter word: at least the traditional four‑letter words have lost much of their power and value lately, by being adopted into common use. Reductionism in contrast, still is poorly understood by most people, and accordingly remains a term of abuse among naïve critics. For one thing the word gets applied in various senses to a variety of not‑very‑closely related concepts, and most people who use the term fall abjectly into the trap of their own confusion. Some of the concepts are mutually independent, some mutually inconsistent, and some are blatantly fallacious.

None the less, without reductionism of sorts, the very idea of science could hardly be meaningful; in fact any claim to understanding anything concerning the “real world” would be futile. Take anything macroscopic and consider it in its relationship to the universe, and there is no practical limit to what there is to know about it; you can't know everything about anything. This implies that trying to investigate or even speak of it without simplification is impossible.

"Simplification"??? In science???

Yes, certainly. In this sense simplification means leaving out whatever you can leave out without thereby talking nonsense or otherwise producing nonsense.

In science we have the concept of parsimony:

A principle that has been variously attributed is that the supreme goal of the construction of a theory is to make its elements as simple and as few as possible, but no simpler, nor fewer.

Now, various schools of thought that have names such as "holism", or incline to use words such as "holistic", correctly point out in their various ways, that leaving out the wrong parts invariably lets in error and nonsense. Many such schools would like to regard the omission or neglect of any part of anything as a defeat or betrayal. Such holism is tempting, because, demonstrably, not only is the whole more than the sum of its parts, but in one sense or another, every whole is part of a greater whole, all the way up to the entire observable universe.

Observations of that sort commonly lead to a type of defeatism, in which the conclusion is that we cannot ever understand anything at all, because to understand anything you need to understand everything.

In this conclusion defeatists conclude too much and understand too little.

For one thing, as I shall demonstrate, not even the whole observable or inferable universe, what we might call "nature", either "understands" or even "knows", or has information about, everything in itself; and yet things all keep rolling along, so it does not follow that perfect knowledge or understanding or consistency, conscious or otherwise, is necessary to realism or reality.

Fortunately our universe seems to be so constituted that functional, "rational" simplification is in fact possible and indeed is a practical option, certainly for most purposes. One consequence of this option is that we do not have to consider everything about anything before it becomes possible for us to make certain classes of meaningful statements about any particular thing. Our assertions might be uncertain, probabilistic, wrong, or even meaningless, but may be useful all the same. As Babbage put it, though possibly too optimistically: "Errors using inadequate data are much less than those using no data at all."

This is one part of the vital intellectual tool that one may call “reduction” or "reductionism". Commonly, meaningful reductionistic statements intrinsically have much in common with generalisations. Generalisation is another evil word, according to the unthinking horde, and yet it is a component essential to all human understanding.

In essence what generalisation means is that by leaving out certain classes of facts about some things, one can draw conclusions that apply to all of those things together, instead of having to make separate statements about each one every time. If I say “humans are animals”, or “our lunch vegetables are in the pot”, it does not rob the statements of meaning that I omit to mention that some particular human is called Pat, that another has a sore toe, and that those things are not true of all humans. Similarly, I need not say that some vegetables in the pot are green and some red, or that there exist vegetables that are not in that pot, or that the pot contains meat as well and is cooking a stew that is not yet done. Those are not points that need be discussed in reply to a question such as “What happened to the vegetables?”

That kind of thing, in which we enable ourselves to draw conclusions and convey information without trying to cover matters that are not relevant to the current concern, is generalisation. Failure to generalise competently is likely to lead to conveying unwanted information, and in information theory, unwanted information interferes with the processing of required information, or “signal”. We call unwanted information “noise”, and the more noise, the harder to deal with the signal. An essential concept in information theory is the “signal to noise ratio”.

Unfortunately it also is possible, indeed easy, to fall into the trap of forgetting, or failing to realise, what one has left out. If I say “Humans are nothing but animals” and “Pigs are nothing but animals”, then my wording implies (erroneously) that humans are pigs and pigs are humans, because there is nothing to distinguish them. In fact, pigs have many attributes, or combinations of attributes, that no animal other than pigs could share; any animal that could not share those accordingly could not be a pig. Humans too have many attributes that no other animal shares, not even pigs; otherwise either we could not be humans, or those “others” must be humans too.

That is why this use of the expression “nothing but” constitutes a fallacy so important that it has its own name: the reductionistic fallacy. That term is most beloved of lazy thinkers, most of whom have no idea of what reductionism is, whether valid or otherwise. They thereby make themselves victims of their own reductionistic fallacy.

Such fallacies are dismayingly common among people with no better excuse than smugness, but there is another, more insidious, form. When most people speak of reductionism they think of reducing the universe of discourse to the simple items while ignoring the combinatorial complexities of the entire system — the whole being more than the sum of its parts. That certainly sounds bad.

However, in retreating from reductionism in that sense, they back into basic reductionistic unsoundness of the opposite kind. Concentration on the big picture seems easy and obvious when the big picture is the easy one to see (it often is not!). It tempts those who have not done their reductionistic homework into smugly ignoring the details of the system.

How small‑minded of me!  Fancy my niggling about the details when the glory of the full system stands out, inviting, commanding, one's attention! 

All the same, by leaving out those details one is being far more erroneously reductionistic than by concentrating more obsessively on the details than on the big picture. At least by concentrating on the details one is in a better position to understand the things one is working on. Conversely, if one reductionistically omits the basics, one guarantees missing important foundations to the system; foundations that one is at best ill-equipped to understand.   

One becomes like the innocents who abuse the filling station attendant when there is no more fuel to be had: why does the malicious oaf not fill up my vehicle?  Why doesn’t he stop nattering on about technical details about his tanks and stuff? All I am asking him to do is to stick the spigot into the hole, grip the handle and fill ‘er up!  Why should I care about his tanks and things?

There we have true largeness of spirit: no reductionistic concentration on details such as how it comes about that commonly one does get fuel out of the pump! 

Comparing mythologies, the rival errors of rival errers, tends to be unproductive. And yet, while I hold no brief for either, if I must take sides in this battle of the reductionists, I am inclined to prefer those reductionists who at least know enough about little enough to know what they are talking about.

I already have mentioned the concepts of “top‑down” and “bottom‑up”, either of which is important when used correctly, and disastrous when abused. And both can be seen in terms of reduction versus holism.

Reduction and generalisation are important at all levels, but they are most striking when they lead to changes of type, at changes of scale. Commonly it is at such stages that they also are most inclined to tempt us into fallacies, especially into various forms of oversimplification. Sometimes the originator of such a fallacy was not personally misled by it, only intending it as convenient imagery, but the first publication of the concept led to public confusion and illusion. Take for example Rutherford’s announcement that atomic matter is “mostly empty space”; public reactions ranged from mystical pseudo‑philosophical pronouncements, to commonsense dismissal of scientists as not being sane.

Only a minority were equipped to make sense of it in context. In fairness, the concept is a lot more difficult than it seems in everyday English; the very definition of “empty space” is commonly taken for granted, and yet it might be fundamentally meaningless. Certainly what seems to a neutrino like empty space, might well look like a block of granite or a planet, either to an electron or to your spaceship.

Such fine distinctions work out very differently for, say, schoolroom Newtonian physics, or general relativity, or quantum theory. In each case there is a great deal of possibly unspoken reductionism that works out differently in each body of theory. Each in turn is internally robust, but their generalisations tend to unravel as one approaches problem areas.

This is not a criticism from the point of view of application of the bodies of theory in their own fields, but it does mean that the lay public should be very careful in drawing conclusions when things get tricky. It is similarly difficult for the physicists when they approach the boundaries of their theories and experiments. In such regions of theory and research, emergent effects play skittles with theories. What the research workers can be confident of, is that we still have a great need for more breakthroughs.

 

Existence, Assuming it Exists.

My adversary's argument
is not alone malevolent
but ignorant to boot.
He hasn't even got the sense
to state his so-called evidence
in terms I can refute.

Piet Hein, “The Untenable Argument”

I have encountered intelligent, educated people who deny existence, though I never have encountered one who was able to explain that reasoning satisfactorily, let alone cogently. The best I have heard is that “existing” is not an attribute of an entity and that “to exist” is not an activity.

That sounds well, even trenchant, but I am not satisfied that the argument is valid or, if valid, that it is useful, let alone correct — even grammatically. Whether a concept is dealt with in a given language at all, and, if it is expressible, whether it is expressed as a particular kind of expression, noun, verb, or interjection, need not reflect on its logical content or validity.

It seems to me that such objections to verbs such as "exist" arise largely from failure to recognise the respective natures of stative and dynamic verbs. Most of us, most of the time, are comfortable with dynamic verbs because they plainly are "doing" words. So we do not complain about "dig" as a verb, because we can see "digging" as a present continuous dynamic activity, and we use the participle form for a "digging dog" because we can see things happening.

However, languages and perceptions deal with stative verbs as well: "This stone weighs too much for me to lift", "That canal joins the oceans", and "I exist to confound the deniers" are examples of a type of verb that is widely recognised among extant languages, and recognised for good reasons.

And many verbs may be stative in some contexts, and dynamic in others. "I live here." "You call that living?"

And to ignore or forbid such attributes of languages seems to me evocative of aspects of E-Prime, which amounts to English shorn of all forms of the verb "to be". I regard the idea as an arbitrary whim, a waste of time and effort, and unrewarding. Interested readers may explore the concept, starting with the article in Wikipedia. For my part I have more entertaining concepts in which to invest my time.

What is more rewarding in this discussion is how our language treats existence words, so till further notice I happily exploit the semantic convenience of the existing convention — or the established convention, if you object to my begging the principle. And that principle is what I shall exploit anyway. English verbs are simple compared to those of some other languages but, to put it crudely, they can refer to events and states as well as actions.

And in the languages that I have been able to examine, at least, each had an infinitive verb meaning "to exist". Such languages enable the speaker to refer to events, states, and actions as if they were entities, much as in English.

And in my opinion all such things comfortably accommodate the concept of the existence of existence, including "exist" as a verb. For my part, those who disagree and deny that they and their existence exist may do so in good health, because I can comfortably ignore whatever does not exist.

Elsewhere.

Still, to express exactly what might amount to existence, or define existence, remains a tricky conceptual problem; there are about half a dozen major lines of thought on the matter, plus umpteen minor variants, each with cohorts of partisans squabbling about the details. This is not the place to debate those details, and as I see it, the discussions that I have read are mutually inconsistent to the point of incoherence. They are full of question begging about such items as: which properties or attributes are intrinsic or extrinsic, what constitutes a property, or what constitutes an individual or an object, or what their various opponents said, or should have said in terms anyone could refute.

As a rule such discussions do not even consider the inconsistency of asserting whether one can step into the same river twice, while at the same time they assume that the person stepping into that notionally second river, is the same as the person who stepped into the first, un‑stepped‑into river.

As I already have pointed out, entities have boundaries in time as well as space, and not necessarily sharply defined boundaries. An entity can be an evolving state, such as a cloud, a river, a species, or a human, rather than a statically defined object, such as...

Well, such as what? To present any inarguable example of a statically defined empirical object is not easy; I am not even sure whether it is at all meaningful. Not even rivers, diamonds, rocks, stars, or George Washington's axe are forever. And a lot of the debate seems to ignore such concepts as the limits to identity.

So forgive me for unregenerately ignoring such topics, not because there is nothing substantial to them in themselves, but because there is nothing substantial to the views on which so many writers have expended spittle and ink.

For my part, seeing that this is my essay, I accept the existence of existence and naïvely assign to its reality a criterion of sorts. Anyone dissatisfied with that is welcome to my sympathy, but I do not expect his satisfaction to approach the top of my attention stack in the near future.

Some aspects of the concept I already have hinted at. It seems to me that whatever exists, even for a while, must be an entity, commonly a complex of entities at that. Even things that don’t exist may have concepts that exist as entities: the concept of ten‑tonne green swans, or of the number two to the power of its own reciprocal, or of odd perfect numbers smaller than 1000000. Even if they never existed before, such concepts do exist now that I have conceived and recorded them; existing not as themselves (there definitely is no such swan, nor any such number) but as the information it takes to represent their specification.

You might wonder why I bother with anything so trivial, but it leads to questions such as whether numbers exist at all, and if so, in what sense they could exist? I am inclined to argue that in some senses they do not, but that is a complex matter in its own right, so let it pass for now.

More relevant to us in this discussion, is the question of what it means for a material entity such as a cloud, a crowd, a species, an ocean, a nation, a molecule, an electron, a tune, or a diamond, to exist in the naïve sense.

I propose that it does exist, at least if its presence or influence affects the course of events in some ways differently from how its absence would affect the course of events. Let us consider an example.

Charitably assuming that I exist, suppose that I walk into my bedroom in the dark. It is my intention to retrieve the book I left on my bedside table. I step on something that rolls underfoot and I crash down painfully on my coccyx. I get up and step forward and bump my nose equally painfully. At that point my subjective sensations fully persuade me of my own existence. A real “cogito ergo sum” moment. Or perhaps “sentio ergo sum”.

Nothing persuades one that one exists quite like feeling!

I also deduce that something round on the floor had existed where it had rolled underfoot; never mind cogito ergo sum: Si impediat, existit! If it impedes, it exists.

A dropped pencil perhaps? Falling had disoriented me so that on rising I then had bumped my nose against an also existing wall, cupboard, or the like.

Further experiments could locate the light switch and gain me more information, but the principle already should be clear: if something exists in the sense of being “real”, it has consequences that differ in some ways from the consequences of some alternative thing (including the possibility of the absence of any thing) being real in its stead.

Consequences? What might that mean? In everyday terms it generally would be something observable in principle, some event or combination of events, some object or objects. In submicroscopic terms, where quantum considerations come into play, that trivially might affect the probability of one event rather than another, instead of its definite occurrence. Many quantum effects depend on probabilities rather than specific consequences. Examples include the passing or reflection of a photon by a semi‑reflective mirror or polariser.

“Existence” defined in such terms might be seen as causal significance (not deterministic significance, because I reject determinism); causal significance here would mean that the existence of any entity changes some probability somewhere.

This might seem very academic, but it is the basis of many thoroughly material effects. (In fact, at the quantum level, conceptually all effects.) Consider a practical example: in a nuclear reactor energy is produced by the splitting of fissionable atomic nuclei (usually of particular uranium or plutonium isotopes). In splitting they give off some neutrons, and there is a probability that some of those neutrons will hit another nucleus at the right speed either to split it to release more energy in a chain reaction, or stick to that nucleus and change its mass and neutron number.

By adjusting the probability of these events, we can adjust the heat that the reactor gives off, to match the power that the users desire. Among the atoms in the fuel of say, an HTR (High Temperature Reactor) there will be some uranium‑238, which in itself is too inert to have much effect on the reaction; the probability of a neutron hitting a U‑238 atom hard enough to be absorbed is too low to count for much, so the neutrons go bouncing around till they decay or can cause another fuel nucleus to split.

But suppose the reactor begins to overheat; then the hot U‑238 atoms themselves bounce around much harder, which causes some of the collisions with the neutrons to become much harder, more energetic. This increases the probability that the neutron gets absorbed, so that it cannot cause any immediate splitting. In turn this slows down the chain reaction automatically — a highly material and important consequence of the adjustment of probabilities.

All by managing the probability of particular quantum reactions: the cause and effect of the existence of particular entities can be subtle and, in my view, often beautiful.
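
For concreteness, here is a toy sketch in Python of that negative feedback, in the spirit of the description above. Every coefficient in it is invented purely for illustration; it is emphatically not reactor physics, only a demonstration that adjusting a probability can regulate a thoroughly material output.

    # Toy negative-feedback loop: hotter fuel -> more parasitic U-238
    # absorption -> smaller net neutron multiplication -> less heat.
    # All numbers are hypothetical, chosen only to make the loop visible.
    def simulate(steps=200, power=1.0, temp=300.0):
        for _ in range(steps):
            absorption = 0.001 * (temp - 300.0)    # invented "Doppler" term
            power *= 1.0 + (0.05 - absorption)     # net multiplication per step
            temp += power - 0.02 * (temp - 300.0)  # heating minus coolant removal
        return power, temp

    # Power and temperature head for a steady level instead of running away.
    print(simulate())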

If a given entity has no existence, then whatever does exist in its stead must have consequences that differ in some way from its absence. In that dark room, if the wall or something equally solid had not existed there I would not have bumped my nose, even if my nose existed. If there had been no similarly round object on the floor, I probably would not have fallen in the first place. If something flat had been there, say a booklet, a thing that, unlike a pencil, would not roll, I still would have been affected, but in other ways. Even if that same existing pencil had been there, in a slightly different place or orientation, things might have happened differently. The consequences of such items can be interrogated for their information and its implications for what exists, in what form and place.

To perform such interrogation is to make use of consequences in measurement. I feel around in the dark for the wall say, and locating it gives me the measurement I need if I am to know where not to put my nose.

Similarly, if the U‑238 was too cool to absorb the neutrons with sufficient probability to interfere with the reactions, then the chain reaction would go faster and hotter, but if sufficient heat of reaction existed in the reactor, the probability of absorption would reduce the production of unwanted heat: negative feedback.

Subtler examples with more obscure or extreme outcomes are easy to multiply. For example, consider a meteoroid in interstellar space. Suppose it has a diameter of some 20 kilometres, a mass of some ten trillion tonnes, and a trajectory that, all other things being equal, would strike the planet Earth some 66 million years ago. Suppose its impact caused the K/T event that ended the reign of the dinosaurs, eventually leading to the presidency of Trump, that in turn established the incorporation of America into the Chinese empire.

But how did that meteoroid’s journey begin? Perhaps in a planetary collision some 6000000000 years ago, many light years away, perhaps in another galaxy. How small an influence could in our imagination have prevented its striking Earth? 

Imagine a single uranium atom in free space, remnant of a neutron star collision. U‑238 has an enormously long spontaneous fission half‑life, but this particular atom happens to fission spontaneously at a suitable time, producing, as it happens, a highly accelerated barium atom. These things do happen in uranium, and like all similar events in quantum interactions, such fission is not deterministic as far as we can tell, nor predictable, not even in principle.

A few years later that speeding barium atom strikes a microscopic chip of silica, which in turn deflects a milligram grain of iron, which a few thousand years later drifts into the path of our meteoroid, minutely affecting its trajectory and its spin. Had that uranium fission been a microsecond earlier or later, the meteoroid would still have struck Earth on schedule; of course all sorts of things would have happened to it on its four‑billion‑year journey, all of them in the first couple of billion years helping to direct it on its way to Earth; so in fact, as a result, Trump gets elected anyway and China celebrates.

But suppose that little grain of iron affected the meteoroid’s momentum just enough to change its trajectory by some fraction of one trillionth part. Suppose that event microscopically changes the meteoroid's passage in slingshotting past some massive body a few million years later, and some five billion years after, and after some large number of intervening events, our meteoroid skips off the atmosphere of Earth and passes fairly harmlessly on through space. The dinosaurs survive and the mammals remain subordinate along with other animals such as frogs.

Sixty million years after that, a fat, bald, orange lizard with a comb‑over yellow crest gets voted into the presidency of a leading world power, and starts a nuclear war that blasts all higher forms of life off the planet, and democracy triumphs again.

If such a thing could happen, and we magically could know about it, we would say that that uranium atom, and its choice of instant and direction in which to fission, had existed in terms of our definition. That did not happen, so we exist instead of that lizard.

That was a sizeable consequence for a single splitting atom.

Or even for two atoms.

Because suppose that the first uranium atom had in fact split a second later and had no measurable effect on the meteoroid’s path; the meteoroid might then have passed close to another quantum event that it otherwise would have missed, say from a vagrant thorium atom, and that one might have caused it to proceed on schedule.

We should then be unable to allocate a unique minimal cause to either the collision or the non-collision with the planet: an example of underdetermination.

You will recognise this as just another example of the webs and chains of cause that we dealt with earlier. Precisely how we would become aware of such an event or train of events is a separate question, and non‑essential, because we do not demand that we in particular must be aware of every event or probability or existence in nature, whether in our history or pre-history. One thing we can be sure of, however, is that untold examples, equally trivial in themselves and with equally drastic effects, have happened throughout our past, and continue to happen now.

Anyway, you might like to wonder about how significant such thoughts might be to the question of what ‘exist’ might mean, and what the questions of implication in physical algebra might suggest for the concept of existence or nonexistence.

Investigations need not tell or even determine all the details of the nature of the entity’s existence, but often they can suggest the existence of something somewhere. And in principle the more closely that something can be interrogated, the more information about it can be determined, progressively excluding more and more alternatives.

Indestructible Information

The boundaries that Napoleon drew have been effaced;
the kingdoms that he set up have disappeared.
But all the armies and statecraft of Europe
cannot unsay what you have said.

Ambrose Bierce

It is a commonplace that information cannot be destroyed. That could be interpreted in various ways, but one way is in the assumption that every consequence or outcome of something existing or happening will change something in the universe. That change in turn, if it may be said to exist in its own right, will change something else, and so on, commonly with changes of increasing complexity as time goes by.

In general, no operation of the physical algebra of our universe will ever decrease entropy except trivially, temporarily, or locally; never globally.

We have seen that the choice of the instant that a single radioactive atom happens to decay, can affect the history of life on a planet, or even the continued survival of the planet itself; could we imagine an event so small as unable to affect contemporary human history?

I say it is difficult. Very difficult indeed. Some people say that no single person can affect history, and that great events require great movements to direct or change them. Tolstoy was one of those with such views. His view I deny and dismiss categorically, except for trivially local and short‑term events. If Szilard had stepped under a bus at the moment of conceiving fission chain reactions, Hitler or Stalin might have got the Bomb instead of the US. If so, the history of the second half of the 20th century and all our currently foreseeable future history certainly would have been drastically different.

I argue that a world without Bach or Newton or Hitler or Einstein would have been different too, both qualitatively and quantitatively. After a few generations, if we had some clairvoyant means of comparing the possible alternatives, we could hardly recognise the human situation on the planet. Now what would it have taken to prevent their birth?

Very, very little.

Suppose that on the evening that Hitler was conceived, his father lit his pipe as usual, but that a match failed to strike smoothly and it took an extra strike to light his pipe. If he noticed the incident at all, Herr Hitler very likely would have forgotten it in less than a minute, and anyway, it certainly would be too trivial to affect human history, right?

Maybe, but it could have made him shift in his chair, perhaps unnoticeably.

An hour or two later in bed, not the same sperm that would lead to Adolf, but another sperm ten microns to one side, was the one among tens of millions of rivals, that got through to fertilise the ovum. Its DNA certainly would have differed from the DNA in the sperm that won the race in our world line. The Adolf Hitler known and loved by all would never have been born; the sibling actually born certainly would have been different; might even have turned out to be a daughter. That child in turn would have done all sorts of things more significant than one extra strike of one match. The doings of those different children in their turn would have caused the birth of hundreds or thousands of children other than the children that actually were born during Hitler's lifetime. Some better no doubt, some worse. Within a century or so after the double match strike, not a single human on the planet might be one who, in the event, actually was born, and if there were one, that one necessarily would have done things different from what instead got done. Whether the resulting world would have been any happier or more miserable than ours, we never can know.

But such potential tiny differences always could have had big, big consequences.

Had Franz Joseph Haydn been unborn in similar circumstances, he would never have written the music that later was appropriated for the German national anthem. For Tolstoy’s siblings to have written “War and Peace” would have been unlikely, and depriving the world of “War and Peace” would have changed the world more dramatically, if not more drastically, than the failure to strike a match at the first stroke.

Never mind monstrous macroscopic objects such as matches; any of these people could have been born different as the result of a single phosphene caused by a single quantum event in any one parent’s eye, say from a cosmic ray produced from an event tens of millions of light years away, at the height of the age of the dinosaurs.

And as things stand, we live on a planet that has been deprived of millions of people greater than any we have known, greater than Newton, Maxwell, Bach, Alexander, Caesar — anyone you care to mention, all because of events far smaller than that double match strike, and at the same time we have been afflicted with the appalling stupidity and cruelty that might have arisen instead of greatness.

Of course, we can be sure that we have missed more fools and deadly tyrants than heroes, but we have no shortage of those in our current world anyway, so that is not much consolation.

Be all that as it may, you will recognise these as examples of the webs or chains of cause and outcome that we dealt with earlier in this essay.

And anyway again, we live in a chaotic universe; it fundamentally is not possible in general to predict the detailed long‑term effect of the smallest event, as long as the relevant light cones overlap sufficiently.

Speaking empirically, we live in a world of physical implication that produces information by the development or succession of events. Whether “production” of information can imply creation or destruction of information, is another matter. One common view is that it cannot, but let’s not discuss that here and now.

The immediate point remains: whatever physically exists (including information) intrinsically has consequences, and what does not exist has none, or at the least, its non‑existence, that is to say the presence of what does exist in its stead, has consequences; different consequences. It is a vague point, poorly defined, but a good place to start, in trying to define existence. If we accept it, it makes nonsense of Schroedinger’s cat dilemma.

And anyway, I like cats.

There are important implications to this line of thought. Not only does humanity have no G‑E‑V, but no G‑E‑V seems possible even in principle: certainly no G‑E‑V is possible to us and none to nature either, not as we understand any related concepts. We only can observe anything through the physical consequences of physical events, in other words, the implications of those events.

As far as we can tell, the cause and course of any event are controlled only by what information affects its circumstances, and because information is more limited than the theoretically possible ranges of outcomes of events, the courses and outcomes of events are not fully determined, but are partly random: they are “underdetermined”, as the term goes. In fact they are never at any stage fully “determined”. Even after the event, the outcome is not fully played out until it has made its contribution to all the events that it possibly could affect directly in turn. And those events in turn have no clear limit to their own ultimate consequences.

If it is indeed true that information cannot be destroyed, then that is true only in a special sense. A quantitative sense if you like, suggesting that the universe never grows simpler. And I am suspicious even of that. But qualitative information certainly can be lost indefinitely; even if, in wiping out a poem composed on a slate, one does not decrease the physical information in that system, the information now embodied in the medium certainly no longer conveys the same message.

It might be easier to envisage the principle if you think instead of a board on which the poem had been composed in Scrabble letters. If we scramble the letters, the new arrangement, whether intelligible or not, would still contain the same amount of information. However, the original message would be gone forever. To retain the original message would require that a separate copy had been taken before the scrambling.
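
A trivial sketch in Python makes the point tangible; the message and the test are of course my own invented example.

    # Shuffling the tiles preserves the letter inventory (the crude
    # "amount" of information) while destroying the message itself.
    import random
    from collections import Counter

    message = list("SO LONG AND THANKS FOR ALL THE FISH")
    scrambled = message[:]
    random.shuffle(scrambled)

    print(Counter(message) == Counter(scrambled))  # True: the same tiles throughout
    print("".join(scrambled))                      # but the poem is gone

Without a copy taken before the shuffle, no inspection of the board will recover the poem, although nothing physical has been lost.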

We see therefore that it is possible to change the universe by changing information in such a way that, though our universe has not become any simpler in consequence, and does not contain any less information, it never again will yield up particular information that once had existed.

Imagine for example a super‑Einstein sitting on a wide, deserted sandy beach at low tide. He suddenly stumbles on a completely new line of thought accidentally initiated by a trick of light. Hastily he writes out the derivation of a TOE, a Theory of Everything. Not having paper with him, he scratches the TOE into the smooth surface of the beach. It only takes him several square metres of beach sand.

Hastily he turns and runs for his rural hotel, intending to photograph the text before it is lost. Dashing across the road all unmindful, he gets squashed flat by a bus. No one knows of his work, which eventually gets wiped out by the rising tide.

Now, none of that, however sad, suggests either loss or much creation of physical information in the sense I already have discussed. But what is certain is that, however little difference it made to the behaviour of the tide or the flattened skull of the dead genius, that TOE is gone and remains unsuspected for, say, the next few thousand years. And when it is rediscovered, that does NOT happen by an ebbing tide recreating the information of the work that our squashed genius once had written in the sand.

In fact, if by some miracle some tide did leave some legible script somewhere in the sand, it would stand hardly any chance of being meaningful or in the right language, and if it did, it would be likelier to spell the lyrics of the song “Love Letters in the Sand”, than the several square metres of the TOE.

And if the tide did reproduce the TOE, the chances of a passer‑by noticing it and recognising its message, let alone recording it, would be negligible.

Of course, I am being a little unfair, given that the nature of ebb‑tide waves is not suited to producing legible script at all, let alone anything useful. But the same principle would have applied if the formulae had been produced by someone scrambling pebbles on a board, or by chickens scratching in the sand for seeds.

Actually, though what I just wrote remains valid, the meaningful information expressed in the script is not in principle lost to the universe immediately. An aircraft passing overhead could have photographed it a few microseconds after the photons left the beach, and in principle, light that passed the aircraft could still carry the same message upwards into space for as far as optics could in principle resolve the message. Those travelling photons are a special case of information storage. The light in flight might be regarded as a sort of delay-line storage mechanism.

But the cameraman would have to hurry. No reasonable photographic equipment could carry the message recognisably even a few light seconds into the sky. The signal would get lost in quantum noise, probably in less than a light minute (the sun being some eight light minutes away from Earth).

It is easy to multiply such examples. Think of a message or a work of art written in soluble ink on a slab of sugar. In simple physical terms you lose no information if you dissolve that sugar in hot water, but even if you were to wait till the sun cooled, that message or art work never would reappear from that solution, whether or not the sugar crystallised out, collecting the pigment as it did so.

And if it did reappear, and its significance were to be recognised, no one would be able to tell that it had existed once before, perhaps trillions of years before.

Furthermore, if such a message did crystallise readably at any point in the future, there would be no way to tell whether it was the same as some message that had been written there before or not. The chances would be hugely against its even vaguely resembling any particular message written before or at all.

The information written in such volatile media patently existed as one or more entities as long as it was not erased in such a manner: in suitable circumstances it might build or destroy whole civilisations. Once erased it could do nothing in particular, except in the negative sense of permitting something to happen other than what its presence once could have caused. Ironically, it might not even be the original or intended message, or it might not have had any intention at all. For example, another genius catching a brief glimpse of part of such a message might misread it and create a totally unconnected, equally influential, message, but with an effect substantially different or even opposite. An artist might be inspired by the vanishing glimpse of the picture, and create something equally great, but quite different.

Nothing of any such kind would be a good bet, but it still would be possible in principle.

The original entity might not have been any intended message at all: an artist might break open a lump of malachite and be confronted for just one second with a pattern of shapes and shades that shakes him so passionately that he drops it and it shatters. He tries to recreate it mentally, so as to copy it, but fails drastically. Instead he does his best to recapture his vision, but even if what he then produces is his finest work ever, he never knows how true it is as a copy. Certainly, whatever he produces, insofar as it resembles the original, was not information originally formulated by any subjective mind.

But that information in each case undeniably existed in the sense of causing things to happen that would not otherwise have happened, and by the same test, stopped existing once it was disrupted to the point that no such entity was any longer distinguishable.

There again the test for existence in any particular sense is what the entity can cause: what its effect would be on the web of causes and events. The ink that got dissolved certainly still exists, if only as its molecules or even atoms, just as the sand that got scratched into the representation of the TOE still existed after the tide came in. But the semantic message content? Suppose the ink settled down again and miraculously produced a legible message again. Physically this is hypothetically possible, though not to be expected in several lifetimes of our observable universe; however, the chances of its having content with the same effect as the original are incomprehensibly small.

Nor is the principle limited to writing or graphic art. Entities created from other entities could take the form of Rupert Brooke’s “keen unpassioned beauty of a great machine”. Having seen a poem or seen a construction such as a machine, one might be able to recreate it. Many inventions of biological brains are repeatedly presented at patent offices around the world, some resembling others, and some unexpectedly different. To be sure, plagiarism is nothing unusual, but the same is true of convergent originality. Someone has referred to the effect as: “the congruence of great minds”.

Much the same is common in nature, in which biological evolution repeatedly produces mechanisms and structures that resemble each other positively eerily. The effect is so marked that it has its own technical term: convergent evolution.

Convergent evolution is marvellous, repeatedly, beautifully marvellous, but it is not miraculous: given circumstances offer similar or analogous evolutionary opportunities to similar or analogous structures in different organisms, so there is nothing mysterious, however breathtaking, about the outcome. The reason is what we might call causal structure: the structure of the causal circumstances “causes” the form of the outcome.

One might as well be astounded at the repetitively marvellous formation of repetitively marvellous glass shards when we break glass, or the complex routes that streams trace in eroding their beds down a mountainside.  

Think again of a species. Kill every reproductive specimen, and it is extinct.

But suppose a genetic engineer had captured its full genome and sequenced it. Suppose he had stored the information in a computer's storage medium. If so, is the species still extinct? If he is sufficiently advanced or the species sufficiently simple, he could recreate the living species from his recorded data. Well, suppose that he records the data in a hologram in a block of solid silica that could last for millions of years. Unfortunately he ships the block overseas, and it falls into the deep sea and is buried in a kilometre of ooze.

Now is the species extinct? Potentially, but we never can know whether some future palaeontologist, whether from Earth’s future or from an alien planet, would stumble on the block and re‑create that species.

 

Existence of Entities in Formal systems.

Euclid taught me that without assumptions there is no proof.
Therefore, in any argument, examine the assumptions.
Eric Temple Bell

This is a topic of such virulent disagreement that I have no hope of imposing any definitive views on anyone. The best I can do is ask any serious reader to ignore his own views and accept mine for the context of this essay. I do try to present them cogently, but entire philosophical systems have been founded on conflicting views of the matter.

Let us begin with Plato’s cave and the idea of the existence of ideals. I have little patience with them, and refer interested readers to Wikipedia articles on the topics.

Nor am I happy with Russell’s definition of numbers in the form of “three being the set of all sets that have three elements”. As it happens I far prefer the likes of Conway’s Surreal Numbers, but never mind that. I have no quarrel with the construction of number concepts, or for that matter any other formal concepts by way of formal axioms and theorems derived from them, but when it comes to defining the concept of the “existence” of such concepts, we need to examine the concept of existence for latent ambiguity and the risks of inappropriate contexts.

So, when we say “there exists an integer X such that 2<X<8” we are happy to accept that integer as being entailed by the axioms of integer arithmetic plus any necessary intermediate theorems. We also might be happy with “there exists no integer X such that 2>X>8”.

Slightly trickier would be the question of whether this has any meaning in our physical world (the purely formal mathematician might not care, but that does not imply that their indifference disqualifies the question from anyone else’s consideration).

And in fact, there are some material implications. Every axiom, theorem, proposition, sentence, symbol, operation, or derivation in such a system can be demonstrated to exist only in the form of information. With no information it is not meaningful to speak of any entity at all, because without information no entity could be distinguished from any other entity, in particular by its effect on the causal web of events.

In a statement describing integer X: 2<X<4 for example, the information was sufficient to determine X=3 uniquely. If it had stated that 2<X<7, then at least two bits more of information would be required to distinguish X within the set {3, 4, 5, 6}.
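
The counting is easy to check mechanically. Here is a small Python illustration of my own; the range of ten is an arbitrary universe of candidate integers.

    # Extra bits needed to pin down X: log2 of the number of candidates
    # that the statement leaves open.
    import math

    candidates_a = [x for x in range(10) if 2 < x < 4]  # [3]: already unique
    candidates_b = [x for x in range(10) if 2 < x < 7]  # [3, 4, 5, 6]

    print(math.log2(len(candidates_a)))  # 0.0 extra bits
    print(math.log2(len(candidates_b)))  # 2.0 extra bits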

Information is physical and refers to physical things, and the significance of any particular information resides in the distinction between alternative physical states, realities, or anything similar.

And no formal concept, axiom, or theorem can be instanced, communicated, derived, stored, or operated on without some physical manifestation, whether of mass or energy, even if only in the form of photons in space or sound waves in matter, ink on paper, toner on a drum, or states in a brain.

So I assert that in that sense to begin with, formal disciplines are subject to physical constraints. No formal relationship or object can exist in the absence of information. In fact, even formal errors such as “X=(X+1)” in standard arithmetic require information for their manifestation or existence.

I do not state this as a formal axiom, please note, but I invite counter‑examples. It leaves us with thorny questions of the distinctions between truth, error, and existence. A truth value, whether formal or applied, may be TRUE or FALSE in a binary universe of values, but in larger ranges of values it also could be MEANINGLESS or UNDETERMINED. Consider as examples:

X=X,
1=0,
=+=, and
X=Y.

The facile reply, that it is meaningless to suggest that something can be true without existing, cannot be trusted, since it patently follows that if a true statement or fact can exist, then in a similar sense its negation, or its various possible misrepresentations, can (must?) exist as well. In fact one could make a persuasive case for the assertion that vastly more expressible statements, or structures of signs, are untrue or meaningless than are true. In particular, it also is true that the number of possible statements, true or false, unique, distinct, or synonymous, is finite. If that were not true, then one could argue that true and false statements were equally numerous.

And not only in politics, courts and businesses.

A common reaction is that this is nonsense because every theorem in every formal axiomatic structure is entailed by other theorems or axioms, and therefore is true (otherwise it is not a theorem), but there are at least two difficulties. In context the less interesting difficulty is the principle that some true statements are Gödel‑undecidable, as established by Gödel’s first incompleteness theorem; it might be interesting to speculate on what the smallest intelligible Gödel‑undecidable statement in simple numerical algebra might be, but let that pass for now.

A more interesting difficulty might be to ask how in the sense I am discussing here, any Gödel‑accessible theorem might be proved, preferably by the shortest possible route of formal derivation. Assume that derivation occurs by achieving each new step by algebraic operations on the results of one or more earlier steps, in other words, on axioms or theorems.

If you know of a better way, please present proofs or examples of its validity.

Now, the result of each valid step, each operation, in a proof is necessarily a statement in some relevant notation within a relevant medium and convention. It might be an operation such as addition, negation, inspection, comparison, or enumeration, or it could be an assertion such as that a given identified value differs from some other value. Any of those implies relationships, and accordingly that the statement has information content and requires that information to be supplied; one cannot provide random noise as proof or derivation (except perhaps as the logical equivalent of “which is absurd”, but I am uncertain even of that possibility). Furthermore, the application of each operation is a special case of information processing, which intrinsically involves energy and entropy.
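
That energy cost is not merely rhetorical. Landauer's bound, which I take here as the canonical quantification of the point (my illustration, not part of the derivation above), sets a floor of kT·ln 2 on the energy dissipated in erasing a single bit.

    # Minimum energy to erase one bit at temperature T (Landauer's bound).
    import math

    k_B = 1.380649e-23            # Boltzmann constant, joules per kelvin
    T = 300.0                     # roughly room temperature, kelvin
    print(k_B * T * math.log(2))  # about 2.9e-21 joules per erased bit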

The implication is that implication is a physical process or relationship. Without physics there can be no statement, no derivation, no storage of information. And arguably, no existence in the sense that existence implies, depends on, and imposes consequences.

Still, there is at least one more hiding place for the existence of formal reality: physical relationships. As long as physical things can happen in threes or minus thirty threes, or (roughly) in pi’s or tau’s or e’s, such numbers might be said to “exist” in various senses. That is certainly reasonable, because each such relationship involves masses or energy levels or their states and coordinates, and those in turn demand energy and changes in entropy to change.

As I see it, such things are about as real as anything can get, and I am not inclined to debate the niceties.

Well then, whatever we can write down, or derive from axioms, or observe by counting or measurement, or can be taken to be embodied in some physical entity or situation, can be taken as existing, as being real in some sense, if you like. One could for example demonstrate the reality of the number 1729 either as a row of 1728 bolts plus a pencil, or a cube of 10*10*10 blocks of wood plus nine rhombi of 9*9 pennies.

But what about numbers one cannot represent in some such manner?

Let us consider an academic example. The first part of the decimal representation of pi is 3.1415926535897932. Let us express that by saying that the numbers 14159 and 35897 occur “in the decimal expansion of pi”. Now, those two happen to be primes, each expressed as a prime number of significant decimal digits, and we can say that they are the first two five‑digit primes that occur in pi.

This is not of intrinsic mathematical interest, please note, just an illustration to clarify our terminology. There is nothing special about this; we expect both numbers to appear an indefinite number of times: for example, the number 14159 occurs again after the digit in position 6954. The prime 35897 also recurs, some 209390 decimal digits after its first occurrence.
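
Readers who care to check the terminology can do so in a few lines of Python; this little search is mine, purely illustrative, and assumes the sympy library is available.

    # Find the first five-digit primes among consecutive digit windows of pi.
    from sympy import isprime, pi, N

    digits = str(N(pi, 30)).replace(".", "")  # "31415926535897932384..."
    found = []
    for i in range(len(digits) - 4):
        window = int(digits[i:i + 5])
        if window >= 10000 and isprime(window):  # keep genuine five-digit numbers
            found.append(window)

    print(found[:2])  # [14159, 35897]

Locating the recurrences hundreds of thousands of digits later takes nothing more than a larger precision setting and some patience.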

Well then. To the best of our available evidence, the sequence of the decimal digits of pi is of maximal entropy, which implies that the digits are truly random in every sense other than their occurrence in pi, so that every possible sequence of digits of any given length will occur somewhere. I do not assert this, but will assume it as a matter of convenience. It is just an example, right?

Now imagine taking, say, the first googol‑digit prime in pi (if you prefer to look for the first googolplex‑digit prime, suit yourself; it will not affect the argument). Add to that prime the following single digit of pi. For example, if we did this with 14159, we should add 2, that being the next digit in pi, giving 14161. Or if we did it with 35897 we should add 9, giving 35906.

But we are considering not 5‑digit, but googol‑digit primes. Call this first googol‑digit sum your starter. Then continue looking for the next googol examples of such googol‑digit primes. Each new prime you find, you raise to the power of the current value of the starter, then add the next digit of pi, and that becomes your new starter.

Call the final number you get p.

Now, of all the insane, pointless numbers I could have chosen, p just about has to take the cake.

That is exactly why I have chosen it.

It does however have several points of interest — not mathematical interest perhaps, and as far as I can tell, nothing even of arithmetic interest, but of conceptual interest. Let us consider some of them.

Pointlessness has nothing to do with the “existence” or meaning of a number. Nothing about the calculation I have described is in principle impossible; it works as well if, in the description, you replace the word “googol” with the word “one”, though it does become impracticable if you replace “googol” with “two”.
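
In that spirit, here is a toy rendering of the construction in Python, with the googols replaced by parameters small enough to run. The function name and defaults are my own, and resuming the search immediately after each prime is one reasonable reading of the recipe.

    # Scaled-down recipe for p: find a digit_len-digit prime in pi, add the
    # next digit of pi to get a "starter"; then, for each of `count` further
    # such primes, raise the prime to the power of the current starter and
    # add the next digit of pi.
    from sympy import isprime, pi, N

    def build_p(digit_len=1, count=1, precision=100):
        digits = str(N(pi, precision)).replace(".", "")

        def next_prime_from(start):
            for i in range(start, len(digits) - digit_len + 1):
                cand = int(digits[i:i + digit_len])
                if cand >= 10 ** (digit_len - 1) and isprime(cand):
                    return cand, i + digit_len  # the prime, and where it ends
            raise ValueError("ran out of digits; raise precision")

        prime, pos = next_prime_from(0)
        starter = prime + int(digits[pos])  # add the digit of pi that follows
        pos += 1
        for _ in range(count):
            prime, pos = next_prime_from(pos)
            starter = prime ** starter + int(digits[pos])
            pos += 1
        return starter

    print(build_p())  # 634 with the toy defaults: 3+1 = 4, then 5**4 + 9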

And yet, it is not possible to calculate the full‑sized p at all; in fact, it is not even possible to find and write any googol‑digit prime in pi whatsoever, and never will be, for the adequate reason that, as far as we can tell, there are fewer than a googol atoms in the observable universe, so we could never find enough ink to represent it, even if we used just one atom for each digit in the number. Nor could such a number ever exist in any relationship in our universe at all, let alone meaningfully. As for calculating p itself ...

In other words, with relatively few trivial exceptions, everything we do with or about p is magic! The only clear exceptions are mathematical facts that are independent of the exact value of the integer, such as p^0 = 1, p*0 = 0, p−1 < p and so on. And those are not facts that distinguish p from most other integers, so they have little to do with its value or meaning or nature or existence.

Firstly, there are all sorts of things we do not know about p, and never will know; in our observable universe “nature” itself does not “know” them: nothing in nature depends on the precise value of p, nor can anything in nature determine or represent it.

Let us think of some of those things. Everything we know about p is in the discipline of formal mathematics, because the number itself is not physically real.

To begin with, we do not know, and never can know, the value of p, nor of any particular one of its digits: not even the first or the last digit, unless p is represented in binary notation, in which case the first digit is trivially 1. We do not even know how many digits p has, or whether p is odd or even. It is extremely unlikely to be prime, but any firm statement about its primality is necessarily a conjecture based on the expected frequency of primes in the neighbourhood of p, for which we do not have any precise figures anyway. We know at most one of its divisors, namely 1. The other divisor we can assert is p itself, but then we don’t know the value of p either, so that is hardly interesting.

We can guess a lot about p, such as that its digits are about as random as those of pi, but such things are no more than conjectures. Still, such conjectures may well give one pause.

Formally we can deduce some things validly, from the formal nature of number theory. We know that the process of generating p is deterministic, so that, given our algorithm, p is a unique, definite, finite integer: every integer either is p or is not p, and every integer either is a divisor of p or is not. We can be sure that p > p−1, and that if we change or remove or insert or append even a single digit of p, then we get a different number, not p.

We also know that, with negligible exceptions, p is smaller than all other finite integers.

Next, consider the ratio between the next larger prime and p: call that ratio "r"; r is a simple rational number, with all the attributes of rational numbers: for example we know that r has a repeating decimal expansion. And yet, it is not physically possible to distinguish r from an irrational, or even a transcendental, number; there simply is not room in our observable universe to accommodate the first repetition.

This is an example of working with purely finite numbers, and of the limitations of finite systems working with finite numbers. In many ways I find them more interesting than infinite numbers.

As I see it, there is no way that p really exists; it is quite possible that no one has ever discussed this exact number before, and possibly no one ever will again. No relationship within our physical universe is described by the exact value of p, and if it could be, we could not write it down to express the fact. Even in formal mathematics we cannot distinguish between p and an indefinite set of other impracticably large finite numbers. We can describe the algorithm that (though only by magic) formally would generate p, and we can speak of numbers relative to the notional value of p, such as p+1, but really, I suspect that we know less about trivially small little p than we know about aleph null, the cardinal number of the integers.

For example, we know quite a bit about what we get if we divide aleph null by a finite integer or multiply it by any integer, or by itself. Or if we raise 2 (or p) to the power aleph null.
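
For the record, the standard results alluded to are that aleph null, multiplied by any positive finite integer or by itself, or divided by any positive finite integer, is still aleph null, whereas 2 (or p) raised to the power aleph null is strictly greater: it is the cardinality of the continuum.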

But aleph null itself, and in fact any other infinity, cannot exist in our observable universe except as the implication of a particular class of formal axiomatic structures.

In other words, magic.   

And in my opinion to claim that even such a small number as p exists, on the basis of formal concepts, is a meaningless claim. Not to mention much larger integers. For example, instead of raising each new prime to the power of the starter, we could raise it to the power of the starter, starter times. Interestingly however, the formal existence of the class of impracticably large numbers is as real as the formal existence of say, the finite integers in general.

That was the good news. On a similar basis, most of the integers smaller than p (all but those negligible exceptions) do not exist either: with even fewer exceptions, they too are too large to fit into our observable universe.

Formally it is likely that one could in principle calculate numbers that are smaller than a googol, but that happen never to represent any real event or relationship either: not because they could not be calculated or written, but because there just never happened to be any relationship in the universe that they describe. The starters one would get by replacing "googol" in the generation algorithm with, say, 60 or 70, might be examples: such numbers are easy enough to write down, but the odds are heavily against their ever describing anything real.

To claim that such numbers exist because we can conceive them formally is not cogent. You might as well claim that unicorns or non‑Goldbach even numbers or odd perfect numbers exist because we can draw pictures of them or include them in our stories, or claim that snowflakes of frozen water exist on the sun because we can imagine them.

If we call that existence, then it is time to re‑examine our semantics.

So, as a conjectural lemma, I propose that the fact that a concept can arise out of a formal system, even if it is a convenient fiction in practice, need not imply that it is literally true in practice. One might speak of “Platonic meta‑existence”, where the idea of such entities' existence, as opposed to their actual existence, demonstrably exists at least in our minds, and consequently has real, physical effects in our empirical world. But that does not imply that their independent existence amounts to more than the existence of unicorns.

Consider: in principle a zoologist could contemplate the nature of horses, and of horns, and describe, and possibly depict, a viable, true‑breeding species that has most of the physical attributes of fictional unicorns. Even if the head and neck and horn were less graceful than most of the pictures, no one would look at the result and call it anything but a unicorn, nor confuse it with an Asian rhinoceros; but we could hardly claim, on that basis, that such a species exists.

Consider again: imagine somewhere in space, say between our galaxy and Andromeda, somewhere near their common centre of mass, a diamond of high quality, in free fall, with roughly the mass of the planet Earth. It could not be solid, because that would create internal pressures that would destroy the diamond crystal lattice near the centre, but it could be say, a hollow sphere, a bubble with a diameter many times that of Earth, and with a wall thickness of say, one kilometre, and with negligible rotational velocity or similar sources of stress. Except for the difficulty of imagining any plausible mode of its creation, nothing about such a diamond demands any violation of any laws of physics; we could calculate all sorts of things about it in great detail, and there is no shortage of the carbon necessary for its construction: if we were to create such an object, the material budget would hardly dent the carbon content of a single carbon-rich dwarf star.

And yet, we are confident that no such physical object exists. We can justify our opinion in various ways, and on a basis of common sense we can reject any claim that it exists outside Plato's cave; such a claim strikes me as unsupported mysticism, unfalsifiable at best and arguably meaningless.

I propose that whatever else a number or any other abstract mathematical object is, it is information, and not in any imaginary abstract warehouse, but only where it materialises in some conceptual or material relationship.

Let us consider some purely formal examples. We already have considered p, which is a relatively small finite number, but what about infinite strings of digits? Those could include the expansions of non‑terminating fractions, say 1/7, whose decimal expansion repeats the block 142857 indefinitely. That certainly cannot be written out fully: it does not merely have p digits, which could never fit into our universe; for most formal purposes it has aleph null digits, and such a string of digits never could fit anywhere.

However, the information content of the decimal expansion of 1/7 actually is remarkably low: we can deduce a formally exact value from simple arithmetic, we can tell the value of any digit in the notional expansion, and we can convert from the decimal radix to another. For example, in base 49 the value of 1/7 is 0.7 precisely.

Which we easily can write as an exact, obvious value.
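
The arithmetic is a one‑liner: 1/7 = 7/49 = 7*(49^-1), so a single digit worth 7 in the first place after the point, written 0.7 in base 49, is exact.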

Well, consider some different numbers in decimal notation, such as 0.1010010001 ...

Here we have another number that formally has a unique definite value, and accordingly if you change any of its digits you change its value, making it either less or greater. We furthermore can describe it exactly and succinctly, so its information content is very small. It also is known that the number is transcendental, and, unusually among transcendental numbers, we can easily predict the value of any of its digits. However, whether its exact value has any convenient closed form, or any special attributes of interest of the kind that e or pi have, I do not know.
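
A minimal sketch of that predictability, assuming the visible pattern continues (a 1 wherever the index of the decimal place is a triangular number 1, 3, 6, 10, ...):

    import math

    def digit(k):
        """Digit of 0.1010010001... at the k-th (1-indexed) decimal place."""
        s = math.isqrt(8 * k + 1)  # k is triangular iff 8k + 1 is a perfect square
        return 1 if s * s == 8 * k + 1 else 0

    print("".join(str(digit(k)) for k in range(1, 21)))  # 10100100010000100000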

And we certainly cannot improve matters by changing the base or anything of that kind.

Which seems curious for a number that we can define so easily, and find the value of each digit so easily.

And yet, we not only cannot write it as an exact value, but could not fit its exact value into the observable universe. Not as far as I know, not yet anyway.

Still, compare that value with say e (2.7182818 ...), or π (3.14159265 ...). Formally these two numbers have precise values with plenty of well‑understood contexts, but written in any base not related to their own value, they not only will not fit precisely into any finite number of digits, but we cannot simply tell the value of a particular digit without calculating it.

In spite of their being definable in formal mathematics, we can fairly say that physically they do not exist.

“But,” you might argue, “I can draw a circle with a compass, and draw its diameter with a straight edge, and thereby immediately produce two lines with a precise mutual ratio of π.”

Oh no you can’t!

You would be doing very well to get four digits of precision, with no chance whatsoever of achieving forty digits, let alone conserving them if you could. And neither four nor forty digits would be significant in comparison to the full decimal expansion, which certainly could not fit into the observable universe, neither as a number in positional notation, nor as any other representation of the same amount of information.

I claim that in terms of existence, such numbers no more exist than our spherical diamond bubble exists.

All of which amounts to much ado about not much?

Possibly, but watch this space.

 

Existence of Entities in Material Systems

The mathematical theory of structure is the answer of modern physics
to a question which has profoundly vexed philosophers.
But, if I never know directly events in the external world,
but only their alleged effects on my brain, and if I never know my brain
except in terms of its alleged effects on my brain,
I can only reiterate in bewilderment my original questions:
"What sort of thing is it that I know?” and ‘‘Where is it?
What sort of thing is it that I know? The answer is structure.
To be quite precise, it is structure of the kind defined and
investigated in the mathematical theory of groups.
It is right that the importance and difficulty of the question should be emphasised.
But I think that many prominent philosophers, under the impression
that they have set the physicists an insoluble conundrum,
make it an excuse to turn their backs on the external world of physics
and welter in a barren realism which is a negation
of all that physical science has accomplished in unravelling
the complexity of sensory experience.
The mathematical physicist, however, welcomes the question
as one falling especially within his province,
in which his specialised knowledge may be of service
to the general advancement of philosophy.

Arthur Eddington

So much, I said, for the non‑existence of numbers or mathematical values of random and indefinite representation. What about numbers of a more tractable form, such as integers or simple fractions? It takes very little information to represent say 2, or ½ to indefinite precision, yes? Say 2.0 and 0.50. Right?

Well no, not really. Not if that precision really counts.

Consider: π begins 3.14159265358979323846264338327950 .... Usually such precision would be insane in any material system, and even in most formal contexts, but formally we know that if we changed that first 0 by adding 1 to it, we would no longer have π, but a different number. Not just partly different, but formally speaking, a totally different number, as truly different as if one had changed every known digit in the number. And the same would apply equally to adding 1 to the millionth 0 in the decimal expansion of π.

And in that respect of formal definition of numbers, how does 2.0 or 0.50 differ from π?

They do not differ at all.

If we change their 32nd digit from a 0 to a 1, then both of them change as inevitably as π did. To define them absolutely, both should in theory be represented as expanding into an endless string of 0s. There also is the option of representing them as 1.999 ... and 0.4999 ..., but that makes no difference: a 1 added at the same position in any representation changes their values by the same amount.
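
A tiny illustration in Python's exact decimal arithmetic (the precision setting is an arbitrary choice of mine):

    from decimal import Decimal, getcontext

    getcontext().prec = 40
    two = Decimal(2)
    tweaked = two + Decimal(10) ** -32  # a 1 in the 32nd decimal place

    print(tweaked)         # 2.000...001, with the 1 in the 32nd place
    print(tweaked == two)  # False: formally a different number entirely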

This is different from the case of physical objects; our diamond bubble would remain practically indistinguishable from perfection even with imprecision of the order of tonnes rather than femtogrammes.

And yet, we go ahead and use the formal numbers as if they were absolutely correct, and formal maths works in theory and applied maths works in the face of our use of numbers that do not exist; what does that tell us?

It tells us all sorts of things, including the fact that the amount of material we need in our aircraft wings, and how much fuel we need in the tanks of our vehicles if we wish to have a successful journey, are based on rough data and rough calculations, not indefinite precision.

Suppose someone said that we needed the wing to be able to take a loading of 159.265358979 tonnes: such precision would imply a load specified to roughly one billionth of a tonne. It would mean that another milligram one way or the other could make all the difference between success and failure, or else a waste of material that would be unacceptable even though it happened to be too small for anyone to feel in his hand.

Which is of course nonsensical.

In practice we do not, and cannot, and would not want to, make anything exactly big enough and strong enough and no more. We rarely make it even as little as 50% stronger than we expect the demands to require, and engineers often make things several hundred percent stronger than is formally necessary. Oliver Wendell Holmes was only joking when he wrote of his “wonderful one‑hoss shay, that was built in such a logical way, it ran a hundred years to a day” and then collapsed into a pile of dust when its period of duty had expired.

Similarly, when we measure something, it is seldom that we measure to an effective precision of more than four decimal places, and I believe that the record is about 14 decimal places. One part in about one hundred million million.

Pathetic.

But suppose we could get it down to beyond the Planck scale, say to fifty places; wouldn’t that be exact?

Not compared to a million places, and certainly not to a formally infinite number of places.

Our universe decidedly does not work that way.

Everything is made of particles in some sense, and measurement beyond the finest‑grained limits that atomic (“non‑splittable”) particles permit just will not happen.

Now, by the time that anyone with any idea of quantum mechanics (QM) has read as far as this, he no doubt will be seething at such nonsense, because QM simply does not work that way either.

Yes, yes, relax. I know. But I have a deeper object in view, so I largely ignore QM in this essay. For the most part the “atoms” I consider are fictional: rigidish, roundish particles.

But there is one item of quantum theory that is too good to ignore. We cannot specify any real value to infinite precision, can we?

Oh so? Then what about 0?

Sorry, no luck. 0=0.000 .... Change any digit and you have a different number.

Right? Then how can we have empty space, with exactly 0 mass‑energy at any point?

A very logical question, and very correct too: and as it happens, we cannot.

And we do not.

 Every Planck volume in space, as nearly as we can get to characterising it, keeps oscillating about the presence and absence of a particle or charge or mass‑energy effect, and the oscillation has all sorts of physical implications — real physical consequences, no unicorns required.

For example, from the most perfect vacuum that we could get, or that we could find in space, we get vacuum noise, "vacuum fluctuation" or "quantum fluctuation", that can be physically measured electronically, and that has various established effects. There is nothing magic or imaginary about that.

For another example, we get black hole radiation (read Stephen Hawking’s “A Brief History of Time” if you want more detail).

Another predicted consequence of vacuum fluctuations concerns highly charged nuclei. Strip an atom such as uranium of all its electrons, which would give you a U+92 ion, and the local space‑time concentration of positive charge around that nucleus is intense. Quantum electrodynamics predicts that around a nucleus with a charge well beyond uranium's (calculations suggest a critical charge somewhere past 170) the field would be intense enough to strip electrons from adjacent vacuum fluctuations, emitting the matching positrons that necessarily maintain the overall charge balance. The positrons would soon combine with ambient electrons and vanish in the form of gamma rays. Meanwhile, as the orbitals filled up, the ionic charge would drop below the critical value and the snatching of vacuum electrons would stop; a modestly charged ion, say U+6, is entirely stable in this respect.

Some people even suggest that the Big Bang that resulted in our observable universe, started out as a vacuum fluctuation, but I cannot comment on that and it is a bit off topic, so I’ll pass it by.

Whether you regard such things as frightening or beautiful or both, is up to you.

And whether it helps you in wondering about why there is something instead of nothing, is also up to you.

And so is wondering about what "nothing" might be.

 

Euclid’s and Others’ Excesses

The point is there ain't no point.
Cormac McCarthy

This will be a frustrating section for everybody: anyone knowing basic mathematics will be grinding his teeth at obviosities, and anyone else will be wondering what it is all about. For anyone who would like to know more, read Wikipedia’s article: Interval (mathematics). It is a good one and will tell you far more than I will be dealing with.

Now let’s get back to Euclid.

Euclid was one of the pioneers of formal mathematics, though I doubt he realised it at the time. There is a tendency to sneer at him nowadays, but I count him as a heroic genius of classical days. He posited all sorts of things that he called axioms, propositions and what not, but as I see it, in modern terms they all can be classed most reasonably as axioms and theorems. Exactly which axioms one chooses for given purposes depends on the formal disciplines one is working in (and applied disciplines too, where appropriate).

Now, much of the Euclidean style of thought is very basic to much of our current everyday science and technology. It is simple, convenient, and for practical purposes, generally close enough for jazz. Much as a map for navigation of any small region of a planet can cheerfully be based on the implicit assumption that the Earth is flat, so we can assume that a given position on the map is a point and a given path is a line. However, for some fields of study, not only are such views inadequate, but even where they are adequate, they are not even metaphors for reality, but just rough pictures, placeholders if you like, or analogies.

For example if I am illustrating the theorem of Pythagoras and draw freehand lines with chalk on a board, or in sand on a beach, or in pencil on paper, then the "points" I mark are not points, and the lines, apart from not being straight, are not lines at all. A point in physical reality is not at all a point in any sense that Euclid used. It is not even an infinitesimal, as we encounter in calculus. Nothing in everyday physics can possibly be anything of the kind.

Consider: a formally true, genuine point according to Euclid, a Euclidean point, has a measure of zero in all dimensions except time (such as the point is, it lasts for some time, so, by that very fact, any true point has a world line, or, depending on the frames of reference of notional observers, has indefinitely many world lines or world volumes).

As far as I know, Euclid never thought in such terms as time being a dimension, nor of the world line of a point, but, also as far as I know, he never contradicted them either. Still, the Euclidean point unambiguously and explicitly has zero length, zero breadth, and zero height. Zero measure in every way but time. Not just small, not tiny, not microscopic, not negative, fuzzy, or hollow either, but absolutely, literally, and precisely zero. No more, no less. No digit in its expansion to any base can be anything but 0, either in practice or theory.

The definition of a Euclidean point is not without ambiguity. In most of mathematics, including, as far as I know, Euclidean geometry, there is no definition of a point or a line. Some works do use a rather hand‑waving definition that I have relied on for as long as I can remember: a point is a position. To put that differently, possibly more precisely, though still informally: as an entity a Euclidean point cannot comprise anything more than coordinates in whatever happens to be the relevant number of dimensions. And only one infinitely precise coordinate in each dimension.

If this conflicts with any particular usage, then that usage is an arbitrary deviation and we are no longer thinking in terms of what Euclid’s ideas were.

And after all, Euclid got there before we did.

I am not sure where that leaves us with fractals, but you can't have everything. If you could, then, as Steven Wright said about having "everything": "Where would you put it?"

Similarly, a line is defined as having precisely zero measure except in length (and of course time), and a plane has zero thickness.

Now, all that sounds pretty simple, but some people tend to miss some of the implications.

First, think of a line, and assume for convenience that it is straight.

Euclid assumes that it is possible to select an arbitrary point on that line, say by putting the point of your compass on it. The way that you can recognise that point is by its coordinate on that line.

So select your point, and label the coordinate of that point 0. (After all, why not 0? It's a nice number!)

Then select a different point on the same line (meaning that it has a coordinate different from 0, on that same line).  Call that newly selected point 1.

We call the line segment or interval from point 0 to point 1, a closed interval, meaning that it includes the two end points (0 and 1 in this case) and also includes all and only the points in between 0 and 1. In particular it includes no points outside those end points.

Furthermore, every coordinate along the line matches exactly one single, unique point. No point lacks a coordinate, or needs more than one coordinate to define it, for each of the number of dimensions of the space occupied by the points in question. For example, a line being 1‑dimensional, only one coordinate is required per point. In a plane (2‑dimensional) we would need two coordinates to define or label any point.

Now imagine that we delete just points 0 and 1, and no other points, from our chosen line segment. This leaves an open interval: it is called “open” because it has no first and no last point to cap either end.

This applies in any field of study in which a point in a continuum is a concept. In some disciplines that do not deal with continua, such as versions of lattice theory or of finite sets of discrete elements, in which isolated points are defined, it is not relevant, but I am dealing with continua only, for now anyway.

But, one feels, that is nonsense anyway, because having removed those two points, we have neither added nor removed any other points that had been in the line before we had defined our 0‑to‑1 interval, and what is more, there had been nowhere along the line without its unique point. So the next point along must now be the last or first point on the interval in the place of the previously removed point.

Yes?

That sounds very sensible, but it does not work out. After all, what was the length of the points you removed?

Zero. Points have zero length. Not nearly zero: not more nor less than 0: exactly zero. Remember? Otherwise they are not points.

Well then, if we had contemplated a line segment say, 3 units long, then the coordinates of the end points would have differed by 3 units, including the lengths of the two end points; the length of the segment would be that of the line (3 units) plus that of the two end points, giving 0+3+0, which adds up to 3 units exactly, not even a proton diameter out. But if two points are next to each other, with 0 length of segment between, then the length that they span is 0+0+0, which still adds up to 0; by simple arithmetic their coordinates differ by 0: the length of one point.

So the coordinates differ by zero. (In formal mathematics they call zero the additive identity: adding the additive identity to anything, including adding zero to zero, gives what you started with; it does not change anything. That is exactly why they call it the additive identity.) But it follows that if the coordinate of point A differs by zero from the coordinate of point C then they have the same coordinate. But a point has nothing except its coordinates, so any number of references to exactly the same coordinates are references to exactly the same point, and no other point, though possibly under different descriptions, say for example: "two" and "the even prime" and "the cube root of 8" would be the same coordinate.

In other words, those coordinates define, not nearly the same point, not close, not next to each other, but precisely the same point: in talking of points, “next to” makes no sense unless it means the same point, which in most senses makes very little sense indeed.

So it also makes no sense to speak of the first or last point of an open interval.

There is nothing new or obscure about this; it certainly never was my invention.

Well then, let us speak of two points with different coordinates. "Different coordinates" means "not in the same place, but a greater‑than‑zero distance apart, even along a straight line". Somewhere in those two points’ coordinates, at least one digit in one of the coordinates will differ from the digit in the matching position of the other coordinate. Call the differing points A and C. No matter how close they might be, it always is possible to name another point halfway between: add their coordinates together, and divide by two. That gives a new coordinate: say coordinate B. If A is at coordinate 0 and C at coordinate 1, then B would be at 0.5 and if you repeat the operation, the new point would be at 0.25, then if we repeat the process, at 0.125 then 0.0625, 0.03125, and so on.

Very important: notice that at each halving the coordinate grows longer, requires more digits, or changes in digits. This is an example of the need for more information in finding something smaller and smaller.
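
A minimal sketch of that growth, using exact decimal arithmetic so that no digits are lost:

    from decimal import Decimal

    coord = Decimal(1)
    for step in range(1, 11):
        coord /= 2
        print(step, "bits:", coord)
    # 0.5, 0.25, 0.125, ... 0.0009765625: roughly one more digit per halving,
    # one more bit of address for each smaller interval.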

Imagine you were looking for someone in say, the United States: well, it is easy to find the country: it takes possibly eight bits of information to pick that country out of a list. But that person: where to find him at home in the whole of the US? Not very helpful! Narrow it down by telling us the state. It takes us perhaps six more bits of information to pick a state from a list: say California; California is smaller than the United States and the fourteen bits are more helpful than those first eight. Well, California still is a bit big. How about the town? Say another eight bits? How about the street? Possibly ten more bits .... and so on.

And suppose that instead of looking for that person, you were looking for a particular freckle somewhere on his skin. That might require another 20 bits or so.
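
Adding up those toy figures: 8 + 6 + 8 + 10 + 20 = 52 bits, which in principle distinguishes 2^52, about 4.5*10^15, possible targets. The individual numbers are only illustrative, but the bookkeeping is the point: every narrowing of the search costs information.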

But in these material examples, we always have a final non‑zero target, a final turtle to aim for in the stack. Looking for a final coordinate along a formally Euclidean line however, we keep getting smaller and smaller line intervals, but because the length of each of the points along that line is zero, this means that a short line contains exactly as many points as a long line. No more, no fewer.

This is something that is easy to demonstrate in even the most naïve Euclidean geometry.

It follows that formally there is no final turtle in specifying something smaller and smaller. One keeps having to add at least one more digit of information to specify it, to address it. The information doesn’t have to be binary; we get the same effect if we choose each new point one tenth of a unit away from the previous point instead of one half of a unit. Information is information, no matter what the medium or notation might be. We need not even be using information expressed in digits at all, but we still need the same amount of information to locate anything sufficiently small in a sufficiently large space.

We might use a ruler when we begin searching for a particular line segment, but looking for smaller and smaller segments, that soon would no longer suffice, and we would need say, a Vernier calliper, then a micrometer, then a microscope.

Startlingly soon, no possible instrument could be powerful enough to supply all the information you would need to find really small segments.

And apart from addressing segments, ultimately to address a specific point would take infinite information.

In no observable universe is there room for infinite information.

You might argue that the example is artificial. In looking for little things we don’t always need to write out long, long strings of numbers.

No, that certainly is true, but you certainly do need the necessary information one way or another. Information is information, no matter how you convert it.

But we select points on lines every day without all that fuss! Think of a construction on a chalkboard!

You do, do you? Fat chance!

First of all, a chalk line isn’t a line at all; it is a layer of powder with fuzzy width, thickness, length, and even mass. A chalk line is about as real as a mountain range, and about as hard to map. In fact, if you look carefully, you even can see a fringe of separate particles around the so‑called chalk line, like boulders around a mountain.

Some “line”!

Well then, let’s use a diamond point or a laser to score a line too fine for any light microscope on a polished diamond surface. We would have to use an atomic force microscope to see it.

Sorry, not categorically better. Even if your invisible scratch shifted only a single row of carbon atoms, that row still would be tens of picometres wide and deep, or high, depending on how you worked it. Formally a picometre‑long line contains as many points as an exametre‑long line. Never mind carbon, even a single hydrogen atom is some 100 picometres in diameter, and you are no closer to selecting a specific point even across your "line", never mind along its length. The point you thought you selected is no point at all; it is a great big smudge or heap or hollow.

And notice that we are not even considering quantum effects here: we are behaving as though atoms were neat, clearly defined spheres. We are ignoring practical problems such as the floppiness of material on microscopic scales, or their constant stretching or shrinking or vibration in response to changes in pressure, Brownian movement, or temperature.

In other words we are assuming magic, and even the magic isn’t helping us here.

All right, so who cares? What possible practical difference could that make in our lives? So the stripe I draw isn’t a line and the dot I choose isn’t a point. So the real point would require infinite information, but I only get a few bits worth of information, so what? Aren’t a few bits all I need?

Not necessarily. The point is that, no matter how valid his assumptions might have been in formal terms, in physical terms Euclid was wrong about being able to choose a point on a line: physically no true point exists, and no true line either. And if we could choose a single point magically, we never could get back to that same point again without more magic. Watch to see how that destroys the very concept of determinism.

As we shall see, this arises from confusion of formal with applied, empirical principles — everyday reality.

Where we use mathematics to deal with physical realities, we perpetrate a fiction. We pretend for the sake of convenience that the entities we work with are mathematical objects, whereas they are not; the point that we make with our pencil on the paper is a pile of graphite or similar pigment. To be useful to us it must behave sufficiently similarly to the mathematical point. We speak as though we were dealing with an isomorphism, which it is not; it is a plesiomorph: something sufficiently nearly of the same logical form to suit particular needs.

For example, I calculate where the centre of a circle is on my drawing board, or construct it with straight edge and compass. Then I mark it with a fine pencil. Is that mark the centre? Mathematically no. The centre is a point, not a pile of graphite. Is it an approximation? No. An approximation is a figure that approaches a desired result with desired precision: so many decimal digits. The spot of graphite (if you worked carefully enough) may cover the mathematical centre of your notional circle, but it is a picture, a fiction, much as the drawn circle also is a fiction. The pencilled dot has area, has volume. But as a fiction, a plesiomorphism, it commonly is good enough for drafting purposes.

In the set of natural numbers or ordinal numbers one can arrange things to be physically meaningful, because one only needs enough information to distinguish one value from another, even if the values barely exceed Planck dimensions, but as soon as one deals with a continuum, such as a line, area, or volume, there is nothing that can physically identify a point as distinct from every different point, because the required information is infinite.

 

Nothing Determined

Synergy is the only word in our language that means behavior of whole systems
unpredicted by the separately observed behaviors of any of the system’s separate parts
or any subassembly of the system’s parts.
There is nothing in the chemistry of a toenail that predicts the existence of a human being.
Richard Buckminster Fuller

Let us magically construct a laboratory. It is isolated from all vibrations or sources of noise. Except for precisely vertical weight, all gravitational effects, including tidal effects, including the tidal forces of experimenters’ heartbeats, are neutralised by precisely distributed masses or whatever might be most appropriate in principle.

In a perfect vacuum in one chamber we have a perfectly hemispherical, massive, rigid body, immune to abrasion, vibration, dust, and impact. Suspended exactly above its centre in terms of the local gravitational field we have a perfectly spherical and symmetrical ball of similar material. Around the chamber the masses we have distributed would neutralise any gravitational attractions other than directly up and down.

Remember, this is magic; we could not do anything of the kind in practice!

Furthermore, we ignore our inability to make a perfectly spherical ball in physical fact, because all real materials of visible sizes, are made from atoms, and that fact forces them to be submicroscopically lumpy.

Now, we release the suspended ball, and it hits the convex surface below and bounces vertically above the unique Euclidean point where the tangent plane above the lower ball and below the bouncing ball is exactly horizontal. According to Newton’s F=ma it cannot do anything else, because anything else would require some sideways force to make it strike anywhere else. And it will keep bouncing up and down on the spot indefinitely till it runs out of restitution, after which it will remain balanced stationary on top of the lower ball.

Right?

In fact we could get swanky with our magic, and could equally well balance a vertical stack of rigid, frictionless balls above each other without their rolling off — if they all were of the same mass, the effect would be the same, with only the top ball bouncing. If you find such ideas amusing, you might like to imagine what would happen if their masses differed, or if you bounced more than one ball at a time in the same stack. Newton's cradles would pale in comparison to our virtuosity.

Of course, anyone with a knowledge of Quantum Theory will be muttering about Planck’s constant and Brownian motion and so on, but remember that we and our magic can afford to ignore those.

All the same, magic or no magic, some facts remain. Mathematically speaking, when one ideally rigid ideal ball ideally free of horizontal forces, rests or bounces on another ideally spherical surface, there is precisely one Euclidean point at which it can balance without rolling off or bouncing away. No matter how little it wanders from that point, or in which direction, it will on every subsequent bounce wander further and further and faster and faster in the same direction away from that point. In real life, you will be doing well if you can get a steel ball to bounce on a lower ball even twice, let alone three times. To get it to bounce repeatedly and come to rest on the lower ball, is literally (not figuratively) impossible without interference, even with just two balls, one above and one below.
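
A toy sketch, not a physical simulation: assume, purely for illustration, that each bounce multiplies any lateral offset from the balance point by a constant factor, the curvature tilting each rebound a little further outward:

    GROWTH = 3.0         # assumed amplification of the offset per bounce
    BALL_RADIUS = 0.01   # metres
    offset = 1e-12       # metres: about a hundredth of an atomic radius

    bounces = 0
    while offset < BALL_RADIUS:
        offset *= GROWTH
        bounces += 1

    print(bounces)  # about 21 bounces before the error is as big as the ball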

As for a vertical stack of multiple balls ...

What is more, the direction in which the bounces finally would diverge, would be genuinely random if the experiment as described is competent and honest, with even a little magic.

Consider: we said we had managed it by magic. For the first bounce at least. This means that when it comes back it must hit that same point exactly. Exactly, not approximately, meaning precisely, mathematically, zero deviation.

But if we only assumed magic for the first bounce, then by now we have run out of magic, and we would need more magic to hit that Euclidean point again, to hit it even once more, because we also know that to hit any one point with another point, we need infinite information.

For which our observable universe has no capacity, remember? Not without magic anyway.

So on the first return the impact will be off centre by anywhere from zero to some small fraction of an atomic radius. Exactly how far and in which direction we cannot say, because we would need information we do not have to make even the slightest guess with anything better than zero justification.

The very fact that the information is lacking (it is in fact non‑existent in any meaningful sense) means that when the path of the ball deviates, the deviation must be random, because non‑randomness in any respect would imply matching information. Given any relevant information, it becomes possible in principle to predict the outcome to some matching, but limited, degree of precision.

That is why precise selection of any unique point is not possible even in a notionally non‑quantum, non‑atomic world: not possible to humans, and not possible to whatever amounts to the nature of our world.

Of course this whole exercise is a drastic over‑simplification, because in real life there are too many sources of noise, some notionally (meaning “not really”) deterministic, most not even that. That is part of the reason that we need magic for our exercise, though it is not the fundamental reason.

The “observer effect” in Quantum Mechanics (which, from the first time I heard of it as a youngster, I have been convinced has nothing whatsoever to do with anything like consciousness in an observer) is only one example of noise. The atomic nature of matter, with the Brownian motion of particles and the granularity it implies, is a related example. Not even to mention vacuum fluctuations.

But for reasons that I hope to establish, I ignore such things even though in practice they dwarf the effects that I discuss.

The conceptual effects of the principle are dramatic, even though they are not easily observable in practice. More than two centuries ago Laplace showed that a literal application of Newtonian mechanics implied that every motion in nature, including all the motions in the entire universe, would in principle be perfectly reversible, so that if we magically reversed the momentum of every particle in the universe, time would in effect begin to run in reverse from that instant. This is of course something of a parody, because it does not allow for various forms of hysteresis, phase changes and symmetry breaking, but there are other, more fundamental considerations as well, some of which I shall discuss.

Remember in particular the impossibility of specifying a formal mathematical point: it would require infinite information just to determine a future straight‑line trajectory of any particle, never mind its magically reversed trajectory. Secondly, the magically reversed trajectory would not be the reverse of the original forward trajectory, because that too would equally necessarily require infinite information.

All this repeatedly implies that the formal, classically Euclidean and Newtonian predictions of trajectories are no more genuinely representative of real trajectories than the lines we draw on a paper are representative of Euclidean lines with zero thickness and selectable points of zero length. They are not even metaphors, arguably not even abstractions; they amount to impressionistic pictures, or at best maps.

And proverbially, the map is not the territory.

One of the consequences of all this so far is that references are not perfectible, and therefore meanings are not perfectible either, because they depend on references: we cannot identify anything perfectly, because perception is mechanism; we cannot conclude explanations; and meanings can only be as secure as teaching or learning permits.

Also, the quest for fundamentals is grounded in the process of teaching and learning. We learn QM from observations of the real world and, to the extent that we can, we explain the real world largely in terms of QM. Similarly, we try to derive our conceptions of the fundamentals of the real world from our observations, and then explain the real world in terms of those conceptions. Such a process cannot formally demonstrate indisputable fundamentals; our conceptions can only approach fundamentality to the extent that they are the best and most effective that we have.

From our observations of the world and our interpretations and preconceptions, we arrive at impressions and notions of determinism and probability; we can think of them as patterns, and we hold to these notions and patterns for as long as they remain our best working hypotheses. The idea that the world actually is deterministic or probabilistic is an intellectual convenience. It is as close as we have gotten so far, to perceiving the world as it actually is, always assuming that it actually is anything that we actually can perceive. This is as good as it gets, as far as we have been able to tell so far.

 

Determinism, Information, Time's Arrow

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.
Omar Khayyam

By now I think I have said more than enough about Brownian motion, QM, vacuum fluctuations, and finite information, to dispose of the idea of true determinism altogether.

However, the way the world works entails a good deal more than determinism. Certainly our world has chaotic aspects, to the extent of justifying “chaos theory” as a special branch of science; all the same, the way it works brings forth a great deal of order as well. There is enough of such order to support impressively useful practical predictions. In particular, in spite of the howls of certain holists, even though we would need to know everything about the universe to know everything about anything at all, we do get on fairly well, just knowing very little indeed about anything at all.

We can plan, we can build, we can aim, we can adjust and correct, and we can achieve results that generally satisfy us as being what we were trying to achieve, whether it was to build the skyscraper, or pot the billiard ball or hit the bullseye or toss the die. Not everything requires infinite precision; ten or twenty bits will suffice for most everyday purposes. For golfing and fishing stories even ten bits will usually be excessive.

Now how does this come about? People speak airily about "approximation" as if the word explained everything. Actually, apart from not explaining anything, the word commonly is not even used correctly at all. But what is it about our world that permits vaguely causal results to have even vaguely satisfactory outcomes and successful predictions?

Consider a billiard ball hanging peacefully in its pocket. That was the state we had aimed for, was it not? Aiming carefully with the cue at another ball, we had caused it to follow a planned path so that it struck this ball that in turn followed a planned path that deposited it into that pocket and nowhere else.

This is not easy to argue consistently, but our difficulty arises from our simplistic view of the problem. As a rule we assume that there are two possible states: ball‑in‑correct‑pocket, and ball‑not‑in‑correct‑pocket. The fact is that "a billiard ball in the pocket" is not a well‑defined unique situation, any more than a dead (as opposed to a live) cat in a box is a well‑defined unique situation, or even class of situation. Physically there are indeterminately many distinct ways for the ball to be in the pocket (such as which way up it lies, and which fibres of the pocket it is in contact with) and even more distinct states for it to pass through in getting into the pocket. Any of these would suit the ball‑in‑pocket case, but they form a set of acceptable outcomes, not a definable unique case. They differ from a spin‑up or spin‑down electron in a given orbital of a given, isolated atom in a given location. We could regard the spin as a binary case — a case of precisely two possibilities. Even that is simplistic, but it is close enough, compared to the macroscopic cases that involve something like ten to the power of tens of particles in particular states.

All the same, there are indefinitely more ways for the ball to miss the pocket than to land in the pocket. Otherwise there would be little point to such games at all.

And the amount of information required to pot the ball, is related to how many times more ways it can go elsewhere than into the pocket, rather than how many ways it could rest in the pocket after a successful shot.

It might be easier to visualise instead a game of darts in which the winner is the first to put a dart into the inner bullseye. Again there is no simple limit to how many states there are that comprise a winning throw, but there are many times more ways to perform a losing throw than a winning throw. And the more sound information one can apply to directing the throw, the smaller the probability of a losing throw.

Now, the control of the transition from one state to another depends on the available information and energy. Energy we can ignore in this toy exercise, but information is of the essence. The required choice is any one of the winning states. If there were no information available to direct the shot, then the probable outcome would be a losing state, and in most games there would be many more losing than winning states.

However, an effective player generally can apply some information to direct his shot, and the result would be to bias his result towards a winning shot. Let us suppose that he needs one bit of information to hit the board at all; then the chances are equal for any point on the dartboard, with say, roughly one throw in 4000 hitting the bullseye. But then suppose that a few bits more would improve the probability of getting near the centre, giving a normal curve centred on the bullseye. Immediately the bullseye becomes the area with the highest frequency of hits of any similar size of area on the board, even though an actual bullseye hit still would be fairly rare.

The more bits of information that affect the precision of the shot, the steeper the normal curve and the sharper its peak. My description and assumptions are too vague for detailed prediction, but depending on the physical details, somewhere near ten or twelve bits should be enough to put most of the darts in the bullseye.
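
A minimal Monte Carlo sketch of that claim; the board radius, bullseye radius, and the rule that each added bit halves the scatter are all assumptions for illustration only:

    import math
    import random

    BOARD_R = 170.0  # mm, assumed board radius
    BULL_R = 2.0     # mm, assumed bullseye radius

    def bull_rate(bits, throws=100_000):
        sigma = BOARD_R / 2 ** (bits - 1)  # one bit: scatter spans the board
        hits = sum(
            math.hypot(random.gauss(0, sigma), random.gauss(0, sigma)) <= BULL_R
            for _ in range(throws)
        )
        return hits / throws

    for bits in (1, 4, 8, 10, 12):
        print(bits, bull_rate(bits))
    # the bullseye rate climbs from roughly 1 in 14000 towards near-certainty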

Now, for most purposes that sounds very persuasive, and we count each dart as landing on a particular point, preferably in the bullseye, ideally a circle of about four millimetres in diameter, but the reality is nothing of the type. The dart's "point" is actually about two millimetres across; the player is trying to get the rough circle where the point lands to overlap the larger rough circle of the bullseye by a sufficient margin, and to get the dart to strike at an effective angle with effective force. Among all the possible ways for the dart to strike, the smaller the proportion of acceptable ways to land, the more information one needs for an encouraging chance of a winning throw.

In some mathematical descriptions we speak of the set of values that might be acceptable or unacceptable (or of any other relevant parameter) as the "spaces" that overlap or not, as the case might be.

An old joke is that of a desirable girl in the middle of a large floor. An engineer and a mathematician are offered the opportunity to approach her, but each minute they must just halve the distance between themselves and the girl. The mathematician indignantly refuses, because any fool can tell that 0.5 to the power n never reaches 0. The engineer accepts eagerly because after a few minutes he will be close enough "for all practical purposes".

Ilya Prigogine was one of the most cogent exponents of the concept of an arrow of time. I have repeatedly been taken aback to find how seldom his name crops up in such discussions. Granted, he did not have much to say about clocks in particular, but he did show the nature of irreversibility in physics, which intrinsically implies an arrow of time.

Add the tocks and you get the clocks.

Now, one version of the tocks is the ubiquity of events (a tendentious remark, given that some definitions of "event" are a lot more constrained, but I use the term here to mean anything distinguishable that can result in a change in entropy). In other words, wherever anything can happen, time "flows", "passes".

But even in "empty space" vacuum fluctuations happen. They might (or might not) involve entropy, but I speculate that their happening could be enough to make time pass even in an otherwise empty universe. Time does not need a clock to pass, any more than a river needs a flow meter to flow.

And even if vacuum fluctuations won't cut the mustard, in a universe as full as ours, there could be enough happening in the universal increase of entropy to keep time on the go. Anywhere that the physical effects of an event could be experienced as information, time would pass, whether quantified or quantised or not.

Like the quantum physicists, I am as exercised as ever about how the time of quantum mechanics can be reconciled with the notion of time as the fourth dimension in Einstein’s general theory of relativity, but I'll watch this space.


Remark

At the following link there is an article on a topic related to this one. It has a long string of comments at the end, including several of my own:

 https://www.quantamagazine.org/does-time-really-flow-new-clues-come-from-a-century-old-approach-to-math-20200407/