Tuesday, May 24, 2022

Hilbert's Sixth Problem; Some Thoughts Arising


Abstract

Introduction

And in conclusion...

Primitives, entities, emergents and theorems.

Questions and question begging

Axiomatics and assumptions

On the physical nature of formal disciplines

On empirical physical science as an algebra.

On mathematics and axioms as applicable to physical reality

On physical axioms and underdetermination

Randomness and information

How much axiom is enough?

How much axiom is too much?

Hilbert, Axiomatics and Meta-axiomatics

Formal and Empirical Science

Information, Observation, Measurement, and Description

Formal and Physical Mathematics

Finally...

Reference:

 

 

Abstract

In Hilbert's list, problem number 6 (Hilbert-6) was the only one that strayed beyond formal mathematics, encroaching on empirical physics. That arguably was the main reason it suffered from its timing: unforeseen twentieth-century progress in physics, information theory, axiomatics, and more cast doubt on its conceptual basis. Worse still was the dissonance arising from its question-begging formulation and its unclear motivation; it is hard to imagine the limits of the scale, substance, and application of Hilbert-6. On reflection, it raises topics that deserve attention in applied philosophy of science. In the context of empirical physics, this essay discusses the relevant nature of axiomatics, primitives, emergents, and physical analogues of formal algebra. It suggests the interaction of physical entities as the embodiment of a physical algebra that, by its very nature, might guide the conception and application of relevant axiom structures. It proposes concepts related to Hilbert-6, its scale and relevance, and the implications of extending the concepts of formal theory into empirical fields.

 

Introduction

It is a good rule in life never to apologize. The right sort of people do not want apologies,
and the wrong sort take a mean advantage of them.
P. G. Wodehouse The Man Upstairs

 

Though I had known of Hilbert's list of problems by reputation, I am neither a mathematician nor a philosopher, and had had no idea of the content of his problem six until I was invited to produce some thoughts on the subject. This essay is a subsequent overview of the points that arose in examining its implications. It combines two essays of mine that were published as contributions to an online symposium:
Oleg Vorobyev (ed.), Proc. of the XX FAMEMS’2021, Krasnoyarsk: Siberian Federal University Press, ISBN 978-5-6045634-1-0

For brevity, I call this problem "Hilbert-6". My ignorance was such that I had to read introductory material on the topic before I could respond, and I was surprised to see that it dealt with a class of problem in physical science and philosophy, arguably of epistemology, rather than mathematics in any narrower sense. In particular it was the only one of Hilbert’s twenty-odd problems that was physical rather than mathematical.

This was not much comfort, because I am no epistemologist or physicist either, but friends suggested that I might as well think on the matter, because any reactions I produced need not be published if they amounted to nothing good and new.

Unfortunately my ignorance of the field guarantees that most of the new things I produce, good or not, will turn out to be old to anyone but myself, but I cannot afford to research every notion that merits discussion, so I elect to leave it to the reader to identify which of my discoveries are rediscoveries and which are outright blunders. This is one reason why I present so few links to published material: I have not read much.

One thing I do not apologise for, however, is the informal tone of the essay; it implies neither disrespect nor lack of commitment.

 

 

And in conclusion...

`Let the jury consider their verdict,' the King said, for about the twentieth time that day.

`No, no!' said the Queen. `Sentence first--verdict afterwards.'

Charles Dodgson

Speaking as a layman in the field, my impression of Hilbert-6 is that it was atypical of Hilbert’s problems in that it was outside his primary field of expertise, which I see as principally mathematical. Perhaps for that reason, though his statement provided a fertile stimulus for discussion of a wide range of concepts and implications, it was internally vague and arguably incoherent.

He failed to clarify what his intended field of application was, or how to determine his primitives, operations, objects, or objectives. He spoke in terms that suggested that he believed that physical measurements or descriptions could in general correspond exactly, or at least arbitrarily closely, to physical realities, which in fact they cannot. He gave no indication of recognising the concept of the intrinsic limitations to available information, nor of their implications in terms of plesiomorphism, a concept that I discuss later in this essay. Personally, I have no idea of how to approach the construction of a mathematics of plesiomorphism, but I think something of the kind would be necessary before we could expect anything like an axiomatic structure that would satisfy anything like his inspiration for Hilbert-6.

I suspect that if (impossibly) Hilbert had been sufficiently clairvoyant to predict the scientific advances in the five years following the publication of his problems, let alone the following fifty years, he either would have reworded Hilbert-6 or abandoned that problem outright.

Irrespective of anyone’s reservations and criticisms, however, Hilbert-6 remains of value as a stimulus to lines of thought much neglected in publications on the philosophy of science.

Primitives, entities, emergents and theorems.

If you simplify anything to the point that the simple-minded fail to understand it,
they will insist that you have made it more complicated

The fundamental nature of an algebra might or might not be regarded as a primitive, but because I use the term in this essay, I need to establish the concept early. As I see it, any algebra may be described or defined as a set of objects (primitives?), plus a set of operations (also primitives or algorithms derived from those primitives) that may be performed on those objects. (This definition is not my invention, but I like it.) Within any particular algebra, the underlying axioms, and theorems based upon those axioms, must define, or at least constrain, the nature of its objects and operations.
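To make the "objects plus operations" picture concrete, here is a minimal illustrative sketch in Python (my own toy example, not anything Hilbert proposed), taking the residues modulo 5 as the objects and modular addition and multiplication as the operations; the axioms appear only as properties, such as closure and associativity, that we can check over the whole set.

```python
# Toy illustration of "an algebra = a set of objects plus a set of operations".
# The objects are the residues 0..4; the operations are addition and
# multiplication modulo 5; the "axioms" show up only as checkable properties.

from itertools import product

OBJECTS = range(5)                     # the set of objects: residues 0..4
OPERATIONS = {
    "add": lambda a, b: (a + b) % 5,   # operations defined on those objects
    "mul": lambda a, b: (a * b) % 5,
}

def closed(op):
    """Closure 'axiom': every result is again one of the objects."""
    return all(op(a, b) in OBJECTS for a, b in product(OBJECTS, repeat=2))

def associative(op):
    """Associativity 'axiom' for a binary operation on the objects."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(OBJECTS, repeat=3))

for name, op in OPERATIONS.items():
    print(name, "closed:", closed(op), "associative:", associative(op))
```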

In Hilbert-6 the objects of our algebra of reality are not only poorly defined as yet, but indefinitely varied in principle. The definitions or descriptions that I offer here are for convenience and accordingly are informal. My hope is to reduce confusion, because no class of the objects of interest is uniformly defined, nor does any such definition escape criticism, or outright rejection, by at least some philosophical schools.

By a primitive, I mean a term or concept that in context cannot or need not be defined in simpler, more general, or more elementary terms. In formal topics, axioms commonly specify formal primitives. Whether axioms too are primitives, and if so, in what way, I leave as an open question. I am uneasy about defining primitives, and more than uneasy about a notation for their definition, especially in axiomatic form, but in this essay I might vaguely discuss some ideas concerning primitives.

As I see it, at least the concept of a primitive is necessarily relevant in any process of reasoning, whether formal or physical, though I am unsure of any particular physical primitive; in fact I am uncertain that it is possible to demonstrate any compelling scheme of physical primitives in which the only true, logically independent primitives are inarguably identifiable. I propose, as a tactic of convenience, that "primitive" is itself a primitive.

Note that I cannot even assert that there are such things as ultimate physical primitives — the concept might not be meaningful in context. Personally I cannot imagine how a physical universe without physical primitives could exist, but I cannot assert that my failure of imagination proves anything. However, for the purposes of this essay I conjecture that some forms of physical primitives are in principle definable in our universe, and that any fundamental physical algebra would have to be expressed in terms of such primitives.

The concepts that I discuss in this section I do not so much propose as muse over. I am reasonably sure that any of the entities I discuss here as primitives might not be true primitives at all, and that some that appear as primitives in particular schemes will turn out in other schemes to be derived, or will swap ranks with other concepts that seem derived here but could be regarded as primitive instead. Commonly, in a particular universe of discourse, we find examples in which we have a choice of regarding P as a primitive and D as a derivation of P, or the other way round.

Note that several of the primitives I propose are not mutually independent, and their interrelationships are arguable, often self-referring, reflexive, or recursive; one must be cautious in trying to apply logical axiomatic purism to an empirically realistic physical system. Russell might not have liked self-reference, but physics is not committed to gratifying his conceptions or his failures of conception. The likes of the works of Smullyan, both in some of his puzzles and in other publications, and of Priest in his paraconsistent logic, together with more recent paraconsistent disciplines under consideration, might be necessary in this field, largely obsoleting some of Russell's work.

Even in axiom structures as simple as those of Euclid, or their variants, one can create sub-disciplines by adding axiom subsets, such as whether line AB can be defined as distinct from line BA, or what "betweenness" of points on a closed curve might mean. Consider, in ordinary Euclidean geometry, problems of tessellation, such as how many ways pentagons or Penrose "kites and darts" can tessellate a space. Such items have led to many refractory problems in formal fields alone. Why then assume, once we leave the formal disciplines, that a single axiom structure could embrace all of physics, when physics embraces the concepts of information and thereby embraces mathematics?

Instead of pursuing this range of topics in this essay, I suggest that interested parties unfamiliar with the material, begin their search by inspecting links in Wikipedia that have "paraconsistent" in their titles, and exploring in https://plato.stanford.edu/ with search entries such as "Priest", "Smullyan", and "paraconsistent".

Even in formal systems such notions are open to debate. Consider the two basic sets comprising any algebra, for example: is the set of operations more primitive than the set of objects? Are such primitives even distinct? The field is intriguing, but I must abandon the most fundamental questions for lack of time, space, and my own mental capacity.

However, Hilbert hardly did better in this connection: in what I read, such questions are not clarified, partly because various fields had not yet emerged at the time he wrote. However, even a century later, it is unclear that we are better able to mend all the deficiencies. I increasingly suspect that the question itself is ill conceived in the form that he posed it. But I proceed on the speculation that we might profit from experimenting with some of the concepts that arise.

I do all the same insist that if Hilbert-6 is to be taken seriously, together with the axioms and algebras he conceived or implied, the necessary primitives, objects, and operations must in principle be definable, so I do discuss aspects at some length, however unoriginally I do so.

I suspect that "entity" might be a primitive, though perhaps we could define entity in terms of information, which also I suspect might be a primitive in physics, together with energy/mass and relationship/state. As I see it, information is any aspect of any state that distinguishes between possible states. An entity then is anything that can be thought of as a thing, in any sense that can be discussed in context. An entity might be atomic, such as perhaps a neutrino might be, or it might be purely notional, such as the concept of a unicorn, or it might be tomic (divisible, such as a diamond), or abstract (as a number, finite or infinite, or a rule) or collective (such as a cloud of condensed droplets, or a category of concepts) or an observation, such as a smell or a trajectory. In what senses one might call physically indistinguishable objects entities, such as electrons sharing an orbital, I leave as an open question, and a concept subject to context.

Emergence I see as a possible primitive. Now, emergence, as I use the term, I define informally for my own purposes, because, though I am convinced that my definition is sound and powerful, I am at odds with most of the works I have read, and the concept as I use it is important in our context here.

We encounter emergent effects and emergent entities wherever we put entities together into mutual relationships with each other, or commonly even when we change the mutual relationships of entities by rearranging them, thereby producing entities other than those there had been before, or by introducing different entities into a relationship. We correspondingly may produce new entities when we separate entities from each other, whether physically or conceptually.

Emergent effects, as I use and define the term are precisely those that we could not get without:

  • adding entities or putting entities together or separating them, or considering them together or separately
  • in general changing the interrelationships of entities in ways that produce effects other than the effects to be expected from those entities in isolation.

Emergents accordingly amount to a class of theorems in the algebra of physical reality, or possibly an extended level of axiom, much as we can create the concept of a line or a plane in geometry by adding as axioms the assertions or definitions that we can place a point that does not coincide with a given point, or that does not lie on a given line or a given plane, and so on, creating an emergent dimension each time.

What is more, we commonly get differently emergent effects by combining or splitting or rearranging identical sets of entities in different ways. In short the concept of emergence deals with aspects of:

  • the mutual relationships of entities and their combined effects in those relationships, and
  • the information that determines the relationships between the component entities and arguably also: 
  • the relationships between entities and the rest of the universe.

Note that all this has implications for the concept of existence as a primitive; the criterion for the existence of an entity could be that its presence places constraints on the behaviour, or even the existence, of particular classes of other entities. For example, a solid object existing in the path of another solid object will prevent it from maintaining its trajectory, and an electron with a known spin in a given orbital will constrain the spin of any electron added to that orbital; these are things that non-existent entities could not do.

Now, physics in our world universally produces emergent effects of indefinite types and scales. Isolated water molecules have certain patterns of behaviour; multiple molecules close together behave as a gas or vapour, and large numbers of water molecules in condensed contact behave in ways that cause emergent effects such as droplets or crystals. The behaviour of small droplets includes the emergent class of behaviour we call surface tension, which imposes a roughly spherical shape and dominates the behaviour of small droplets in isolation. The shape of larger droplets may be dominated increasingly by gravity. In really large masses we get oceans or (notionally) planets, and we deduce that water molecules at the core of a liquid planet of, say, Jupiter mass must behave differently, where extreme pressure would decompose molecules of H2O.

Increase the mass to stellar scales, and the emergent behaviour includes nuclear fusion.

How can I justify calling such changes emergent? 

By observing that in each case the behaviour on the next larger scale is not easily predicted by extrapolation from the previous, smaller scales. One can point out that a proton consists of quarks and gluons, but without their being combined into a proton, quarks and gluons are not protons and do not behave like protons. To ignore this would fall foul of fallacies of composition or division. By my definition therefore, protons, once assembled, are emergent. Add electrons, and we get emergent hydrogen, which we did not have when all we had was a cloud of protons: hydrogen atoms and molecules behave drastically differently from free protons, and free hydrogen atoms behave differently from hydrogen molecules. More elaborate combinations of elementary particles might produce helium or carbon nuclei, and these in turn might produce large molecules such as fullerenes and diamonds. At each level we have more emergent effects.

Yet further emergent effects include planets, and occasionally, after a few billion years, life. Life produces progressive emergent effects, from pre-cells and prokaryotes, to eukaryotes and colonies, to metazoa and metaphyta, to colonies and societies, and still later, to intelligence of various sorts; each step emergent, each product placing constraints on entities at previous levels of emergence.

After that the complications grow. Everything so far happened either predictably from interactions of primitive components, or from the derived emergent entities: such derived emergent entities we might call meta-primitives. Meta-primitives emerge when less-derived entities interact in ways analogous to the way that the original primitives combined to form units that could be regarded as higher levels of primitives in their turn.

Now, there is less to this and more to this than at first meets the eye. In the proton we can recognise its component particles and their quantum interactions. In our hydrogen similarly, the component proton with its electron. Add more complex matter, all the way up to our largest atoms, and the same principles hold. In fact they still hold when we are dealing with molecules. To be sure, we see things that we might not have imagined when first looking at quarks and leptons, but we still see the quarks and leptons etc. They do not go away, and none of them behaves in any way inconsistent with their primitive nature (to the extent that we can guess at their being primitive of course).

And similarly, the hydrogens in a snowflake are exactly like the hydrogens in the solar wind, in ocean water or in an elephant; and their differences are in their circumstances, not their primitive nature; swap any two of those hydrogens, and the flake and the solar wind and elephant remain unchanged.

And yet that flake can be handled as a unit, an entity in its own right, a meta-primitive, to coin a vague term; the flake is not just the same as the same number of hydrogen and oxygen atoms, or even the same number of water molecules. I can take congealed lumps of a thermoplastic such as a polyethylene or polyester, and stretch them into filaments without changing either their ultimate primitive components or their immediately proximate molecular structures; the stretching orients the molecules into extended parallel arrays. This produces remarkably different emergent characters in the material: the original lumps are isotropic, wax-like, behaving similarly no matter from which direction one approaches them, while the filaments are many times stronger along the axis in which they were stretched, and many times weaker at right angles to that direction.

Emergently, they are radically anisotropic.

And still their ultimate primitive components have not changed one bit. Not their molecules, their component atoms of hydrogen, carbon, and oxygen, not their electrons. You might like to regard this principle as analogous to the structure of a quaternion or octonion, in which the structure has various attributes of its own, but all its terms still can be decomposed into real numeric values or variables.

This last point, together with the example of extra points permitting the emergence of extra dimensions, starkly illustrates another important aspect of emergence: it applies to formal entities as well as physical entities. From that we can infer that emergence is essentially a phenomenon of information in physics or perhaps physics of information.

Clearly the modes of interaction of combinations or structures that comprise emergence have attributes all their own, attributes that emerge from their relationships, attributes that may be different from, and often meaningless in terms of, the attributes of isolated primitives. And when we regard the emergents as meta-primitives, it is for the sake of the attributes of the meta-primitives rather than the attributes of the ultimate primitives that we conceive them. We do not generally regard a lump of polyester as being the same as a rope of polyester, any more than we regard a triangle in terms of points and lines, even though the concepts and natures of triangles and lines do emerge from say, Peano's geometric axioms concerning points.

Again, in a zero- or one-dimensional space, it is meaningless to speak of area or volume, or even to speak of zero area or zero volume or of triangles, though in a three-dimensional space it is emergently valid to speak of the zero volume or zero area of a point or line.

Predicting all the attributes of the meta-primitives and of the structures finally derived should be conceivable in principle, but as I shall demonstrate, it is not possible in general, not even in principle, and certainly not in physics. However, even before giving ultimate reasons, the difficulty of using our notional algebra of physics to predict physical outcomes may be informally demonstrated. We soon find ourselves relying on branches of algebra that are hardly comprehensible in terms of our ultimate primitives.

Another primitive might be event. I propose that an event is the application or the effect of the application of at least one operation in the algebra of physics on some object, some entity, in that algebra. I suspect that a universal, possibly logically definitive, attribute of an event must be an increase in universal entropy, but I am uncertain of this. One essential aspect of an event, possibly approaching the development of an axiom, that I propose is:

No matter at which level it occurs, any event occurs ultimately at the level of participating primitives; an event never violates a primitive. Any other event amounts to fiction or abbreviated characterisation or discarding of information. No love affair, or loss of a chess game, or quarrel about a spelling error, that leads to a world war, can change the charge on a single electron.

And if we combine events, we get processes.

These are just a few examples of entities in the context of primitives, and Hilbert-6 must account for them all in proper contexts, if it is to achieve a coherent objective algebra of physics.

 

Questions and question begging

"Who art thou that weepest?"


"Man."

"Nay, thou art egotism. I am the scheme of the universe.
Study me and learn that nothing matters."

"Then how does it matter that I weep?"
Ambrose Bierce

 

On reading his statement of his sixth problem, my first reaction was that Hilbert's timing was unfortunate. Though deservedly influential, and having great insight, he necessarily was a man of his time, and in particular at the time of writing he was without benefit of the advances that were to follow his 1900 list. Accordingly he wrote without the benefit of such topics as quantum theory, general relativity, Shannon-type information theory, or Prigogine's views on dissipative systems. And as I see it, one of the most unfortunate gaps in his arguments arose from the fact that he was not yet in a position to read the works on underdetermination that Duhem published around 1905-1910, not to mention the subsequent remarks of Quine in the 1950s and 1960s.

So far, nothing new, but as I read his problem 6, it seemed to me to involve a good deal of question begging in the form of assumptions concerning ill-defined concepts. I work from a translation of his problem 6:

The investigations on the foundations of geometry suggest the problem: To treat in the same manner by means of axioms, those physical sciences in which mathematics plays an important part; in the first rank are the theory of probabilities and mechanics.

 

Now, certain implicit classes of assumptions arise immediately; consider for example:

  • We have at present no clear prospect of ever achieving strong proof of anything physical by scientific methods, either formally or ultimately, nor do we materially need such narrowly defined proof, as long as our predictive and explanatory powers and successes keep increasing.
  • The first assumption in Hilbert-6, as I read it, is of the nature of the universe in its relationship to our observation and comprehension. There are various ways to assign and interpret these, but I suggest that those most simply compatible with the wording of Hilbert-6 would be some versions or developments of what we currently call "scientific realism". These largely take the existence and physical nature of the universe for granted, and take for granted that what we observe of that universe, may be taken functionally as the basis for our interpretation of the universe itself, its underlying reality.
  • Personally I accept that assumption of reality as the basis for Hilbert-6, and for physical science in general. Still, I re-emphasise that science as we currently understand and practise it, whether or not according to scientific realism, cannot foreseeably prove the truth of any of our hypotheses about our universe formally; we can do no better than formulate hypotheses in the light of our observations, whether by deduction, induction, abduction, or superstition. The best evaluation of our success at any point, is the conceptual coherence and explanatory power of those of our views according to which we select our most cogent hypotheses as our strongest working hypotheses.
    We select our working hypotheses largely according to their success in prediction of future observations. We never know whether our repertoire of proposed hypotheses includes any that are ultimately correct; we might need to await future mental breakthroughs even to understand any ultimately correct hypotheses. If any of us could travel back in time to Newton's day to apprise him of quantum theory or general relativity, the time traveller might not find his recipient to be at all receptive.
  • Hilbert's wording: "... those physical sciences in which mathematics plays an important part..." suggests hedging. I suspect that he was beginning to realise that he was running into trouble with an earlier, more robust, version of the idea. There is no branch of physics in which mathematics is unimportant; one might almost as well speak of "... those branches of arithmetic in which mathematics plays an important part...". In physics any work that is based on flawed mathematics is unavoidably invalid to that extent at least. So I insist on treating Hilbert-6 as dealing with all of physical science. Having affirmed that, I note that I will have some very damaging observations on such points as underdetermination.
  • For Hilbert-6 to be meaningful, any physical science it deals with or expresses or supports in axiomatic form, must conform empirically to prediction and observation. If it cannot conform to the empiric world with sufficient precision and probability, then how could any such project as Hilbert-6 be of greater interest than any other abstract formal thesis? It would have no cogent force in any physical science.
  • Wherever we encounter particular classes of antecedent matter, information, or explanation in physics, and accordingly in physical sciences such as biology, concepts of implication or consequence arise. Such concepts include causality in terms of causal chains or webs of events, though they need not imply determinism. The concepts constrain the classes of material outcome that may follow particular antecedents. Far more weakly, they constrain the classes of antecedent events that eventuated in particular observed outcomes. It is for example easier to produce a mushroom cloud from a suitable ball of actinoid metal, than to compress a mushroom cloud into a ball of actinoid metal, or in general to decrease the net entropy of the universe.
  • From such antecedents in turn we conceive assumptions concerning such concepts as time and dimension, their existence, nature, and meaning. However, whether such things truly are primitives, or what the ultimate primitives of physics might be, is a problem beyond me. I do however insist that if anyone were to assert: that no such constraints in physics exist, or that ultimately no such thing as existence necessarily must exist (or can be meaningful); then such assertions would imply that nothing would have any meaning anyway, including those self-same assertions, which accordingly we might confidently ignore. In particular, I suggest that, to be of cogent interest, Hilbert-6 axioms or axiomatic structures, should in principle at least describe and constrain such relationships, if not predict and explain them.
  • In mathematics, logic, implication, cause, and related topics, if we cannot assume that particular assumptions entail constraints on the nature of the classes of conclusions that can be drawn from those assumptions, then we need not concern ourselves with mathematical or logical operations at all, because they could tell us nothing more than we had begun with. They would be meaningless, neither true nor false, nor of predictive nor analytical power. This should apply in Hilbert-6 as much as anywhere else.
  • The mathematical or logical disciplines intrinsic to Hilbert-6 must necessarily be applied in nature. By that I mean that they refer to something outside themselves. In that sense the concepts or consequences of Hilbert-6 cannot be either value-free or universe-independent. In contrast to this concept of applied disciplines, we have formal mathematics or logic: formal disciplines need not comprise anything but their own axioms, assumptions, and theorems. There is little constraint on the developer of a formal axiomatic structure: the choice of axioms and assumptions is almost arbitrary; it must not be internally inconsistent, and preferably its axioms should be mutually independent. Critics might demand that the structure also should be non-trivial, that it should lead to something of interest; but that is nearly all one can adduce in criticism of a purely formal axiomatic structure. The demands on Hilbert-6 are more comprehensive.
  • G. H. Hardy notoriously claimed that applied maths and "pure" maths had nothing to do with each other, but I insist that this was muddled thinking on his part: he was confusing the subject matter of the respective disciplines with their objectives and activities. The two fields of activity and intent might have little in common, but the subject matter is effectively identical, though with one reservation: in applied disciplines such as physics, there are all the axiomatic constraints of formal maths, plus the added requirement that the relevant axiomatic primitives, precision, and conclusions must match the relevant subject's primitives, processes, and outcomes in the relevant contexts.
    For example, if our axiomatic structure that we apply to electric charges and fields, turns out to assume or conclude that like charges attract and opposite charges repel each other, then we cannot seek refuge in the freedom of the axiomatist to choose or apply his axioms at will. The subject matter, such as pith balls or electrons, will fail to behave in accordance with conclusions validly drawn from the axioms. As in the case of empirical physics, so in the case of any form of applied axiomatics: the choices of primitive entities and operations are crucial. In fact, if their implications do not match the evidence for the behaviour of the subjects of study, that very fact qualifies as justification for rejecting those axioms or assumptions.
  • Hilbert never made it clear how true to empirical, practical, reality he wanted his axiomatisation of physical science (or "sciences") to be. If he simply wanted an idealised, unrealistic formalisation in which we could arbitrarily ignore inconvenient details, this could simplify matters, but if he was envisaging something comprehensive and of fundamental interest, he seems to have fallen short. Consider for example his statement:
    "...one might try to derive the laws of the motion of rigid bodies by a limiting process from a system of axioms depending upon the idea of continuously varying conditions of a material filling all space continuously..."
    Such assumptions are convenient in mathematics, but in physical fields they are no more than "innocent fictions" to meet practical needs; they have nothing to do with the axiomatic validity of the field. In physical reality, as physicists and engineers should be vividly aware, a rigid body is a myth, as much so as is a physical infinity or a point or a line or Euclidean geometry in general, or Newtonian mechanics. To be sure, one might be able to base a formal axiomatisation on such fictions, but in what ways would axiomatisation of a fiction be of special interest in mathematics, or of philosophical interest in a material field of science?

 

 

Axiomatics and assumptions

You can’t proceed from the informal to the formal by formal means.
Anonymous

You can’t proceed from the formal to the informal by formal means.
Anonymous

You can’t proceed from aphorism to proof by aphorism.
Remark on the preceding two aphorisms

 

Authors of papers in the field of axiomatics vary greatly in their assumptions, views, and conclusions; some are downright eccentric. For example we find (https://doi.org/10.1098/rsta.2017.0224) assertions such as that any full mathematisation of the physical theory, including its axioms, must contain no physical or empirical primitives. To assert this is to confuse formal mathematics or logic with applied mathematics, as I distinguish the two. The fact that it is possible to formulate an axiomatic structure based on primitives that have no objective meaning does not imply that it is invalid or undesirable to formulate axiomatic structures based on primitives that do have objective or empirical meaning, nor need it imply that to do so is intellectually or scientifically unacceptable, let alone undesirable.

In fact this might be seen as one of the fundamental differences between applied and formal disciplines. It is not clear to me how applied disciplines could in practice exclude concepts intrinsic to the subject matter. Possibly paradoxically, this applies whether the subject matter of the applied discipline is itself formal or material. For example, one might apply methods of empirical research to the study of the distribution of primes or prime pairs, or to the occurrence of odd perfect numbers.

For my part I flatly reject as irrelevant as well as artificial, the concept of the general exclusion of empirical considerations from axiomatics; how far such exclusion is practical or desirable, depends on the field and mode of study.

As will be plain from the section on underdetermination however, to include physical primitives carries its own hazards and limitations. One must take them into account in any project to deal with any axiomatic structure that includes any concepts beyond purely formal entities.

I furthermore reject as irrelevant as well as artificial, the concept of proceeding from the formal to the informal, unless the procedure can be shown to be at least as comprehensive, and as predictively adequate in terms of validity, as empirical derivations. String theorists please note.

In considering the possible significance of Hilbert-6, several largely incoherent thoughts arise on how mathematics and axiomatics might apply in physics:

  • The mathematical formulation could be directly isomorphic with a physical system — notionally a direct description, abstraction, or model of the system its information content describes. It is not clear to me whether there are real-world examples of this kind, but possibly aspects of some quantum or relativistic systems would qualify.
  • The mathematical formulation and the physical system it represents very likely would not be fully logically isomorphic, but, for reasons of convenience or practicality, we might ignore relatively minor influences or terms in the calculation of the system's behaviour. This is what we do in applied mathematics or practical engineering. For example we do not usually allow for the gravitational or tidal effects of Andromeda or Sirius on the trajectory of a satellite, or in the calibration of a weighing device, even if the rest of one's work is indefinitely precise (a rough numeric sketch of the disparity of scale follows this list). Dealing with such situations might be one field in which Hilbert-6 could lead somewhere, though I suspect that we first would have to learn to develop some sort of axiomatisation of plesiomorphism.
  • Thirdly, the mathematics might be essentially fictional, such as when a given formulation is hypothetical, perhaps a working hypothesis. The mathematics might be convenient, eventually adopted as "true", later to be ousted by a rival theory. Think of the parabolic curves of thrown missiles, or the idea of circular orbits of planets, or of epicycles, since replaced by Newton's elliptical and hyperbolic orbits, and later by general relativity.
  • Again, without pretension to theoretical or mathematical validity, one simply might fit one's construction, measurement, or description to a convenient standard. As an illustrative example consider the use of French curves in data fitting (see https://en.wikipedia.org/wiki/French_curve ).
  • Successive steps of organisation from primitives to organised assemblies to abstractions are easier to retrodict than predict. This is not because prediction is logically impossible, but because retrodiction offers a wide choice of ways in which a particular observed type of thing might have happened, even if most of them are unknown, whereas practical prediction offers an indefinite number of ways in which they might happen, with little indication of which are operative, and many or most are unknown.
  • When axioms and data are insufficient to support confident prediction, then abduction, hypothesis, and induction are appropriate, followed by prediction to distinguish stronger working hypotheses. A naïve hypothesis might be that a mass of water in the air will fall, and yet we find that if it is suitably dispersed like a cloud, it may float.
  • "Laws" of nature, reality, or "science", are generally fiction, deduction, generalisation, speculation, abduction, or abbreviation. They are not usually primitives or axioms, but may be theorems in terms of Hilbert-6.
  • Abstract creations such as proverbs or rules for games are arbitrary reductionism where we omit inconvenient alternatives. Other conventions, such as languages, are in some ways similar.
  • Qualitative distinctions are arbitrary constraints on acceptable alternatives, when we can get conceptually similar results in more ways than one. A sword or a rock can balance a scale as well as a standard weight, and we can achieve transport by land, sea, or air, by foot, wing, or wheel, but we constrain our choices according to extraneous considerations.
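As promised above, a rough numeric sketch of the kind of neglect mentioned in the bullet on satellites and Sirius: using the Newtonian point-mass formula a = GM/r², and assumed round figures for the mass and distance of Sirius, its pull on a low-Earth-orbit satellite is smaller than the Earth's own by roughly fourteen orders of magnitude. The exact numbers are not the point; the disparity of scale is.

```python
# Rough orders of magnitude only; the masses and distances are approximate
# assumptions, and Newtonian point-mass gravity (a = G*M/r**2) is itself one
# of the convenient simplifications discussed above.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.97e24        # kg
R_LEO = 6.8e6            # m, roughly a low-Earth-orbit radius
M_SIRIUS = 4.0e30        # kg, about two solar masses (assumed)
R_SIRIUS = 8.1e16        # m, about 8.6 light years (assumed)

def acceleration(mass, distance):
    """Newtonian gravitational acceleration toward a point mass."""
    return G * mass / distance**2

a_earth = acceleration(M_EARTH, R_LEO)       # ~ 8.6 m/s^2
a_sirius = acceleration(M_SIRIUS, R_SIRIUS)  # ~ 4e-14 m/s^2

print(f"Earth on satellite : {a_earth:.2e} m/s^2")
print(f"Sirius on satellite: {a_sirius:.2e} m/s^2")
print(f"Ratio              : {a_earth / a_sirius:.1e}")
```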

On the physical nature of formal disciplines

Let us assume a spherical cow
Anonymous

To exclude material axioms and primitives from the development of axiomatic structures is in itself deeply illogical. Personally I view mathematics, logic, and all other more or less formal or abstract disciplines as branches of the field of study of physics rather than as no more than tools. Nearly everyone to whom I have proposed this concept persists in confusing it with the fact that to calculate any quantitative physical result, we perform calculations or formal derivations.

Yes, we do, and yes, we do use mathematical calculations as tools, but that is a red herring; we also use a hammer in forming steel to make a new hammer, but that does not imply that the concept of using a hammer to make a hammer is circular. The operative point remains: information is a physical concept, as much as energy, entropy or any other physical concept, and has a relativistic equivalence to mass/energy (see: Melvin M. Vopson, The mass-energy-information equivalence principle. AIP Advances 9, 095206 (2019); https://doi.org/10.1063/1.5123794).

One cannot materialise, store, transmit, or manipulate any abstract or purely formal concept, neither a number, a variable, a relationship, a word, a calculation, an image, a graph, an idea, nor anything at all, without information or without the manipulation of information. A mathematical representation, derivation, or algorithm cannot meaningfully be real without information or information processing. And you cannot have or process information without a physical medium and physical reality, whether ink, paper, photons, sound, neural tissue, or something else physical. As such, information itself has a mass equivalent, by which I am not referring to the mass of its medium, such as paper or ink or photons or coordinates or neurons, but to the thermodynamic content of the internal relationships of the system that contains or embodies the information.
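To put a rough number on that mass equivalence, following the Landauer/Vopson line of reasoning cited above: the minimum energy to erase one bit at temperature T is k_B T ln 2, and dividing by c² gives a mass equivalent per bit of a few times 10^-38 kg at an assumed room temperature. A hedged numeric sketch:

```python
# Rough sketch of the Landauer/Vopson-style estimate cited above: the minimum
# energy to erase one bit at temperature T is k_B * T * ln(2), and dividing by
# c**2 gives a mass equivalent per bit. Room temperature is an assumption here.

import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
C = 2.99792458e8         # speed of light, m/s
T = 300.0                # assumed temperature, K

energy_per_bit = K_B * T * math.log(2)     # ~2.9e-21 J
mass_per_bit = energy_per_bit / C**2       # ~3.2e-38 kg

print(f"Energy per bit at {T:.0f} K: {energy_per_bit:.2e} J")
print(f"Mass equivalent per bit  : {mass_per_bit:.2e} kg")
```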

From this point of view the categorical exclusion of the concept of a material object or relationship as a primitive in an axiomatic structure is arbitrary, as well as irrelevant. Not even the most abstract or formal entity is independent of physical constraints, or of representation or implementation in a physical medium. Nor does it follow that there is any special merit to puristic formalism and the exclusion of meaning from an axiomatic structure; the internal consistency of notionally formal axiomatic structures is no more necessarily valid than physical assumptions such as the existence of physical relationships or of gravitation. The history of the parallel postulate, and developments in axiomatics during the last century or two, illustrate that it may not be trivial to prove that one's purely formal axioms are independent, necessary, sufficient, or consistent.

Just as one must be prepared to adjust one's assumptions in applied disciplines in the light of intellectual or empirical advances, so one must accept the need for caution in avoiding dogma in one's formal structures; no magic guarantees that, having formulated an axiomatic structure, one can validly derive from it all possible, notionally accessible and correct theorems, nor avoid all possible false conclusions or conjectures.

And this goes beyond Gödel's undecidability; consider for example some simple problems arising from elementary Euclidean geometry: try using Euclidean axioms to tackle the tessellation of the plane with various polygons and polyominoes

(see https://www.quantamagazine.org/marjorie-rices-secret-pentagons-20170711/
or https://en.wikipedia.org/wiki/Penrose_tiling
or https://en.wikipedia.org/wiki/Polyomino )

Whole classes of such problems have guaranteed algorithmic solutions, except that not all those solutions are guaranteed to terminate within the constraints of our observable universe, if at all. A similar problem is, say, fully general angle trisection within the Euclidean axioms: in this example it can be shown to be impossible in the sense of absolute correctness, but mathematically there is no limit to how close one could get in a notionally finite time. But in the sense of an algebra of physics as I describe it in this essay, even limiting oneself notionally to Euclidean construction, one soon would fall foul of Planck's limit, in that the residual error would be literally unmeasurable.

Of course, some axiom structures other than those of Euclid do notionally permit precise angle trisection, but that is another matter; all that those achieve, is to permit one to approach Planck's limit faster and in a different sense. One would always have a residual error even if unmeasurably small.
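To make the convergence concrete, here is an illustrative sketch (my own, with an assumed threshold standing in for the Planck-scale limit) of approximating trisection by bisections alone, using the identity θ/3 = θ/4 + θ/16 + θ/64 + ...; each added term needs only two further bisections of the previous one, so every step is legal under Euclid's construction axioms, and the residual error shrinks fourfold per term without ever reaching zero.

```python
# Illustrative only: approximating trisection by compass-and-straightedge
# bisections alone, via theta/3 = theta/4 + theta/16 + theta/64 + ...
# Each term comes from two further bisections of the previous term, so every
# step is Euclid-legal; exact trisection is never reached, but the residual
# error shrinks fourfold per term. Angles are measured here as exact fractions
# of the original angle theta, and the "unmeasurably small" threshold is an
# assumed figure, standing in for the essay's Planck-scale limit.

from fractions import Fraction

target = Fraction(1, 3)          # theta/3, in units of theta
threshold = Fraction(1, 10**35)  # assumed illustrative residue limit

approx = Fraction(0)
term = Fraction(1)               # start from theta itself
steps = 0
while target - approx > threshold:
    term /= 4                    # two successive bisections of the previous term
    approx += term               # add the newly constructed sub-angle
    steps += 1

print(f"Bisection pairs used: {steps}")
print(f"Residual error      : {float(target - approx):.3e} of theta")
```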

 

On empirical physical science as an algebra.

If you don't know how to do something,
you don't know how to do it with a computer
Anonymous (but admirable)

Let us again consider the fundamental nature of an algebra. As I see it, any algebra may be described or defined as a set of objects (primitives?), plus a set of operations that may be performed on those objects. For any particular algebra, the nature of its objects and operations is for the underlying axioms to define.

I conjecture that for any algebra to undertake to produce results or theorems that cogently describe, or draw conclusions concerning, a physical system, it must include assertions concerning that physical system in its axiomatic structure.

Again, let us consider the nature of physical reality, particularly in terms of the various versions of the philosophy of scientific realism: from certain points of view, one might adopt as a working hypothesis that the observable, empirical universe consists of objects, entities either elementary or complex, local or extended or distributed, objects that act and interact in various ways consistent with the axioms of the algebra of the physical system. Other schools of philosophy may have different views, but if any of those views affects the outcome of our impression of the world we seem to find ourselves in, then I fail to think of an example.

Now, at each action or interaction of physical entities, the various objects behave according to the attributes of that particular interaction. Interactions of such types could be expressed as algebraic operations performed on the objects.

In other words, in principle one could characterise the physical world as an algebra embodied as a physical reality, much as we could regard a calculator, whether mechanical or electronic, loaded with a particular entry, as a physical embodiment of an arithmetic operation. The fact that some philosophies might reject this view, is irrelevant in context, because what we are considering is Hilbert-6, and this physical-algebraic perspective matches it neatly.

There is of course no notional limit to how large an axiomatic structure might be; nor does it follow that every theorem in an algebra must depend on every one of the axioms in that structure; only that no theorem derivation may introduce new axioms ad hoc. We also prefer an axiomatic structure to have axioms as few and as simple as may be adequate for its physical or formal applications. In practice we tend to create an axiomatic structure to deal with a limited field, such as Euclidean geometry, but, as I read it, Hilbert-6 is ambiguously worded; it might equally well refer either to a global axiomatics of physical reality, or to a set of independent axiomatic structures, each dealing with one branch of physics.

Now, we have not yet definitively identified the primitive elements of our physical existence, the basics of our universe, but the current status of quantum theory versus relativity notwithstanding, we have good reason to doubt that there is any one way of cleanly separating any one branch of physics, or of physical reality, from all others. This suggests that if we read Hilbert-6 as referring to multiple separate axiomatisations of independent branches of physics, that would trivialise the problem artificially, even unnaturally. If on the other hand we see Hilbert-6 as referring to axioms for physics as a coherent discipline, a global representation of reality, this could imply only the start of an indefinitely large-scale project, in fact one that awaits the development of at least some approach to some form of TOE or GUT (“Theory Of Everything”, or “Grand Unified Theory”) before we even could define the objectives adequately.

It seems to me that such a project would have to begin with a fundamental, coherent structure of axioms, some formal, and others empirical. I suspect that this structure would comprise a fairly small set of axioms, possibly dozens to thousands, dealing with information, mathematics, and aspects of the physics of space, matter, dynamics, mechanics, and cosmology, but with millions or more theorems that would in turn amount to axioms in derived aspects of reality, such as philosophy, sociology, natural selection, tessellation, chemical bonding, languages, and literature.

Anything that could not be shown to arise from the fundamental axioms either would be poorly defined or meaningless, or would indicate regions of necessary research — much as blunders, delusions, unproven theorems, conjectures, and unprovable truths occur in our science and philosophy today.

Of course, this speculation amounts to unsupported hand-waving on my part, but I don't know what to do about that.

On mathematics and axioms as applicable to physical reality

There are no whole truths: all truths are half-truths.
It is trying to treat them as whole truths that plays the devil.
Alfred North Whitehead

In initiating the axiomatisation of empirical physics from the mathematical point of view, we immediately encounter what is simultaneously a temptation to practical simplicity and a temptation to simplistic delusion: in constructing axioms we might neglect problems of determination, and too carelessly assign attributes or identities to objects; one result is that we wind up with imperfect analogies posing as definitive bases for derivations and proofs, rather than as the convenient, but limited or misleading, conceptual fables they really are.

Consider for example our classical example of an axiomatic structure applicable to the real world: Euclidean geometry. It is an example of a world view based on concepts such as continua, lines, and points, none of which objectively exists. Originally at any rate, Euclidean geometry was seen as being based on axioms and concepts that were literally and obviously true. Nowadays we do not see it that way at all, but in fact our assumptions of its applicability, though facile, are in practice as useful as any hypothetical, ultimately true, physical axiomatic structure could be, and probably a good deal more usable.

But Euclidean geometry, like any other axiomatic structure based on continua and points, is a convenient fiction, and could never be precisely and literally true; consider:
Euclid implicitly assumes that the measure of a point in any dimension is zero; not negligible, not negative, not Planck-scale, but exactly zero. A point has unique coordinates and nothing else. And yet, nowhere in a line or any other continuum, is there a coordinate set that does not match a particular point, defining it uniquely. Of course, ever since Planck, if no earlier, we have known that there is a limit to how small an entity can be measured to be, which limits the precision with which we can define any particular point, but please note that my argument here does not rely on quantum theory — in principle the following argument was as accessible in Euclid's day as in ours.

From the definition of a point in terms of coordinates rather than physical content, it follows that a point's coordinates are unique: any and every point with exactly the same coordinates is exactly that same point and no other, whether under the same name or not, and no matter how it is defined or calculated.

Simple of course, but that assumption leads us into various traps. The conclusions are valid enough in purely formal axiomatic structures, but logically, they conflict drastically with the physical systems that we apply them to. For example, Euclid assumed that we could select a point arbitrarily on an arbitrary line, and for instance draw another line through that selected point.

We cannot.

To do so would require infinite information, both in defining the point in the first place, and in identifying it once more so that we can pass a line through it, let alone defining a line to pass through more than one selected point. What we call a point in practical reality, is a patch of ink or similar material; it has three dimensions (four if you count its world line), and the line we draw is a more elongated patch. We also encounter paradoxes, such as that if we remove the point from the end of a closed interval on a line, the resulting open interval has no terminating point because it follows from the Euclidean axioms that no point has an immediate neighbour that is not the same point. Something of the type might make sense in axiom structures that do not hypothesise a continuum, say lattice theory, or something similar, but, to the physical world, the relevance of anything like that is not immediately clear.
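A small concrete gloss on that "infinite information" point: any finite representation of a coordinate, whether a pencil mark or a binary word, actually names an interval rather than a Euclidean point. The sketch below, purely illustrative, shows the gap that a 64-bit floating-point coordinate cannot subdivide; naming a true point exactly would take unboundedly many such bits.

```python
# Purely illustrative: a finite representation of a coordinate names an
# interval, not a Euclidean point. An IEEE-754 double carries 53 significand
# bits, so next to 1.0 the nearest distinguishable neighbour is about 2**-52
# away; everything inside that gap is, to this representation, "the same
# point". Naming a true Euclidean point exactly would need unboundedly many
# bits, i.e. unbounded information.

import sys

bits = sys.float_info.mant_dig        # 53 significand bits for a double
gap = sys.float_info.epsilon          # distance from 1.0 to the next double, 2**-52
count = 2 ** (bits - 1)               # distinct doubles in the interval [1.0, 2.0)

print(f"Significand bits                   : {bits}")
print(f"Gap between neighbouring 'points'  : {gap:.3e}")
print(f"Distinct coordinates in [1.0, 2.0) : {count}")
```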

Axiomatically, such assumptions are formally unexceptionable, but physically nonsensical; in other words, they are the wrong axioms for a physical system. They serve well enough as, not so much a picture, as a geometrical caricature for application in carpentry or metrology, but they do not match the nature of the objects under consideration.

What all this amounts to is that our working axioms for dealing with physical reality are simplistic; this is harmless for ad hoc purposes, but it is fraught with traps when dealing with explanations and predictions.

The implication is that if Hilbert-6 is to deal with anything like a realistic axiomatics of physics, it will have to address concepts such as the consequences of the nature of information, and that in turn leads us into problems of determination, as I now discuss them.

 

On physical axioms and underdetermination

‘I know what you’re thinking about,’ said Tweedledum: ‘but it isn’t so, nohow.’
‘Contrariwise,’ continued Tweedledee, ‘if it was so, it might be; and if it were so,
it would be; but as it isn’t, it ain’t. That’s logic.’
Charles Dodgson Through the Looking Glass

Once one has established a purely formal axiomatic structure, that structure may be regarded as self-contained and carved in stone: one might discard it later as being variously unsatisfactory, but it makes no sense to say it is wrong or meaningless — just inappropriate to one’s requirements. However, in establishing an applied axiomatic structure, at least some of the primitives must be assumed or shown to have meaning in the field of application, and one cannot always exclude the possibility of the need to change the selection or definition of some of those primitives in the axioms in the light of future discoveries. In this way too, Hilbert-6 seems to me to be out of line with the rest of his twenty-three-odd problems.

As I have mentioned, I assert that it is perfectly valid in principle for an axiomatic structure to contain physical or empirical primitives. However, doing so entails important classes of consequences, and, as I have hinted, for Hilbert-6 to be taken seriously, one major concern is the concept of underdetermination.

Underdetermination has various forms and contexts, as one may read in various online resources, but in this sense I consider mainly the problem of deciding which primitives to select for inclusion in an empirical axiomatic structure. Such structures might be fictions for applying logic with limited precision to suit a range of applications, such as in first-year physics without general relativity or quantum theory, but, though he never stated any clear objective for Hilbert-6, the impression that Hilbert gave is that he envisioned something like the equivalent of a formal system in terms of necessary assumptions in the axioms, and conclusions arising from those axioms.

Underdetermination is variously and loosely defined by various authors in various contexts. Here I use the term to refer to the fact that any hypothesis in scientific practice involves more fundamental assumptions than one generally takes into account. If we try to apply Ockham’s principle, we inevitably must make assumptions about which entities to neglect. And our assumptions cannot be independent of our preconceptions, whether they are partly right or seriously wrong.

We might illustrate this with the black swan principle; a few centuries ago, “black swan” was a term of contemptuous dismissal for something obviously nonsensical — everyone knew that swans were white; it was a law of nature. However, that assumption was unsound, and we now know of the existence of black swans. We have discarded an assumption that had led us to underdetermine the nature of swans.

It does not follow that every conceivable assumption deserves equal attention. Consider finding a coin on a billiard table: its presence there is undisputed, but our assumptions for how it got there are seriously underdetermined. Was it tossed randomly? Put down as a signal, heads or tails? Did it miraculously materialise there? Did it land there as a hummingbird, and then turn into a coin?

Most of us would reject the last two out of hand, on the basis of assumptions rejecting apparent miracles. So far so good for practical arguments, but it is never possible to deal with every possible unreasonable hypothesis. And many hypotheses now seen as unobjectionable, were at first assailed as nonsensical. Tomic atoms, elliptical orbits, non-existence of ether, quantum mechanics... We simply cannot avoid underdetermination by eliminating all possible incorrect hypotheses individually.

Even more difficult to eliminate, are the reasonable, but mutually incompatible, hypotheses. Sometimes the determining information simply is lacking. Sometimes we can do more research to gain sufficient information to determine the issue, or at least render a particular hypothesis overwhelmingly probable.

For example, if the coin were seen to be standing on its edge, that would strongly favour its having been put down carefully, not tossed.

Those are details; the essence is the nature of determination. As a matter of common sense, in defiance of underdetermination, we may rank our hypotheses according to our current research and information, but as a matter of formal logic we must accept that neither science nor common sense has much to do with formal proof; they deal rather with the ranking of hypotheses. In an underdetermined world we generally cannot prove formally that our posited hypotheses include one that is relevantly close to truth.

Note incidentally, that this is not a relevant criticism of the validity of science as a field of activity in generating and evaluating hypotheses. It does however present serious difficulties for any attempt to axiomatise science.

This returns us to something like the situation of Euclid with his concept of obvious, necessary truths. The geometers of his day had made certain observations and proposals, and had nominated their primitives accordingly. This was reasonable, and such procedures in fact established their leaders as giants in the early history of mathematics.

It did not ultimately establish their logical structures as true or compelling however, no matter how useful or ingenious the axiom structure. And if Hilbert thought that his proposed fields of applied physical axioms would lead to anything better than Euclidean or even Newtonian views of truth or cogency, I think he blundered. If on the other hand, as an academic exercise, he wanted no more than to establish purely formal mathematical structures, such as a mathematics of notional probability, structures that we could apply in various fields of science wherever appropriate, then it is not clear how these structures differ from other axiomatic structures in formal mathematics, or how we would apply them empirically, or establish their empirical limits.

But one thing is certain: whatever his intentions, there is no way in prospect at the time of writing by which we can establish definitive physical primitives, either by hypothesis or observation, and this has drastic consequences for the project as I see it.

There also is no way by which we could apply a formally hypothesised and developed axiomatic structure as being a necessarily or definitively true description of a physical system, even in its own terms. In this respect a physical system differs from formal systems, in which the axioms are by definition unassailable, whether meaningful or not. Such an attempt at physical axiomatisation falls foul of traps of the same type as those of Hooke's law: it is useful certainly, but only roughly, and only within what it pleases us to call "elastic limits".

There is no way either, in which we can establish an objective primitive from empirical observation, no matter how precise and correct, and no matter how convenient in the context of our axiomatic structure. Underdetermination is pervasive. Consider the existence of an entity such as an apparent particle that appears to us as a candidate for the status of a primitive. We knock it about for a bit and fail to split it or demonstrate any structure within. We therefore assume it is primitive, and we find nothing to show that we were wrong, and some of us even propose theories based on the hypothesis that a primitive with its observed attributes could exist. We perform some hair-raisingly expensive experiments, and nothing falsifies our diagnosis of primitivity.

That could describe our perception of protons and electrons and much of the notorious "particle zoo" till well into the 20th century. Then in the 1960s quarks emerged and many previously putative primitives no longer qualified as primitive. Nowadays we are left with only a job lot of quarks, leptons, and elementary bosons that look as though they might be primitive, but their primitivity is not yet established and many questions still are outstanding.

Then, though it still is uncertain, suppose that our currently recognised particles really are primitives. Then they and their attributes fit well into our structure of axioms that constrain the nature of our world. But then our assumptions of the truth of our axioms are no more compelling than any physical hypothesis. To leave the truth value of a formal axiom undefined is unobjectionable, but for an applied axiom it is unacceptable: it reduces such an axiom to the status of a convenient conjecture, or an outright speculative fiction; the whole house of cards collapses. This is a strange status for an axiomatic structure; if we do not know the true nature of our primitives, then we can do no better than speculate on the predictive hypotheses we base on them.

And we cannot even tell it is a house of cards except by showing that pivotal classes of our axioms or assumptions of the nature of the primitives either are wrong or at best incomplete.

To be sure, we could build axiomatic structures such as Newtonian mechanics, based on what we now see as largely amounting to fictions, fictions that we find useful in much the same way as we illustrate legal or moral principles with the aid of parables or stories from history. But if that sort of thing is what Hilbert intended, he did not make it clear. If he had truth-valued axioms in mind, then we have some serious mental obstacles to negotiate. And without better tools than new generations of yet larger hadron colliders, for examining the behaviour of our putative primitives, we may be nearing the end of our tether.

Nothing can guarantee that any particular effect that we have as yet observed, could only result from one unique permutation of primitives, or that, if there is in fact more than one possible permutation that could cause it, none of them will unexpectedly show any extra behaviour in our future research. Such discoveries are routine in science, as Hilbert well knew; he mentioned it in his statement of Hilbert-6. He did not however suggest any approach for dealing with it.

“Axioms” (assumptions) based on illusions concerning physical primitives would amount to nonsense, possibly dangerous nonsense. The history of nuclear power and nuclear weapons vividly suggests just how dangerous.

Nor does the problem of underdetermination end there.

Just as radically different classes of notional primitives might manifest misleadingly as being the same thing, notionally identical states or classes of situations can be the result of different chains or webs of causal events.

Writers of the calibre of Tolstoy never tired of asserting the nonsensical view that the scale of human history is too large for the individual to influence its broad trend, and yet it is trivial to demonstrate that it is extremely difficult to specify any event so small that it cannot in principle affect anything from the chipping of a fingernail to the destruction of a habitable planet.

Conversely, a short-term observation of a simple situation cannot be determined to have arisen in only one way. We find a coin in a path: was it tossed, dropped, lost, placed, left deliberately, as a bribe, a signal, a message...? What effective axiomatic structure would help us in dealing with a universe that exposes us to such radical underdetermination?

And if Hilbert intended nothing so ambitious, and yet nothing so non-functional that it would predict nothing and characterise nothing in an underdetermined universe, then what exactly did he intend to suggest his axioms to achieve or permit?

Randomness and information

Thus it seems Einstein was doubly wrong when he said, God does not play dice.
Not only does God definitely play dice, but He sometimes confuses us by
throwing them where they can't be seen.
Stephen Hawking

Although nothing in what I say in this section has to do with Quantum Theory, it also does not clash with Quantum Theory in any sense that I recognise.

To describe, identify, locate, or steer anything, we need information. And the more precisely we wish to perform such functions, and the smaller the acceptable deviation from "reality", the more information we need. For zero deviation, we logically would need infinite information in some sense, and given the physical nature of information, there is no scope for a physical infinity in our observable universe.

Well then, in locating a point, any specification necessarily leaves room for a certain lack of precision. The naïve view of such a process used to be that reality actually is precise, and that random outcomes, for example in tossing a coin, reflect nothing more fundamental than our ignorance of the input details and the process. However, it is a simple matter to devise systems in which arbitrarily small factors suffice to disturb outcomes arbitrarily drastically.

Whenever we have an example of symmetry breaking, then if no external factor biases the outcome, the outcome is random in the sense that all outcomes are equiprobable; if there is a bias, the outcomes are skewed on a scale to match the scale of the bias. The source of bias amounts to a source of information: it constrains the range of possible outcomes.
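To make that last point concrete, here is a minimal sketch in Python (my own illustration; nothing of the sort appears in Hilbert's text). It treats the bias of a coin as information, measured by how much it reduces the Shannon entropy of the outcome:

    import math

    def outcome_entropy(p_heads):
        # Shannon entropy, in bits, of a single toss with the given bias.
        if p_heads in (0.0, 1.0):
            return 0.0
        q = 1.0 - p_heads
        return -(p_heads * math.log2(p_heads) + q * math.log2(q))

    # An unbiased toss leaves one full bit undetermined; any bias supplies part
    # of that bit in advance, constraining the range of likely outcomes.
    for p in (0.5, 0.6, 0.9, 0.99, 1.0):
        print(f"bias {p:.2f}: {outcome_entropy(p):.3f} bits still undetermined")

The stronger the bias, the less information the toss itself has left to settle.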

Consider as an illustration two perfectly round steel balls with zero angular momentum, a high coefficient of restitution, and negligible coefficient of friction, perfectly clean, in a vacuum, in a perfectly symmetrical gravitational field. Consider dropping one ball precisely concentrically onto the other, which rests on a perfectly flat, level, rigid surface. According to Newton's F=ma they must continue to bounce till the dropped ball finally comes to rest on the one below. This is one of the consequences of the nature of a number of the objects and operations in what I have called the algebra of physics: steel bouncing off steel and so on.

But one item in such an algebra is that when a ball repeatedly bounces off another ball, any non-zero eccentricity in the contact grows rapidly with each successive impact. In other words, there is exactly one mathematical point of impact and one trajectory that will keep them bouncing one on the other. In reality we would do well to get them bouncing even twice, let alone bouncing repeatedly till they come to rest.

This remains true whether we appeal to quantum theory or the crystal structure of the balls or not. We cannot in practice undertake to aim the balls to an error much smaller than Planck's length, but even if we could, even if we ignored atomicity, and somehow accidentally managed to hit the lower ball with precisely zero error, it would require infinite information to ensure that the next bounce hit the same point of contact with zero error.

And we cannot have infinite information...
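A back-of-the-envelope sketch of the amplification (a toy model of my own, assuming only, for illustration, that each impact multiplies any lateral offset of the contact by a constant factor greater than one) shows how quickly even a Planck-scale aiming error grows into a complete miss:

    PLANCK_LENGTH = 1.6e-35   # metres, order of magnitude
    BALL_RADIUS = 0.01        # metres; an assumed one-centimetre ball

    def bounces_until_miss(initial_offset, amplification=3.0):
        # Count impacts until the offset exceeds the ball radius, i.e. the balls miss.
        offset, bounces = initial_offset, 0
        while offset < BALL_RADIUS:
            offset *= amplification
            bounces += 1
        return bounces

    # Even an aiming error on the scale of the Planck length is amplified to a
    # complete miss within about seventy impacts, on these assumptions.
    print(bounces_until_miss(PLANCK_LENGTH))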

There is a direct connection between information and randomness in any form. The most popular concept of randomness is that we do not have access to the information that determines a given situation or outcome. For example, given sufficient information, if we put a fair die into a fair box, shake it fairly, and cast it fairly, it is in principle possible to predict how it will fall, but in practice that possibility is not accessible to the observers, and generally not to the one who throws the die either. Even if the one who throws the die does so by operating a special machine, that throws the die deterministically, but according to an unknown algorithm, such that for each throw, the full information determining the throw (one digit to base six) exists in the machine, the effect on the watchers is exactly the same as if no such information existed at all.
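A small sketch of that die-throwing machine (again my own illustration; the seed and generator are hypothetical) makes the point: the determining information exists inside the machine, yet to watchers who lack it the throws are operationally random.

    import random

    class DieMachine:
        # A deterministic die-throwing machine: each throw is fully determined
        # by an internal seed and algorithm that the watchers do not know.
        def __init__(self, seed):
            self._rng = random.Random(seed)   # deterministic for a given seed

        def throw(self):
            return self._rng.randint(1, 6)

    machine = DieMachine(seed=20220524)
    print([machine.throw() for _ in range(20)])
    # Without the seed and algorithm, this sequence is operationally
    # indistinguishable from throws for which no determining information exists.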

However, our interest here is in the opposite kind of randomness. This kind is well recognised in quantum theory, but as I have instanced, it also would have to exist in some respects whether quantum effects applied or not. This is the case in a situation where the information that determines the outcome of an event does not exist at all, not in our minds, nor our calculations, nor in nature itself. It might never exist, or it might come to exist to some degree in the form of increased entropy.

That quantum form is most conveniently observed in effects such as radioactivity; it might be known that, say, an isolated atom of Thorium-231 will decay, by beta emission in all but a vanishingly small fraction of cases, but our guess at when it will decay is conditioned only by the observed half-life of Thorium-231 being about one day. What is more, our ignorance in anticipation of the outcome is shared by "Nature"; the outcome simply is no better predetermined than whether a given photon will pass through a partial reflector.
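To put a number on that conditioning: the standard survival law says that the probability of the atom remaining intact after a time t is 2^(-t/T), where T is the half-life of about a day. The odds are thus roughly even that it survives its first day, about one in four that it survives two days, and so on; and no further information exists anywhere, in our instruments or in nature itself, that could sharpen those odds for the individual atom.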

But let us ignore quantum theory, and imagine ideally hard needles that we may describe as highly prolate, perfect spheroids. We also imagine a classical device that tosses such needles onto a perfectly flat surface in the absence of perturbing influences. There is no mechanical basis for denying that some spheroids might land balanced on end, remaining there indefinitely.

At the same time, such a spheroid is strictly convex, so its contact with the perfectly flat surface is mathematically a single point. Balancing the spheroid on any point other than the end of its axis of symmetry would break the symmetry, and the needle would topple in the direction of the offset.

In practice of course, Brownian motion would suffice to break the symmetry almost at once, and the atomic structure of any material is far coarser than the model assumes, but I refer here to the far smaller, subtler question of non-existent infinite information; hence the simplistic nature of the model, which I hope you will excuse.

Granting such considerations, if the contact were a point, the definition of such a point, its very existence, would require infinite information to specify its coordinates, and its maintenance from instant to instant would require equally infinite information. I suspect that algebraically the infinity in question would exceed aleph-null, and I guess that it would be of the order of aleph-2. But it is not clear to me that that would make any material difference.

Two things would result, sooner rather than later:

  • The point of balance would no longer be in the axis of symmetry, and the needle would topple
  • In the absence of external influences, it would take infinite information to predict the direction of fall; so the direction of the breaking of the symmetry could not be predicted by any information in our universe.

Similar notional models are easy to propose and tune to taste, but the essential point is that for any model, whether as unrealistic as the foregoing needles or strictly practical, indefinitely precise prediction requires indefinitely much information, and, prior to any event, some of that information is lacking, indeed non-existent, so some genuine randomness is intrinsic to every physical event.

Fundamental randomness arises from fundamental non-existence of information, not from ignorance of existing information.

It also is easy to demonstrate that there is no realistic limit in principle to how large an effect can result from an arbitrarily small, non-zero-magnitude event.

If Hilbert-6 is taken to intend the generation of a class of comprehensive axiomatic physical algebras, then any viable axiomatic structures will have to include considerations such as information and determination.

How much axiom is enough?

Everything should be made as simple as possible, but not simpler.
Albert Einstein

The very concept of axioms was one of humanity's great intellectual breakthroughs. And yet I doubt that the principle plus its implications and limitations are fully understood yet: I for one certainly do not fully understand them, and I do insolently assert that persons who do claim full understanding, loudly and slowly explaining to all hearers how simple it all is, are deluded.

Quite apart from the difficulty of demonstrating anything that we could call the "true intrinsic nature" of axioms, there are whole classes of difficulty in applying axioms to conceptual problems, whether purely formal or partly empirical.

Although not a logical necessity, parsimony should be obvious as one virtue in any axiomatic structure, whether formal or empirical: one wants as few axioms as may be, and one wants them to be as simple, atomic, and mutually independent as possible. Wherever it is possible to show that a simpler structure is logically and practically equivalent to a more complex version, one almost invariably prefers the simpler version.

Unfortunately simplicity itself is anything but simple. Consider for instance the definition of an algebra as a set of objects (primitives) plus a set of operations: notionally we could reduce the complexity of an algebra by reducing the domain or complexity of the objects that it deals with; or instead we could reduce the number or complexity of the operations available for dealing with the recognised classes of objects. If we reduce both, that only improves the simplicity if in doing so, we avoid reducing the power plus usability of the residual algebra — if we do not care about loss of scope for dealing with problems, we could preen ourselves on having achieved perfect simplicity when we have excluded all the objects and all the operations in the axiomatic structure.
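A toy sketch (my own, in Python, with an arbitrary six-element carrier set) may make the trade-off vivid: shrinking the set of operations certainly "simplifies" the algebra, but it also shrinks what can be derived within it.

    # A toy "algebra": a carrier set of objects plus a set of operations on them.
    objects = set(range(6))
    operations = {
        "add": lambda x, y: (x + y) % 6,
        "mul": lambda x, y: (x * y) % 6,
    }

    def reachable(start, ops, steps=4):
        # Everything derivable from `start` by a few applications of the operations.
        current = set(start)
        for _ in range(steps):
            current |= {op(x, y) for op in ops.values() for x in current for y in current}
        return current

    print(sorted(reachable({2, 3}, operations)))                   # [0, 1, 2, 3, 4, 5]
    print(sorted(reachable({2, 3}, {"mul": operations["mul"]})))   # [0, 2, 3, 4]: "simpler", but weaker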

Concepts of power and usability raise further concerns, although not so much for formal users; for them any finite line of proof or derivation of a theorem is adequate. For applied users on the other hand, it is a different matter. Given alternative axiomatic structures of equal ultimate logical adequacy (irrespective of whether they differ in their respective assumptions and primitives) one's initial choice of the working hypothesis could well be affected by the modes of derivation and length and complexity of any projects within the distinct structures.

Even for fields as spare as abstract number theory or Boolean logic, the questions of their simplicity leave ground for passionate and persistent controversies that need not concern us here.

Now, in purely formal axiomatic structures the role of axioms is fairly clear: given the axioms, any statement formally derived from those axioms is a theorem. Any well-formed statement expressible in terms of those axioms, but not yet proved by formal derivation, is a conjecture, whether it is true or not. According to Gödel, in any sufficiently powerful consistent formal axiomatic structure, there will exist well-formed statements that are true but cannot be derived from the original set of axioms without adding more axioms.

If Hilbert-6 is intended to deal only with confined abstract subsets of reality, such as, say, probabilistics, then there is no reason to doubt that axiomatisation might have some functional value, but otherwise, if we must deal with what I have called the algebra of physics, or of reality, there are at least two difficulties that I see as likely to render the project intractable in several respects.

The first (assuming that one accepts the reality of reality, as I do, though some ontologists and metaphysicians do not) is the imperfection of our understanding of physical realities: we have not yet established any demonstrably ultimate theory of everything (TOE) or grand unified theory (GUT), and it is not yet clear that we ever shall. For all we know, such things might be Gödel-undecidable, or undecidable on completely non-Gödelian terms. Among other things, this might suggest that our assumed primitives are speculative at best, however attractively they accommodate and support our axiomatic structures.

Essentially, as I read it, Hilbert-6 deals not with purely formal mathematics, but with applied physical concepts, questions of objective reality, and accordingly must meet the requirement I propose: that the axioms must match the application and pass the test of successful empirical prediction. And to rely on that, we need to recognise suitable primitives, meaning primitives for which our assumptions are true in all relevant senses. They then could match Euclid’s assumption that his axioms were literally true (whether self-evident or not).

Secondly, if Hilbert-6 is to apply to all the most fundamental physical primitives, which ultimately would imply that the axiomatic structure plus its theorems and conjectures would embrace all of reality, then it is not clear how large the smallest adequate axiomatic structure would be, or how useful, in our ability to apply it to any interesting or creative field of problems.

This concern arises in various ways:

  • We do not know whether an adequate axiomatic structure and its application would be formally applicable in a simple manner. By way of analogy, remember how cumbersome Whitehead and Russell's proof that 1+1=2 was. Derivation of that proof might indeed have been justified by their formal objective, but it was of less value than informal proof in any applied field, whether practical or academic. Again, the size of the first proof of the 4‑colour theorem caused it to be rejected by those who pointed out at the time that it was not humanly practical to check that proof. It is not clear how valuable an axiomatic system might be that enables us to produce formal proofs that are impracticable to apply, when the informal theories, conjectures, and "convenient fictions" of applied disciplines permit us to make functional predictions of what we informally characterise as "obvious".
  • In formulating our axioms concerning physical reality, we do not know in practice how far our levels of primitives might go down, or how many primitives there might be to our reality at all, or even whether such things as primitives ultimately exist at all in our apparent reality, or how to define or recognise them if they do.

Assuming, fairly reasonably, that such speculations turn out to be too pessimistic, and that our primitives and derivations do turn out to be manageable, we encounter a concept that I have already mentioned. In formal mathematics it is familiar, but in applied physics it is ubiquitous: emergent entities. How it would affect the rationality of Hilbert-6, I cannot guess. It certainly does happen that in mathematics major advances occur when mathematicians working in different branches achieve cross-fertilisation and open a new line of work.

Nor is it clear whether Hilbert-6 would be likely to promote progress in predicting emergence, or be grossly unable to deal with emergence. But in either case, any prospects for progress along such lines seem to me to be reduced if Hilbert's intention were aimed at independent axiom sets for relatively narrow disciplines, as might appear from his examples: "… theory of probabilities and mechanics..."

  • From a more optimistic point of view, there is no reason, on the as yet speculative assumption that Hilbert-6 were to take off on a scale entirely unexpected, that it need apply to "physical sciences" only. It might prove applicable to every form of intellectual endeavour.
  • Even if Hilbert-6 were to lead to unprecedented and unexpected progress, we cannot expect it to lead to everything in science or intellectual endeavour, including the end of informal intellectual speculation and abduction, and discovery, whether adventitious, stochastic, heuristic, epiphenomenal, or emergent. An interesting analogy would be computerised Go players such as the AlphaGo program, which developed strategies totally new to human masters, largely by building on massive machine experiment rather than any systematic theoretical basis. The analogy is tenuous, but still, since Hilbert-6 is guaranteed to be subject to Gödel constraints, there always is the prospect that either Nature itself, or human or computer experimenters, would happen on something that does work, but that is not accessible from a basis in Hilbert-6. We have no way to predict anything of the sort, but the speculation is intriguing.
  • Whether in mathematics or physical theory, suppose that a Gödel-undecidable theorem were to emerge, meaning one that is true but not provable in terms of the standard axioms. We still would have the option of adding axioms to render the theorem provable, though that in turn would create yet more undecidable conjectures. We might yet find a discipline according to which we could create complementary axiom structures in a given field, such that each axiom structure could permit proof of the other's unprovables, but differently, so that neither axiom set can deal with the other's provables. At present however, this is too speculative to follow.
  • In assessing the material value of Hilbert-6 in exploring unknowns in applied physics, doing so formally rather than experimentally, we might simply ignore it as an intellectual tool; we might say "like charges repel; unlike charges attract" instead of referring to Hilbert-6 every time we need the established fact. This would be analogous in formal mathematics, to accepting that 7*8=56, without proving it formally each time.
    However, when we extend such principles in physics to the likes of formally predicting the rules of baseball or the red spot on Jupiter, or how to interpret expressions on the human face, or the crystallinity of arbitrary organic compounds, we run into serious problems.
    Some problems of such types might be either practical, or conceptual: we might not be able to tell whether the differences that emerge at different levels even are addressable at other levels, or whether we need to add new axioms or Gödel-undecidables at such levels.
  • Again, what might Hilbert-6 be for? Is it just for intellectual purity, or is it intended to reveal physical truths or tools that we are as yet unable to imagine in the light of informal experiment?

 

How much axiom is too much?

Half of what you learned in college is wrong;
problem is, we don’t know which half.
David Lange

When we try to pick out anything by itself,
we find it hitched to everything else in the universe.
                        John Muir

As I read the proposal of Hilbert's sixth problem, the intention was to enable a physicist, given sufficient information, to calculate the future, and presumably by inversion, the past, of any material situation, with indefinite precision, by application of the appropriate axioms. No doubt the concept was somewhat along the lines outlined by Laplace about a century earlier.

However, as I have pointed out, for any material system, such as a universe or a pond, physical axioms differ in their function and nature from the axioms of purely formal systems; the axioms for a material system must match the logical structure of the system they describe, in particular matching the interrelationships of its primitive elements.

However, since Hilbert's early days, we also have learnt that information is limited, not only limited in accessibility to us, but limited in its existence in reality. Now, the course of events in any physical system is crucially, arguably absolutely, dependent on its internal information states at any point. This implies that no calculation of absolutely precise outcomes is possible in principle. Axioms that recognise and define probabilistic or imprecise results are possible in principle, but it is not clear whether that was the sort of thing that Hilbert had in mind; in fact, I am left with the impression that he had no such view at all.

In our universe, we cannot foreseeably claim to know the primitives and can at best speculate how many of the assumed primitives that we attribute to the system are valid, and in which sense and connection. Worse still, if any of our assumptions concerning primitives do happen to be valid, we don't know which of ours are the valid ones, nor even how far we have progressed towards recognising the base stratum beyond which axioms can be no more fundamental.

In short, our axioms for any material system, let alone a universal one, differ from formal axioms in that for the foreseeable future they amount to nothing better than acceptably meaningful assumptions or conjectures. In this respect they resemble the axioms that Euclid originally adopted; and also in this respect they do not correspond to the formal axioms of modern mathematics or logic — those generally are, or are intended to be, meaning-independent.

Whether our material axioms or primitives or assumptions or conjectures are rightly or wrongly formulated, chosen, or understood, is a different issue, but in contrast to formal axioms, it matters greatly.

Hilbert's examples, "the theory of probabilities and mechanics", might look very mathematically physical, but they not only are conjectural in their relationship to physical realities, but are fictional and context-bound; their primitives are not absolute, and are limited in their ranges of applicability. For example they ignore inconvenient deviations from the ideal in matters of attainable accuracy or of distortion of notionally rigid bodies, or even the atomic or tomic nature of matter or energy or information.

Furthermore, the assumed primitives of the physical systems not only are conjectural, but are derived by abduction and non-mathematical induction. However valuable such tools are for practical purposes of convenience and prediction, they are not formally valid; and proofs and derivations based on them accordingly are not formally compelling.

The same is true even for proofs and derivations by deduction that is based on informal assumptions and conclusions: deduction from unreliable premises is unreliable.

Accordingly, physical science has little to do with formal proof.

In accepting such assumptions in the role of axioms of physical systems, it is necessary to evaluate their cogency and sufficiency as well as applicability. The mere fact that one has chosen a few axioms that form an internally coherent and consistent structure, does not imply that they are sufficient to describe or constrain any physical system comprehensively or definitively.

Most of the objections to reductionism arise from simplistic assumptions. Adding enough axioms to our descriptions of water molecules, or indeed of leptons and hadrons, could in principle enable us to deduce the nature of snow and oceans and surface tension from the structure, gravity, and similar attributes of a water molecule, in spite of the assertions made by opponents of reductionism that such deductions are fundamentally impossible.

But, for example, in considering the nature of the water molecule, we seldom consider aspects such as its gravity or angular momentum. And if we consistently took such complications into consideration, it would paralyse our work; quite apart from the way water forms droplets that float as clouds, forms waves that rise and fall, and packs differently under various pressures at various temperatures, water molecules are far more complex than Penrose darts and kites.

For one thing, they are three-dimensional. Four-dimensional, if we include time as a dimension.

And even our theory of topics as simple as Penrose tessellations is still far from complete. And yet many schools reject the theoretical possibility of predicting the behaviour of complex, combinatorial systems from knowledge of, or calculation from, their primitives. In this they adopt the responsibility, which they then shirk, of proving a negative.

In general we cannot in principle tell in advance when we are in need of new axioms to deal with refractory problems, or when we can instead derive the desired theorems from the established axioms. Harking back to the Euclidean parallel postulate as an example, it took a great deal of work to demonstrate that it was independent of the other axioms. Furthermore, in a comprehensive structure, we may need multiple mutually incompatible axioms, with more axioms to determine which apply to any particular problem or derivation. In the Euclidean example, the original implicit assumptions concerning the nature of space proved to be simplistic.

Another difficulty arises from Hilbert's expression: "physical sciences". These obviously are nothing of the kind; they are vaguely-bounded topics of study, not discrete entities. If Hilbert meant there to be separate sets of axioms for each such discipline, then very well, but if so, it is not easy to imagine how his Hilbert-6 problem could be coherent.

As a matter of fact, the very idea of there being separate sciences is a fiction, to put it politely. There is exactly one science. It has many topics, but if any of these topics has axioms independent of all the other fields, then we can be sure that its content has not yet approached the fundamental primitives, and accordingly is nowhere near anything like a fundamental structure of axioms; it amounts at most to a set of empirical rules of thumb for dealing with its fictions and approximations. For example, for most Earthbound purposes, one might fare fairly well on the assumption that the Earth is flat, but in our day one soon needs to accommodate far more sophisticated concepts based on additional axioms, assumptions, or observations.

Nothing we know of, nor that I can imagine, can happen at all except by events and processes. I mentioned events and processes in the section on primitives. They amount to the behaviour of our existential primitives, and in particular, their behaviour according to their fundamental (axiomatic) rules. Macroscopic events and processes, say storms, wars, artistic creations, or chess games, are not in principle predictable in detail, because they really are not predetermined. Instead their details vary with outcomes of elementary events that are themselves undetermined because of the non-existence of determining information. For example, the timing of such things as individual nuclear decays is not predetermined, but a single decay that occurs at one second rather than another, not only could affect a global war, but the actual survival of a planet millions of light years away, billions of years in the future.

On the other hand, if one hoped to achieve a single axiom structure to deal with the entire nature of our physical universe, it is not clear how large the structure would be, or how useful it could be in practice if the complete structure were handed to us by some benevolent deity. Or possibly not benevolent. Forbidding the fruit of the tree of knowledge could be argued to have been protective. Humanity is notoriously poor at dealing with information overload.

We complain of the "silo effect" in current science and education, and bewail the lack of interdisciplinary work and cooperation, but the fact is that the sheer scope of even our current scientific work is beyond the capacity of any current human. Before we can enter a new age of polymaths, we need to do some serious genetic engineering to equip the species so that humans can begin to master and sanely apply the realities of science, technology and sociology.

One crucial, and completely unanswered, question is how compact one could make a comprehensive budget of all our physical primitives. Another question is: on the assumption that those true primitives are of comparatively few types, say a few dozen or a few thousand or so, how many axioms would it take to describe all their parameters and interrelationships? Everything at a higher level would be theorem rather than axiom, and we would need to master theorems of possibly indefinite complexity.

If the scale of such complexity proved to be unmanageably large, or even just inconveniently or unprofitably large, Hilbert-6 could hardly amount to more than an intellectual curiosity.

 

Hilbert, Axiomatics and Meta-axiomatics

The grand aim of all science is to cover the greatest number of empirical facts
by logical deduction from the smallest number of hypotheses or axioms.
Albert Einstein

In Mary Winston Newson’s translation, Hilbert’s problem six began as follows:

The investigations on the foundations of geometry suggest the problem:
To treat in the same manner by means of axioms, those physical sciences
in which mathematics plays an important part; in the first rank are
the theory of probabilities and mechanics.

When read in isolation, this is far from reassuring. Even allowing for the fact that Hilbert wrote just before repeated advances in both formal theory and empirical science confounded some of his views, the proposal is imprecise and question begging. In fairness to Hilbert, the original statement of the “problem” continues, and the text of the translation shows that he largely recognised such difficulties:

“...The physicist, as his theories develop, often finds himself forced by the results of his experiments to make new hypotheses, while he depends, with respect to the compatibility of the new hypotheses with the old axioms, solely upon these experiments or upon a certain physical intuition, a practice which in the rigorously logical building up of a theory is not admissible. The desired proof of the compatibility of all assumptions seems to me also of importance, because the effort to obtain such proof always forces us most effectually to an exact formulation of the axioms...”

Still, even allowing for this caution, the sheer range of conceptual departures that invalidated much of the received wisdom of the nineteenth century, went far beyond the scope of any reasonable foresight at that time.

This essay does not deal with many of those lacunae; I primarily point out a few lines of thought concerning information and implication.

Consider:

  • What does it mean to speak of “...those physical sciences...” (“...diejenigen physikalischen Disciplinen...”) as if any branch of physics were a distinct discipline? Even if two schools of physics contradict each other’s assumptions, (say, Quantum Theory and General Relativity) that does not mean that they are distinct disciplines, just differences of opinion on the available evidence and the implications.
  • How do the assumptions required for Hilbert’s proposal for the development of an axiomatic basis for physics, and hence for empirical science and philosophy in general, differ from, or improve on, metaphysical and empirical assumptions such as the concepts of meaning and existence and their implications?
  • How do axioms for formal topics such as mathematics, logic, or axiomatics differ from other basic conceptual assumptions, such as those of empirical science?
  • What does it mean to be without an axiomatic framework for any formal discipline (for the nonce, ignore any distinction between empirical assumptions and formal axioms)? We have no global axiomatic framework for mathematics, philosophy, axiomatics, meta-axiomatics, or global formalisms in general. Does this imply invalidity for all such disciplines? If the need is compelling, and we assert that this need justifies Hilbert-6 because every discipline needs such an axiomatic framework, and that every such framework defines a discipline — then before dealing with Hilbert-6, we need to resolve the implied self-reference of the (meta)axiomatic framework of axiomatics.
  • The dilemma of meta-axiomatics then would reduce to this: that we accept that without an axiomatic basis every formal conceptual structure is invalid; this immediately implies that we need an axiomatic basis for axiomatics as well as for every other discipline, a demand that seems to invalidate itself. Conversely, without a criterion for exempting disciplines from the requirement for axiomatics, Hilbert-6 would lack any compelling justification.
  • We observe that every, absolutely every, axiom, assumption, implication, derivation, theorem, or conclusion, formal or informal, comprises information and the processing of information. We observe too, that information and implication are fundamentally physical concepts. What does this imply for axiomatics of anything, formal or informal, and any global axiomatics in particular?

 

Points such as the foregoing do not necessarily invalidate the ideas behind Hilbert-6, but they do present challenges that need resolution before we consider how to define and satisfy the problem.

Consider: the fact that Hilbert-6 begins with the analogy to formal geometry does not exempt the axiomatist of physics from relating his assumptions to empirical physics. The axioms of geometry in our day, like any formal axioms, are content-free, but the assumptions of physics intrinsically cannot be content-free. In exploring this idea, compare for example: would “there exists at least one atom” in physics be as useful or meaningful an assumption as the axiom “there exists at least one point” in geometry? What would such a content-free assumption about atoms predict about the physical world? What would it predict about what we, in physics nowadays, call “atoms”?

In contrast, when classical Greeks variously assumed the material existence of what they called atoms, and implied that they were (somehow abstract) physical entities, their content-free ideas lacked material correspondence to the nature or behaviour of what we now call atoms. Their concepts fell short exactly in being content-free, or at least in lacking material content, while referring to physical entities. To this day we use something very like the original Greek version of Euclidean geometry. As that is in essence a formal discipline, we can afford to neglect the classical Greeks’ perhaps unconscious ideas of material content. In contrast, the physics of their ideas concerning atoms has had little historical, theoretical, or practical connection with the physics of the past four centuries or more. In effect we have discarded those ideas wholesale. This is precisely because of the essential futility of content-free or otherwise arbitrary assumptions, instead of empirical experiment, in dealing with anything that has empirical existence.

Compare such axioms with, say, the Newtonian F=ma, a statement with logical content that, given criteria for empirical recognition of F, m, and a, can be subjected to empirical evaluation and development.
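As a minimal sketch of what such empirical evaluation can mean (the numbers are invented for illustration, not real measurements): given observed pairs of applied force and resulting acceleration, one estimates m by least squares and asks whether the residuals stay within the precision of the measurements.

    # Hypothetical measurements: applied force (newtons) and observed acceleration (m/s^2).
    forces = [1.0, 2.0, 3.0, 4.0, 5.0]
    accelerations = [0.52, 0.98, 1.49, 2.05, 2.51]

    # Least-squares estimate of m for the model F = m*a (a line through the origin).
    m_estimate = (sum(f * a for f, a in zip(forces, accelerations))
                  / sum(a * a for a in accelerations))
    residuals = [f - m_estimate * a for f, a in zip(forces, accelerations)]

    print(f"estimated mass: {m_estimate:.2f} kg")
    print("residuals (N):", [round(r, 3) for r in residuals])
    # The axiom-like statement earns its keep only insofar as such residuals
    # remain within the precision of the measurements.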

Such examples leave us with difficulties in interpreting or developing views based on Hilbert-6. As soon as any formal structure excludes all content related to empirical reality, we are left with little constraint on what its axiomatic content might be, but we equally are left without any compelling reference or relevance to physical reality. Conversely, as soon as we avoid that penalty, we necessarily largely abandon primary reliance on formal axioms, and have to introduce empirical assumptions that intrinsically are subject to Hilbert’s own reference to the invalidity of axiomatics based on inconsistent or inconstant axioms.

 

Formal and Empirical Science

Ordinary language is totally unsuited for expressing what physics really asserts,
since the words of everyday life are not sufficiently abstract. Only mathematics
and mathematical logic can say as little as the physicist means to say.
Bertrand Russell

In this essay I use the terms “empirical”, “physical”, and “material” almost interchangeably wherever that is convenient. Empirical (“material”) science as a class of activity, in contrast to formal disciplines such as mathematics, is not in general concerned with formal proof, and in particular not with proof of formal propositions. It deals primarily with propositions concerning empirical states and their logical and physical implications: those that reflect perceived or conceivable relationships in the material world.

When I mention “science” without qualification, what I have in mind generally is that type of empirical science. As a class of activity, empirical science deals with the abductive, inductive, and possibly sometimes also deductive, development, ranking, or elimination of hypotheses, more or less according to observation or speculation. This is the mainly abductive phase of the scientific process. And “scientific proof”, insofar as the term is meaningful, commonly is more concerned with successful prediction and control: working hypotheses, rather than formally valid argument or “proof”.

The general intent of the scientific process is to generate or note speculations, adventitious observations, or hypotheses, to derive conclusions from such inputs, and to compare their relative tenability within accessible limits of precision. At any point in the process, we adopt the currently most persuasive or most convenient assumption as the working hypothesis, the strongest conclusion at our disposal. However, practically all empirical hypotheses are abstractions from indefinitely complicated relationships within the observable universe, and we never know whether the range of hypotheses that we can develop on any material question, includes any uniquely, genuinely, correct God’s-Eye-View of underlying reality. 

In these contexts, note that experimentation, investigation, and exploration amount to various classes of active observation; observation need not always be passive.

For practical purposes in categorising scientific activities, all these considerations present less of a constraint than the layman or the “shut-up-and-calculate” scientist might assume, but as factors in discovery or epistemology they are challenging, or even disastrous, for any prospect of formally establishing axioms for empirical science according to the principles that apply in formal disciplines.

In this respect empirical science differs from formal axiomatic structures, in that the latter are not in general dependent on any external truth value or even meaning, and the assumptions of the material hypotheses need not be formally valid to justify appropriate investigation or application.

These distinctions are fundamental to the difficulties associated with Hilbert-6.

Hilbert’s concepts of the nature of science were hardly more coherent than others of his day, and even today naïve falsification and positivism remain prominent in science. If axioms were to be established to cover all of science or at least basics of science, or of branches of science, not only would they need to accommodate material assumptions, but also the question of how far the concept of formal proof in empirical science is meaningful at all. Any new version of Hilbert-6 would have to face or dismiss such challenges.

Conversely, if one insists on applying the idea only to the formal aspects of science — its descriptive abstract mathematics and logic for instance — it is not yet clear what any such new concept of axiomatisation is to achieve; it certainly could not adopt the same role as in formal disciplines. 

Hilbert himself seems to have failed to clarify the concept. He spoke of those physical sciences in which mathematics plays an important part, in particular mechanics and probability theory: “...in denen schon heute die Mathematik eine hervorragende Rolle spielt; dies sind in erster Linie die Wahrscheinlichkeitsrechnung und die Mechanik...”. Not only is no branch of physics independent of the “important part” played by mathematics, but such mathematics is just as formal as any other, and subject to similar axioms in similar ways; it is furthermore constrained in its relevance to the subject matter. This is not logically the same as having “axioms to treat physics” (“... physikalischen Disciplinen axiomatisch zu behandeln...”).

One of the consequences is, as Hilbert himself observed, that in the progress of physical research it is routinely necessary to change assumptions in ways that would be inconsistent with the principles of axiomatics in formal disciplines.

An aspect that Hilbert did not mention is the problem of demarcating distinct aspects, either of science or of formal mathematics, so as to support separate sets of axioms for each subdiscipline. After all, no comprehensive discipline such as mathematics has a comprehensive, single axiomatic structure. Similarly, there is no clear reason why physical science should have no more than one. The identification of sub-disciplines that justify their own structures of axioms, would be a challenge in its own right, and the question remains: must the aspects that it would be valuable to axiomatise include anything other than formal mathematics or logic?

Another aspect, to my mind even more important, is that Hilbert did not differentiate between established physics and research into new or unanticipated topics in physics. It might be possible to axiomatise well-understood fields of science in terms of expected modes of events, such as are to be relied on in applied technology, but it does not follow that the same is true for research.

Research, particularly in empirical fields, is notoriously difficult to predict or, in advance, even to define meaningfully. Einstein himself has been quoted as saying, with characteristic penetration: "If we knew what it was we were doing, it would not be called research, would it?" Formal definitions and operations tend to be strictly constrained, and yet the constraints often turn out to be inappropriate to empirical fields of study; for example it would have been very difficult for anyone undertaking research in physics, much less for anyone trying to axiomatise the field of study in the first decade of the twentieth century, to anticipate the need to take into account quantum values such as spin and strangeness. Nor does it seem possible, even now at the time of writing, to be confident that any axiomatic formulation of physics could avoid invalid assumptions that future discoveries would reveal to be blunders. Obvious examples of troublesome topics might include the quantum nature of gravity, and the riddles relating to dark matter and dark energy, but for more than a century now, we have seen predictions such as that "There is nothing new to be discovered in physics now. All that remains is more and more precise measurement".

Empirical reality in science, whether in research or in established application, entails complexity, underdetermination, and emergence. Structures of axioms for sets of primitives soon encounter problems of complexity, in which emergent effects arise that have no clear basis in terms of the primitives. This is true for both formal and empirical systems, otherwise Gödel’s principle could hardly be valid.

The relevant nature of the primitives constrains the nature of the events in which they can participate, and in what kinds of roles. In effect this constraint determines the range of the operations that those primitives can perform or undergo. That in turn defines a logical and quantitative algebra of primitives, whether physical (material) or not, implying that axiomatisation is in principle possible.

Unfortunately, things are not so simple; one certainly can specify abstract primitives and construct matching axiom structures for or from them, as has been done variously for disciplines such as epistemic, temporal, and other modal logics, but in physics, if one wishes to model a logic for actual reality, then one at least needs to know how closely one’s specified notional primitives match existing physical primitives, and at what levels. For example, why should we expect different prevalences of matter and antimatter in our universe? And how well will the behaviour of primitive particles in a population of 1 or 2 or 10 particles match their behaviour in a population of, say, 10^50 particles, and at what densities?

An algebra based validly on physical primitives might be expected to answer such questions from first principles. But which primitives, and how might they be determined?

The next problem that arises is whether or how to deal with emergent entities ad hoc, by extending the axioms that deal with the primitives or with structures emergent from the primitives. Consider by way of analogy the derivation of the concept of complex numbers from number theory, or the effects of gravitation on droplet shapes. More drastically, how does one extend the physical axioms of the Standard Model of particle physics to encompass, let alone predict, the emergence of life, or politics? We certainly cannot claim that politicians are independent of the realities of particle physics as a basis for reality, but how many levels of axioms or theorems would we need to include to cover all the conceptions between?

And in physics we do not know the primitives. We do not even know how far from knowing the primitives we might be. We assume some examples of primitives, but even for those that we might assume, we do not know their full repertoire of behaviour. This ignorance might not kill the Hilbert-6 initiative, but it certainly hobbles it drastically, leaving us with assumptions and inferences that at present are uncompelling at best.

In practice, and this might be what Hilbert intended with his idea of axiomatisation of “theories of probabilities and mechanics”, we commonly create new, fictional levels of primitives, largely independent of the real primitives, and we base “axioms” on them, or we create new axioms to describe the emergent ideas — or both; the thoughts of various pioneers in such matters commonly are arbitrary and often mutually inconsistent. Those “axioms” then are not real axioms at all, but assumptions, and commonly simplistic assumptions at best, or otherwise mistaken. They are treacherous if taken beyond their levels of justified confidence.

The reality is that formal axioms can be definitive, and generally should be, but that physical assumptions cannot be, and especially not unless they are based compellingly on physical primitives, which in our case we still do not know. For the foreseeable future our assumptions necessarily are ad hoc hypotheses.

 

Information, Observation, Measurement, and Description

We have to remember that what we observe is not nature in itself
but nature exposed to our method of questioning.
Werner Heisenberg

Formal statements need not have any specific subjects or objects. In general, if they do have anything of the type, those subjects or objects generally are no more than formal themselves. If I follow the formal axioms of, say, finite natural number theory, I may assert or deduce with confidence that x+1>x, but as soon as I try to extend the deduction to empirical topics, I encounter all sorts of qualifications; such an inequality is not necessarily true in every way, of apples, water droplets, clouds, charge carriers, camels’ burdens, or waves.

Even in formal disciplines it is necessary to make sure that the universe of discourse is sufficiently well-defined before identifying one’s primitives — consider difficulties such as constructivism in mathematics for example — but in physical science the need is far more pervasive and demanding; in fact, one cannot even talk about setting one’s assumptions before studying the circumstances, processes, and objects under study. That was the essence of the invalidity of the classical Greek notions of atoms for instance. Again, conflation of the formal with the empirical certainly invalidated part of their view of geometry, leaving a core upon which formal geometry eventually could be based.

Irrespective of what might apply in formal disciplines, any assertion in physical contexts, any proposition concerning physical entities, generally involves semiotic considerations such as:

  • a physical source, such as a speaker, an empirical phenomenon, or a written message
  • a code, such as a language or notation, and
  • the storage, transmission, and commonly also the processing, of information; in particular, that information will concern some object, and will not be purely formal.

Once the assertion has been composed and expressed or transmitted, there should be matching information at the receiving end and at the source, and as a rule also at the object that the information refers to. To the extent that any of this does not follow, one might fairly assess that the communication has failed. Furthermore, if the information at the various nodes of the communication relationships fails to correspond in particular ways, communication has failed in those ways. For example, it might successfully transmit symbols and syntax, but fail in conveying the semantic intent.

Let us assume successful communication with the appropriate correspondence between source, receiver, and subject. It need not be complete or precise correspondence, nor contextually comprehensive; in fact it necessarily never is. What matters in our case is what we commonly may call isomorphism, literally meaning sameness of form.

Here I do not refer specifically to the common mathematical term “isomorphism” for structure-preserving mappings — my usage is something far less rigid: in our physical world “isomorphism” could be any sort of appropriate logical correspondence, whether in shape, count, parity, sequence, relationship of parts, or more. This concept of isomorphism might not even be symmetrical.

In practice, in the physical world, what we commonly regard as a correspondence in this sense never applies absolutely and formally. It fairly commonly might apply roughly, say, well enough to satisfy all parties to a transaction, but even when some limited aspect of interest does match, there always are differences, if only in coordinates, and one difficulty is to decide in which contexts those differences are sufficiently irrelevant to be neglected without untoward penalties.

Communication is only one example of that sort of striving after isomorphic correspondence. Control is another, although the two concepts are not fully distinct. In controlling, say, a vehicle (its speed, its route, its economy, or anything else), the correspondences between the intention in the mind of the driver, the movement of the controls, and the route, situation, speed, noise, and more, all are physical isomorphisms of such types. Control of a thrown ball in sport is similar in principle.

Correspondence of objects is another example. If we want two crystals of sodium chloride, or two statues of bronze, exactly the same, and somehow we miraculously start with such a pair, perfectly matched in shape, mass, and whatever else we controlled for, we still would wind up immediately with different atomic and isotopic counts, and the paired objects would vary in their rates of evaporation and reaction with the air. Such differences are pervasive beyond intuition; recent experiments have even resolved the difference in gravitational time dilation between atoms at different heights within a single millimetre-scale sample.

These considerations are inseparable from observation, measurement, description, interaction and prediction. We never can achieve perfect precision in any such thing, and we never can put a limit to the consequences of even the tiniest difference, even within Planck limits.

And yet the correspondences, however imperfect, remain ubiquitous and vital throughout our lives. Their isomorphism is not true in the sense that a mathematical or formal isomorphism is true, and the differences can be as important as the correspondences, as we can see in everyday chaos: potting one snooker ball with one shot might be good, validly potting two with one shot would be surprising, and validly potting four with one shot would be incredible. Symmetry breaking, too, is ubiquitous in our world: balancing a needle on its point is not in general possible, and even if it were, predicting its eventual direction of toppling and its place of coming to rest would not be.

These concepts are so widely important that I have appropriated the term “plesiomorphism”, meaning, not identity of form, but sufficient nearness of form in context, to describe the imperfections of our everyday “isomorphisms”. Taxonomists use the word “plesiomorphism”, or some of its derivations, in cladistics, but that sense is so distinct and specialised that we can ignore their usage as irrelevant.

Mathematicians often object to this usage of the term “isomorphism”, which is deeply entrenched and precisely defined in mathematics. That, however, does not disqualify its use in other disciplines, and “isomorphism” is established in the terminology of disciplines such as crystallography and sociology, in distinct meanings that lead to no confusion in context. There is, after all, a wide range of terms borrowed from informal language for formal or technical use, or vice versa, or between disciplines; consider “bit”, “work”, “infinitesimal”, “equality”, “imaginary”, “tangent”, “evolution”, “nervous”... There are thousands more.

Accordingly I feel no pangs of conscience in appropriating “isomorphism” to refer to identity of form in a wide range of contexts, or “plesiomorphism”, which in our sense means nearness of form, though not typically identity of form, whether in mathematical physics or in the “physical algebra” of objects that are subject to physical operations.

In this essay I do not discuss the concept of plesiomorphism in depth, but I shall show that some of its applications could suggest approaches either to the destruction or to the salvation of the concept of axiomatisation of physics, or to related concepts that are similar in force but different in nature.

 

Formal and Physical Mathematics

It is impossible to do exactly one thing; in this universe things themselves do things.
Anonymous

Mathematics in general emerged originally, at various times and places, as what we now would call applied mathematics: the representation of physical realities in such functions as illustrating, measuring, predicting, forming, and counting, largely of material things. The idea of abstracting formal mathematics without reference or relevance to physical realities developed gradually down the ages, largely in a fog of contradictions and confusion. The earliest example of axiomatics that has survived, as far as I know, was that of Euclid, and its development began under the apparent delusion that it was a true reflection of physical realities: that is, applied mathematics. The idea of a Euclidean point or line we now can confidently state to be a physical impossibility, most relevantly because it would require infinite information to determine anything of the type.

This did not stop us from using Euclidean geometry rewardingly, even though nearly everyone using it failed to realise that what they drew or measured was not even an isomorphism of the underlying formal geometry, but a fiction or a caricature. It differed not only quantitatively from the object represented, but also qualitatively; its points were not points, its lines not lines and its planes not planes. Its measurements were not merely wrong, they were not even consistently wrong.

And yet, Euclidean geometry was, and has remained, immensely useful, arguably the second-most ubiquitous branch of mathematics. We really cannot imagine doing without our caricature of geometry. If what the caricature amounts to is not an isomorphism, what could it be?

A plesiomorphism, say I.

A plesiomorphism is a model of the mutual relationship between relevant attributes of systems under consideration. As I have shown, entities participating in a plesiomorphism need not resemble each other any more than a sheet of paper with pencil marks on it, or a beach gouged with sticks, resembles the building it represents for the builder; or than the view through the sights of a howitzer resembles the target or the path to the target; or than the ruler measuring the string or cloth to be cut resembles string or cloth; or than the force of my hands or feet or voice on the horse I ride resembles any intended change of momentum.

What a plesiomorphism most ideally does represent is the difference between:

  • what happens in the course of events in the presence of all the participating entities in a situation; and
  • what happens if such participation is changed or omitted, or,
  • when widely differing situations evolve in subjectively analogous, but not precisely identical ways, such as turbulent flow of liquids, or schooling fish, or human crowds.

Plesiomorphisms do not imply that only one underlying relationship could in principle bring about a given outcome; they are not generally precise, and even if they were, there are notionally different ways in which even ideal causality could bring about similar, plesiomorphic, outcomes. This last point is the basis of underdetermination.
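
A toy sketch may make the point concrete; the two "laws" below are my own arbitrary inventions, chosen only because they agree on every observation in a small hypothetical data set while differing everywhere else, which is the situation that underdetermination describes:

# A minimal sketch of underdetermination, under the assumption that our only
# observations are the inputs 0..3. Both candidate "laws" are hypothetical.

observations = [0, 1, 2, 3]

def law_a(x: int) -> int:
    # Coincides with law_b at x = 0, 1, 2, 3 because the product term vanishes there.
    return x * (x - 1) * (x - 2) * (x - 3) + x

def law_b(x: int) -> int:
    return x

print([law_a(x) for x in observations])   # [0, 1, 2, 3]
print([law_b(x) for x in observations])   # [0, 1, 2, 3]  -- the data cannot tell them apart
print(law_a(4), law_b(4))                 # 28 4          -- yet they diverge beyond the data

The plesiomorphic agreement over the observed range says nothing, by itself, about which law, if either, holds elsewhere.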

Consider some classes of limitations on the precision of outcomes:

  • Euclidean points, lines, planes, and similar spatial regions.
    Their coordinates are defined with precisely zero latitude. They have no physical existence matching their notional conception. Given such notionally perfect precision, there need be no difference between plesiomorphism and isomorphism, and I am not sure whether symmetry breaking could apply in such a universe. The reason that we can neglect such notional entities is that it would take infinite information to determine any of them, and our observable universe has no capacity for physical infinities (a rough calculation follows after this list).
  • Infinitesimals, such as those postulated in Newton’s and Leibniz’s calculus.
    They are not single ideal points, mathematically speaking, but they have no more physical reality than points do.
  • Limits are trickier concepts and replaced infinitesimals in the mathematics of calculus. Still, they too have no material reality.
  • Planck units of mass, length, time and other variables are physically real; they are at about the limits of the precision available for practical measurement, either by instrument, or by physical event. Though small (the Planck unit of length is of the order of 1.6 × 10⁻³⁵ m), they are not mathematically point-like and are not commensurate with infinitesimals or limits. However, the outcome of events that demand greater precision than Planck units is physically unpredictable because of the quantum nature of physics.
  • Point-like events. Even if we were to ignore quantum mechanical constraints, or mathematical limits, certain classes of precision are beyond physical prediction; for example, when strictly convex rigid bodies or sub-atomic particles make contact, they notionally meet at a mathematical point, or at any rate at something smaller than a Planck-unit area, and if they collide, the direction of rebound will never be the same unless the contact is at notionally the same mathematical point, an event of probability zero because notionally it would require infinite precision.

In practice the contact area could never be exactly zero either, but its measure could well be less than a Planck unit. It hardly matters whether this is of practical interest in a world in which quantum mechanics applies; the fact remains that, irrespective of quantum considerations, perfect precision, or infinite information, remains unattainable in any sort of world that matches our empirical reality.

  • A related concept that may require careful development in considering plesiomorphism and underdetermination is that they commonly deal, not with unique or precise outcomes, but with ranges or classes of outcomes. Consider the toss of a coin; we regard the outcome as binary: heads or tails. In fact there is a large class of outcomes, most of which we ignore or do not distinguish. The coin might land on edge, or get lost, or broken; it might land in the dark or in mud, or land in various places where it could indeed be read, though the reading might not be the same to all observers. It might land acceptably, but in any of millions of variations of the direction in which the exposed face was oriented relative to the compass. The compass rotation we regard as irrelevant, generally being interested only in which face was up, but in ignoring irrelevant variations we need to be careful to distinguish between the relevance of the concepts of underdetermination, plesiomorphism, and isomorphism.
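
To put a rough, purely illustrative number on the claim about Euclidean entities above: specifying a coordinate within an interval of length L to a tolerance ε takes roughly log₂(L/ε) bits. Locating a point along one metre to Planck-length precision would need only about log₂(1 m / 1.6 × 10⁻³⁵ m) ≈ 116 bits per coordinate; locating it exactly, with ε = 0, would need infinitely many bits. The estimate is my own back-of-envelope sketch, but the qualitative conclusion does not depend on the details: exact Euclidean points, lines, and planes are informationally unaffordable in any finite physical setting.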

The main implication of Planck units and of point-like events is that, in defiance of our conceptions of mathematical idealisations such as points, genuine randomness is ubiquitous in our world, both because of quantum principles and because of mathematical limits to physical information. Apart from any other concept of randomness, some information about any event or object always remains unavailable, not because it happens to be unknown, but because the information does not exist at all; it is not real. Accordingly, true physical isomorphism is impossible because it, too, would require infinite information. Any event dependent on that missing information is to that extent truly random, because any bias necessarily and intrinsically requires information, and information, whether noise or signal, reduces ultimate randomness.
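
The last point can be illustrated with a standard information-theoretic measure; the small calculation below is my own illustrative sketch, using the Shannon entropy of a binary outcome with bias p, H(p) = −p log₂ p − (1 − p) log₂ (1 − p):

# A minimal sketch: any bias in a nominally random binary event corresponds to
# information, and the entropy (residual randomness) falls below the fair-coin
# maximum of 1 bit. Purely illustrative; the probabilities are arbitrary.
import math

def entropy_bits(p: float) -> float:
    """Shannon entropy of a binary outcome with probability p of heads."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(entropy_bits(0.5))    # 1.0    bit: a fair coin is maximally random
print(entropy_bits(0.6))    # ~0.971 bit: a 60/40 bias already "costs" randomness
print(entropy_bits(0.99))   # ~0.081 bit: a nearly certain outcome carries almost no randomness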

And such reasons are why plesiomorphism is ubiquitous, while physical isomorphism is a simplistic fiction. While physical isomorphism would require infinite information, plesiomorphism requires only sufficient information for particular practical purposes. Accordingly plesiomorphism does not lend itself to precise prediction or measurement, whereas isomorphism should in principle permit prediction and measurement of arbitrary precision. One point of view might be that the challenge to physical measurement is to minimise the discrepancy between isomorphism and plesiomorphism.
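
As a minimal sketch of the contrast, assuming nothing beyond a tolerance chosen for the purpose at hand (the names, numbers, and tolerances below are all my own illustrative inventions):

# Plesiomorphism as used here: correspondence judged against a context-dependent
# tolerance, rather than the exact identity that a physical "isomorphism" would demand.

def plesiomorphic(measured: float, modelled: float, tolerance: float) -> bool:
    """True when two descriptions agree closely enough for the purpose at hand."""
    return abs(measured - modelled) <= tolerance

length_drawn  = 100.0003   # mm: what a draughtsman's nominal 100 mm line "really" is
length_formal = 100.0      # mm: the formal length in the drawing

print(plesiomorphic(length_drawn, length_formal, 0.01))     # True : good enough for carpentry
print(plesiomorphic(length_drawn, length_formal, 0.0001))   # False: not good enough for metrology
print(length_drawn == length_formal)                        # False: exact correspondence never holds

The same pair of values is plesiomorphic for one purpose and not for another; the tolerance, not the objects, carries the burden of the judgement.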

And that too is why Hilbert’s suggestions of recourse to the theory of probabilities and mechanics as examples of topics for axiomatisation of physics were futile. As bodies of theory they could be mathematically precise, but as descriptions of physical reality they were plesiomorphic. Though he could not have foreseen the emergence of quantum theory, he could in principle have predicted the limits to physical predictability and isomorphism, even if we accept that formal information theory was still a few decades in the future.

However, he said:

If geometry is to serve as a model for the treatment of physical axioms, we shall try first by a small number of axioms to include as large a class as possible of physical phenomena, and then by adjoining new axioms to arrive gradually at the more special theories.

(...Soll das Vorbild der Geometrie für die Behandlung der physikalischen Axiome massgebend sein, so werden wir versuchen, zunächst durch eine geringe Anzahl von Axiomen eine möglichst allgemeine Klasse physikalischer Vorgänge zu umfassen und dann durch Adjunktion neuer Axiome der Reihe nach zu den specielleren Theorieen zu gelangen...)

This all implies that the very concept of axiomatisation of those physically constrained disciplines is not well defined, whereas axiomatisation of the mathematics applied in physics does not significantly differ from any other mathematical axiomatisation; it therefore seems to offer nothing of special importance in theory or practice. Some approach on the lines of fuzzy logic might hold promise, but I am not aware of any spectacular advances in this field. Axiomatisation is of wide academic interest, but rarely could play a role in discovery or proof in physics, and in particular not before the physical primitives are well defined.

Although, in physics, information that corresponds between matching systems or entities is ubiquitous in causation, measurement, mechanism, or interest, no such matching ever is absolutely precise. Measurement is a special class of physical interaction between objects, intended to abstract information from the plesiomorphism between the instrument and the object. But nothing that Hilbert said seems to have addressed this as one of the fundamental facts of existence, and I do not assert that he had any inkling of the topic or attached any importance to it.

More fundamentally however, every physical interaction varies according to the quantitative parameters leading to the event, and so every interaction is in physical essence a measurement; the only thing missing in most cases is the intent to measure; if I toss a fair coin, I do not care exactly how it bounces, even if information on every impact and vibration were available. All the same, this sort of measurement, this quantitatively consistent interaction, is the basis of physical causality and plesiomorphism. It supplies the “observer” in quantum-theoretic riddles such as that of Schroedinger’s cat.
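
A highly idealised sketch of that sense of "measurement" follows; the model and numbers are my own and deliberately naive, ignoring bounce, air resistance, and everything else:

# In this toy model the face a tossed coin finally shows is a deterministic
# function of its launch parameters, so the landing "measures" those parameters
# whether or not anyone intends a measurement.

def landing_face(spin_hz: float, flight_time_s: float) -> str:
    """Which face is up depends only on the number of half-turns completed."""
    half_turns = int(2 * spin_hz * flight_time_s)
    return "heads" if half_turns % 2 == 0 else "tails"

print(landing_face(20.0, 0.500))   # heads
print(landing_face(20.0, 0.525))   # tails: a 5% change in flight time flips the recorded outcome

The outcome is, in effect, a one-bit record of the launch conditions; the intent to measure is the only ingredient missing.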

Variables affecting determination and underdetermination may be primitives, in which case the practicability of devising axiom systems for physics depends on how many types of primitives there are. If there are very few, the axioms would address mainly the most reductionistic aspects of physics, such as the interactions of elementary particles or other primitives, rather than the interactions between derived or emergent compound entities. If there are unmanageably many types of distinct primitives, the concept would be of limited value in determining axioms.

Finally...

On a huge hill,
Cragged and steep, Truth stands; and he, that will
Reach her, about must and about must go,
And what the hill's suddenness resists, win so.

John Donne

Commonly, physical variables do occur in large collections or structures of compound entities, each consisting of more primitive objects in particular interrelationships. Such compound entities are almost indefinitely underdetermined, and we ignore their internal plesiomorphic relationships, for convenience regarding them, fictitiously, as isomorphisms. We speak of rigid bodies, spheres, lines, clouds and the like, minimising or ignoring their discrepancies from the simplistic ideals.

This is not a matter of approximation, but of intrinsic lack of sufficient information to define absolutely precise parameters of any entity, whether by measurement or interrelationships with other entities.

Compound entities are indefinite in number, form, class, and consistency. We face the dilemma: whether to regard them reductionistically as consistent and relatively simple, reducing their number of distinct identities or classes, or simply to admit that they are beyond axiomatisation. Instead of axioms, we then make do with convenient assumptions and plesiomorphisms.

I suggest that any approach to the construction of any set of axioms, assumptions, and operations in physical reality should take account of the concepts I discuss in this essay if it is to have any pretence to conviction, let alone function. In short, unless some relevant axiomatisation takes the principle into account, I do not believe Hilbert-6 could have general relevance to reality; we need to develop a theory of plesiomorphism in physics before we can so much as discuss formal axiomatisation of physics.

Formal mathematics and physical realities are not mutually isomorphic; they are plesiomorphic at best. To what extent, and in what sense, physics is at all isomorphic with reality is a separate question, and one that we shall remain unable to determine before we have determined the primitives of physical reality, and their nature, and the ranges of interactions that they can undergo.

Whether Hilbert intended it or not, reflection on his sixth problem refutes speculation that we might be at the threshold of the “end of physics” or even the “end of science”. That absurdity was suggested near the start of the twentieth century, and now has been repeated near the start of the twenty-first century. Before anything of the kind were imminent, we would need to:

  • Establish the primitives underlying our empirical world, primitives that as yet we have not even shown to be definable in principle.
  • Develop a coherent theory of emergence from any class or level of primitives or structures.
  • Develop a coherent theory of determination and underdetermination of states or events.
  • Develop a physical algebra of the objects and operations that comprise our empirical world.

To what extent any such challenges originally contributed to the conception of Hilbert-6, and whether they offer a basis for any of Hilbert's hopes for an axiomatic or meta-axiomatic structure for dealing with them, is unclear; but that matters less than the fact that we have no hope of a fundamental understanding of our universe without at least achieving some perspective on such questions. Equally, we need some conception of how many other unrecognised classes of challenges belong in any such list (consider, for example, the nature and implications of subjective consciousness).

 

Reference:

I cannot find a satisfactory reference to the proceedings in which the original essays appeared, from which I composed this essay. The full link, which works for me, is:

 

https://d1wqtxts1xzle7.cloudfront.net/84303356/XX_FAMEMS_2021_Conference_VI_H_s6P_Workshop_ISBN_978_5_6045634_1_0_80pp-libre.pdf?1650176455=&response-content-disposition=attachment%3B+filename%3DProceedings_of_the_XX_FAMEMS_2021_Confer.pdf&Expires=1653902179&Signature=QCfkR3PTluElpmVmxqCL~6y1yNBhfkQvMTYKo27gqO5-ZV38i9M7xgGgajnk5a55cFUvjmRGFJ0kftXk6o3MTrRh6-H~vmCelBnWNbcjDQxLjBpY67sp7pigc84M42QhzimC~PeA2PYFMRi1O6WfqfaQz4zdyHHallOn16~D33VGUoTcvKM4XjRct1FJDmbHxOEH7EGaoFpRs3MdXhYHl-HgQ87U-lBh5F3ilq5qSzkiQZH1JX-v7Jea-UmMiomD7GOtfv468bh5vuuvaDO9A6jWOxCAH8VSTBjE1JZrLs~RqVlyOkvJusCZyw4uAS7f8bJToUzXxZJeLNb0dgs4hA__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA