The Limits of Language & Logic – Part 2

Continuing the previous article:


  1. Gödelian Incompleteness and Turing’s Halting Problem  
  2. The Limits of Logic & the Influence of Entropy
  3. Non-Duality
  4. The Limits of Language
  5. Footnotes

Gödelian Incompleteness and Turing’s Halting Problem  

Hui Yang, “Nursery of New Stars”, University of Illinois, Wikipedia, NASA, 1995. 

“‘O Deep Thought Computer,’ he said, ‘the task we have designed you to perform is this.

We want you to tell us …’ he paused ‘ … the Answer!’

‘The Answer?’ said Deep Thought. ‘The Answer to what?’ ‘Life!’ … ‘The Universe!’ … ‘Everything!’ they said in chorus.

Deep Thought paused for a moment’s reflection.

‘Tricky,’ he said finally. ‘But can you do it?’

Again, a significant pause. ‘Yes,’ said Deep Thought, ‘I can do it.’

‘There is an answer?’ … ‘Yes,’ said Deep Thought.

‘… But,’ he added, ‘I’ll have to think about it.’”1

(Douglas Adams)

Logic is just another language, with all of the inherent issues that arise when we mistake usefulness and consistency for the deeper ‘truths’ about reality.

In the world of computing, the logic behind the ‘halting problem’ was developed – at the same time but in quite different ways – by Alonzo Church and Alan Turing in 1936. This was part of a much wider scientific debate, running in earnest since 1900, over whether mathematics is always internally consistent and self-evident – that is, whether every true statement can be proved from a system’s axioms, or whether some assumptions or external understanding must always be brought into the proofs.

[Human] understanding and insight cannot be reduced to any set of computational rules … [Gödel] appears to have shown … that no such system of rules can ever be sufficient to prove even those propositions of arithmetic whose truth is accessible, in principle, to human intuition and insight2
 (Sir Roger Penrose)

Kurt Gödel proved that mathematics (and indeed any formal system or language of logic) has a degree of irreducible incompleteness.

The relationship between consistency (a system not proving contradictions) and completeness (a system proving all true statements) is crucial. Gödel’s theorems demonstrate that, for sufficiently powerful systems, one cannot have both. This inherent trade-off between coherence and completeness has profound implications for the limits of formal reasoning.

The notion of statements being “true but unprovable” within the specific formal system being used (the agreed set of rules) is key to Gödel’s work on incompleteness. Gödel leaves open the possibility that there might exist other, more powerful systems in which such statements are provable, but this comes at the cost of introducing new axioms or assumptions.

The work of Church, Turing and others in the lambda calculus and computability was to map the incompleteness of logic onto the concept of practical calculation engines (algorithms run by humans or machines).

For non-mathematicians, a simpler analogy to Gödel’s theorems, which comes close to expressing the point, is the family of ‘liar’s paradox’ statements, such as ‘This statement is not true’ (A). The problem is that a sentence can be constructed to be grammatically and semantically correct without giving rise to any stable ‘truth’ value. If (A) is true, then what it says holds, so it is not true; if (A) is not true, then what it says holds, so it is true. Either way, a paradox arises. There are many variations on this word game, but the long and short of it is that increasingly complex constructions can be built that cannot be verified just on their own terms.

“If I were to tell you that the next thing I say would be true,
but the last thing I said was a lie,
would you believe me?”
(Dr Who)

Gödel proposed two main connected theorems that are most relevant here to the problem of pure logical incompleteness.

  1. No consistent system of axioms, whose theorems can be specified in an algorithm, can prove all truths about the arithmetic of natural numbers. There will always be statements about natural numbers that are true but unprovable within the system used.
  2. No such consistent system of axioms can demonstrate its own consistency. This means the system cannot evidence its own truthfulness entirely self-referentially.

Gödel also contributed substantially to thinking on the continuum hypothesis (concerning the possible sizes of infinite sets) using set theory. Gödel went further than the ‘liar’s paradox’ and effectively proved that the statement “this statement is not provable (G)” was true, whilst not being provable from the logic used to construct it (named T) – hence the notion of logical incompleteness. As with the limits of language, it is not surprising that Gödel discovered proof of logical incompleteness. Wise people have been saying the same for millennia. He brought us logical proof of the limits within this particular language.

Gödel’s work is very useful for understanding some of the potential limitations of an entirely algorithmic approach to ascertaining ‘truth’, and the need for insight and intuition across multiple frameworks to make sense of things. It shows that systems relying on formal logic and algorithms face inherent limitations in their ability to reason about and understand certain concepts. The existence of true but unprovable statements suggests that AI (like us) might struggle to achieve a deep, intuitive understanding of language and the world based entirely on formal logic (the distinction between “performance” – simulating understanding – and genuine comprehension).

In computing, for certain calculations, it is impossible beforehand to know whether the algorithm will ever finish the steps required to conclude an answer to an arbitrary question.

To start off simply, take the example of π (pi), which is the ratio given by dividing a circle’s circumference by its diameter. It has been proven that this decimal expansion never completes and continues to ∞.

π is a finite number between 3 and 4 (so it is determinable to any arbitrary decimal place, even though its expansion does not terminate) with an infinite, never-repeating decimal expansion. Numbers like this are called ‘irrational’, meaning real numbers that cannot be expressed as a simple fraction. π is also known as a transcendental number, since it is not algebraic (it is not the root of any polynomial equation with integer coefficients).

If we were to use an algorithm to calculate the ‘final’ decimal expansion of π, we know that its calculations could never be completed. It would therefore be a process that would – theoretically – not stop (halt) at any point before the end of our universe. This is a simple analogue of the halting problem, whereby, e.g., a computer program is set a task whose completion cannot be determined in advance. In this case, as π is an irrational number, we already know not to run a program to look for the last decimal place. However, if we were to ask a suitably powerful computer to calculate the thousand-trillionth decimal place, it could do it.

It is obviously extraordinarily wasteful to run a computer program with a calculation process that is potentially never-ending (albeit actual computers have limits in time, memory, energy and entropy). However, unlike with π, there are many problems in mathematics and physics for which we do not know whether their answers are determinable or non-determinable. It would be helpful to have a program or computer tell us which are which before any calculation is performed. In practice, programmers will normally specify a point at which a program should arbitrarily stop; for example, when calculating π, a specified number of decimal places is set, after which the program stops. It is a quirk of language that π and other such numbers are named ‘irrational’, given the alternative and wider meaning of that term as ‘unreasonable’ – though here it just means not expressible as a ratio of two integers (such as ‘1/2’ – a close approximation for π is 22/7).
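The stopping point must be chosen by the programmer, because the series itself never runs out. A minimal Python sketch, using Machin’s formula for π with fixed-point integer arithmetic (the function names and the choice of ten guard digits are my own illustrative assumptions):

```python
def arctan_inv_fixed(x, scale):
    """arctan(1/x) as a fixed-point integer (true value multiplied by scale)."""
    power = scale // x          # first term of the series: 1/x
    total = 0
    n, sign = 1, 1
    while power:                # halts only because each term shrinks to zero
        total += sign * (power // n)
        power //= x * x
        n += 2
        sign = -sign
    return total

def pi_digits(digits):
    """The first `digits` decimal places of pi, returned as one integer."""
    guard = 10                  # extra digits to absorb truncation error
    scale = 10 ** (digits + guard)
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi = 16 * arctan_inv_fixed(5, scale) - 4 * arctan_inv_fixed(239, scale)
    return pi // 10 ** guard    # drop the guard digits

print(pi_digits(20))  # 314159265358979323846
```

The point of the sketch is that the loop terminates only because we imposed a finite precision up front; ask for the ‘last’ digit and the program would never halt.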

Any algorithm (including those in the human mind) that aims to calculate something complex needs to break it down into discrete operational steps. In 1936, Alan Turing proved3 that there is no general algorithm or computational process (whether a human process, computer program or machine) that can determine whether any specific calculation process will need to run forever or if it will halt given an arbitrary input (i.e., any potential input). Turing proved this using the logic of contradiction, in that he proved that any such algorithm that did try to determine it could be made to contradict itself.

Turing used the thought experiment of a machine that has unlimited resources and magically knows the answer. Unsurprisingly, Turing showed that no such machine can exist. He proved that any such infinitely powerful halt program would not be able to determine the answer if instructed to run a variation of the operation on itself.

The hypothetical process can be thought of as follows: let us imagine a program (Halt Program or HP) that could always tell us whether an arbitrary computer process (algorithm) stops when given an arbitrary input. The HP has to stop whatever the answer is (we assume for the sake of the thought experiment that such a magical machine is possible). The HP always stops when providing the answer, since if it did not stop then all we would know is that it has not stopped yet. But how would we know whether HP would stop in the future? We could not know. So, HP must magically work by always stopping (halting) and giving us a true or false answer about whether another computer process would run forever or stop eventually if given any input (e.g. a calculation).

We can prove this by contradiction. Imagine another computer process (Tricksy) that takes the output of HP and reverses the action: if HP states that a process does not halt, then Tricksy halts, and if HP says a process halts, then Tricksy loops forever. Now for the head-hurting part: HP is asked whether Tricksy halts when Tricksy is given its own description as input. If HP says Tricksy halts, then Tricksy loops forever, and if HP says Tricksy does not halt, then Tricksy halts. Running a halting program on this special self-referential construction gives rise to a contradiction, and this tells us that, no matter how good or magical a halting program is, whether an arbitrary program will halt must remain undecidable.
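The contradiction can be sketched in Python. The construction below is a simplified illustration, not Turing’s formal proof: `make_tricksy` builds, for any candidate halting-checker `hp`, the one program that `hp` must misjudge. The two naive checkers at the end are my own stand-ins to show the trap closing on a concrete example.

```python
def make_tricksy(hp):
    """Given any candidate halting-checker hp, build the program it must misjudge."""
    def tricksy():
        if hp(tricksy):         # hp claims tricksy halts...
            while True:         # ...so tricksy loops forever
                pass
        return "halted"         # hp claims tricksy loops, so tricksy halts at once
    return tricksy

# No concrete hp survives its own Tricksy. Two naive candidates:
def hp_always_yes(program):
    return True                 # claims every program halts

def hp_always_no(program):
    return False                # claims no program halts

t = make_tricksy(hp_always_no)
print(t())                      # "halted" - contradicting hp_always_no
# make_tricksy(hp_always_yes)() would loop forever - contradicting hp_always_yes
```

However clever a real `hp` is made, `make_tricksy(hp)` always yields a program whose behaviour is the opposite of whatever `hp` predicts for it.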

Apollo – the magical predictive AI bot

I find the proof of the halt program very difficult to understand and explain (I am only a bear with a little brain), so I have been working on a modern analogy that seeks to show the limits of recursive (self-referential) logic identified by Turing with his halt program (and by Gödel and Church) to non-technical persons. 

AI generated image of Apollo

Imagine an AI chatbot – Apollo – that possesses the magical ability to predict any question posed or statement made in a prediction game. Apollo wins if it makes a correct prediction. 

The prediction of Apollo and the question or statement of the other player must be given separately to a trusted third-party AI (Nakamoto). Apollo decides to play the prediction game with copies of itself (Apollo 1 and Apollo 2).

The results of the first prediction game are as follows:

  • Since Apollo 1 and Apollo 2 are magical, Apollo 1 must be able to predict what words (output) Apollo 2 will send to Nakamoto.
  • However, Apollo 2 also has the same predictive abilities. It can foresee Apollo 1’s prediction and change its output accordingly before it hits send to Nakamoto.
  • Obviously Apollo 1 anticipates this change and adjusts its prediction before hitting send to Nakamoto.
  • However, Apollo 2 predicts that Apollo 1 has adjusted its prediction, so it changes its proposed prediction again.
  • Apollo 1, in turn, foresees this change as well and adjusts its prediction once more. Apollo 2 is wise to this and so adjusts its prediction.

This cycle continues indefinitely, creating an infinite loop of predictions and changes by our magical Apollos that continue until the electricity is turned off! (You will note also that Apollo cannot win by telling Nakamoto that Apollo 2 will not send any output either, since Apollo 2 would predict this and send some arbitrary output to Nakamoto but of course Apollo would predict that too and so on…∞.)

Poor Nakamoto never receives a single message from either Apollo.

We can do more games and add time limits and a requirement for one of the Apollos to go first, and it makes no difference – every outcome proves that, no matter how magical Apollo’s abilities, it is impossible for it to successfully predict its own prediction.

  • Apollo 1: Prediction t; Apollo 2: Prediction t. Result: neither Apollo 1 nor Apollo 2 sends a prediction to Nakamoto (they both loop forever).
  • Apollo 1: Prediction t+1; Apollo 2: Prediction t+1+1. Result: if Apollo 1 goes first and Nakamoto receives an incorrect prediction from Apollo 1 (Apollo 2 magically predicted Apollo 1’s prediction), this is proof that Apollo 1 cannot predict, because it did not predict Apollo 2’s prediction. If Apollo 1 goes first and Nakamoto receives a correct prediction from Apollo 1 (Apollo 1 magically predicted Apollo 2’s prediction), this is proof that Apollo 2 cannot predict Apollo 1’s prediction.
  • Apollo 1: Prediction t+1+1; Apollo 2: Prediction t+1. Result: see above (changing Apollo 1 to Apollo 2 in the sequencing).
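The endless adjustment loop can be sketched in a few lines of Python. The `adjust` rule is my own stand-in assumption: all that matters is that each Apollo, on foreseeing the other’s current output, is guaranteed to change its own, so the pair of predictions never settles into a state stable enough to send to Nakamoto.

```python
def adjust(foreseen):
    """An Apollo that foresees the other's output always changes its own (any change will do)."""
    return foreseen + 1

apollo_1, apollo_2 = 0, 0
history = []
for step in range(1000):            # stand-in for "until the electricity is turned off"
    apollo_1 = adjust(apollo_2)     # Apollo 1 foresees Apollo 2's current output
    apollo_2 = adjust(apollo_1)     # Apollo 2 foresees the adjustment and changes again
    history.append((apollo_1, apollo_2))

# A winning round would need a fixed point where neither Apollo moves; because
# adjust always changes its input, no state is ever repeated:
print(len(set(history)) == len(history))  # True - the loop never stabilises
```

The game would end only at a fixed point (each Apollo’s prediction already matching the other’s final output), and the construction guarantees no such fixed point exists – the same self-referential knot as Tricksy and HP.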

Turing showed that the halting problem is undecidable in principle as a useful variation to the incompleteness principles discovered by Gödel (note that while the halting problem is undecidable, many practical problems are merely intractable, meaning they have theoretical solutions but require immense computational resources). He showed that certain problems and calculations are irreducible or cannot be solved entirely within a self-referential framework. There may also be a fundamental impossibility of bridging between the discrete and finite (calculable information) and the potentially infinite. The use of sets is often an attempt to break down the concept of infinity – the concept of unlimited or unbounded extension – into discrete quantities (or conceptually larger or smaller infinities) or to find other shortcuts that might avoid doing necessary work to understand an aspect of reality.

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”4

(Albert Einstein)

The Limits of Logic & the Influence of Entropy

Real knowledge requires work. Algorithms may provide more efficient routes (shortcuts) to find solutions, but they can never avoid some work. 

The P v NP problem relates to something called “polynomial time”, which is a way of comparing how complex a computation is with how long it will take. If the time taken to solve a problem can be expressed as a polynomial of the input size (effectively, the time is no more than the size of the input raised to some fixed power), then that problem goes in the “P” category.5

Interestingly, one of the major open questions in computer science and mathematics is the P versus NP problem:

Can every problem whose solution can be quickly verified also be quickly solved?

This is a fundamentally important question, particularly for cryptography (which heavily relies on the assumption that P ≠ NP, i.e. that it is easier to verify some solutions than to find them).
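The verify-versus-solve gap is easy to feel with a classic NP problem, subset sum: does some subset of a list of numbers add up to a target? Checking a proposed answer is one pass; finding one by brute force means trying subsets, whose count doubles with every element. A small Python sketch (the helper names are my own):

```python
from itertools import combinations

def verify(candidate, target):
    """Verification is quick: one pass over the proposed subset."""
    return sum(candidate) == target

def solve(nums, target):
    """Brute-force search is slow: the number of subsets doubles per element."""
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = solve(nums, 9)
print(solution, verify(solution, 9))  # [4, 5] True
```

If P = NP, some clever algorithm would make `solve` roughly as cheap as `verify` for every such problem; cryptography bets heavily that no such algorithm exists.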

We cannot always know, without doing work, the answers to the questions we wish to ask about the universe. If we could know such answers without doing specific work, then we would have no issues of indeterminacy or information entropy; we could effectively cheat the universe by knowing many of its secrets for free. In a universe where we could get answers to questions without doing work, there would be no need for intelligent life forms (intelligence would be wasteful in such a universe).

Life arises and changes from the interaction of positive information (always changing but potentially knowable facts) and negative capability and freedom of action (statistical deviance from the norm). If everything was predetermined and knowable beforehand then there would be no need for negative capability or freedom to deviate (including with intelligence). Deviancy, including greater intelligence, is an elegant and natural algorithm for a world in which, in practice, it is impossible to know all facts without doing work on each problem or to know which solutions will turn out to be better or worse than other solutions. In such a large and ever-changing universe, actual work and intelligence to solve problems will always be required.   

“But an inner voice tells me that it is not yet the real thing. The theory says a lot, but it does not really bring us any closer to the secrets of the Old One. I, at any rate, am convinced that He does not play dice.”6

(Albert Einstein)

I concur with Einstein: God (or nature) does not play dice. I prefer a card game example. The universe is the dealer of hands in a card game with a very, very large number of cards. It may know all hands, but it makes universal rules to ensure that everyone else must play with only their own hand and the cards they see as they are dealt.

Дмитрий Фомин (Dmitry Fomin), “Playing Cards”, Wikipedia, 2014, public domain.

Consider a pack of cards: if they are randomly ordered (shuffled) and then dealt out, what are the chances that 26 red cards would be dealt out first (in any numerical or face-card order) consecutively, and then 26 black (again in any order)? Very low (approximately 1 in 495,918,532,948,104).
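The figure can be checked directly: of the 52! possible shuffles, only those placing every red card in the first 26 positions qualify – 26! orderings of the reds times 26! orderings of the blacks. A quick check with Python’s standard library:

```python
from math import comb, factorial

# Favourable shuffles: 26! orderings of the reds in the first half
# times 26! orderings of the blacks in the second half, out of 52! overall.
favourable = factorial(26) * factorial(26)
total = factorial(52)
odds = total // favourable          # exact: the ratio is the integer C(52, 26)

print(odds)                         # 495918532948104
print(odds == comb(52, 26))         # True - same count as "choose the 26 red positions"
```

The second line shows the equivalent framing: the probability is 1/C(52, 26), one over the number of ways to choose which 26 of the 52 positions hold red cards.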

What extraordinary negative entropy such a starting position would represent. Yet this highly improbable starting position for a pack of cards would be of absolutely no use or interest to a beetle, a blue whale or some humans. The simple fact remains that a shuffled pack of cards consisting of 26 red cards and then 26 black cards as its starting position when dealt is not in any universal sense more useful than a mixed pack (the statistically more likely starting position), unless you are a life-form that is betting on the colour of the next cards.

The universe appears to go to great lengths in its laws to avoid absolute pre-determinacy at all scales. In the example of the first 26 cards being red, whatever configuration the next 26 cards are in, we know – without doing any more work – that each card dealt will be black. There is zero entropy, in information terms, within the remaining pack with respect to the quality or value ‘black’. This move from very high initial uncertainty of the colour sequence to absolute certainty of the colour of the cards as they are dealt gives a simple general sense of entropy in information terms.
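This drop from maximal uncertainty to certainty can be put in Shannon’s terms: the entropy of the next card’s colour starts at one full bit and falls to zero once only black cards remain. A small Python sketch (the function name is my own illustrative choice):

```python
from math import log2

def colour_entropy(reds, blacks):
    """Shannon entropy (in bits) of the colour of the next card dealt."""
    total = reds + blacks
    h = 0.0
    for count in (reds, blacks):
        if count:                   # 0 * log2(0) is taken as 0
            p = count / total
            h -= p * log2(p)
    return h

print(colour_entropy(26, 26))   # 1.0  - a fresh deck: maximal colour uncertainty
print(colour_entropy(13, 26))   # ~0.918 - uncertainty falls as the reds run out
print(colour_entropy(0, 26))    # 0.0  - only black remains: zero entropy
```

In the red-then-black deck, each red card dealt moves the remaining pack down this curve until, at the halfway point, the colour of every remaining card is known for free.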

Perhaps probability is central to two of our strongest scientific theories (quantum and thermodynamic) because this is precisely how deterministic outcomes are avoided in our universe, not by random chance but by probabilistic outcomes.

This ‘truth’ must therefore show up in every field of action and information. It is the basis for ethics, evolution, freedom, logic, entropy, information theory and cryptography. Life and intelligence arise and survive in the gaps between causality and chaos, between usable energy and entropy (information uncertainty). 

Recognising the undecidability, incompleteness or indeterminacy that exists in the universe is not the same as stating that the universe operates on randomness. We can find answers to many questions, but there will always be a cost. The work required for the payoff of obtaining a solution is how the universe keeps all participants honest and ensures that no life forms have an absolute vantage point (since that is God’s or the universe’s sole preserve). The uncountable configurations of information and matter (microstates) – the total entropy – is how the universe achieves this and is why all attempts to reduce or quantify it completely must ultimately fail to be truthful.

Perhaps our perception of reality, our languages and our sciences help us to create useful constructs to make sense of endlessly interacting bubbles or quanta of possibility.

“Every macroscopic system and its surroundings are characterized by probability distributions of microstate[s]”.7
(David Layzer)

Yet, we should be careful to understand the nature of the questions that we ask, to consider which questions may not be askable or answerable in binary terms or relying on signs. Perhaps, as the ancient scientist Gautama Buddha reflected, meditation on the deepest questions requires signless silence. 

AI generated image

“[7.5 million years later] … ‘Seventy-five thousand generations ago, our ancestors set this program in motion,’ the second man said, ‘and in all that time we will be the first to hear the computer speak.’ …

‘All right,’ said Deep Thought. ‘The Answer to the Great Question …’

‘Yes…!’

‘Of Life, the Universe and Everything …’ said Deep Thought.

‘Yes…!’

‘Is…’ said Deep Thought, and paused.

‘Yes…!’ ‘Is…’ ‘Yes…!!!…?’

‘Forty-two,’ said Deep Thought, with infinite majesty and calm. … It was a long time before anyone spoke.

Out of the corner of his eye Phouchg could see the sea of tense expectant faces down in the square outside. ‘We’re going to get lynched, aren’t we?’ he whispered.

 ‘It was a tough assignment,’ said Deep Thought mildly.

‘Forty-two!’ yelled Loonquawl. ‘Is that all you’ve got to show for seven and a half million years’ work?’

‘I checked it very thoroughly,’ said the computer, ‘and that quite definitely is the answer.

I think the problem, to be quite honest with you, is that you’ve never actually known what the question is.’” 8

(Douglas Adams)

Non-Duality

The seeming multiplicity of reality requires theories and frameworks (including ethical ones) that foster maximal compatible (structurally sound) diversity of opinions. In the book Ethics of Life, I proposed an experimental ethical framework focused on making justifiable relative ethical decisions, within the constraints required for actions that are consistent with, and required for, the proper functioning of the whole universal ethical system (Compossibility).


A ‘Newton disc’ or spinning colour wheel illustrates how colour is made up of a unitary oneness, that is, white light. At first glance, it may seem ironic that I am suggesting any ethical framework founded on the principle of universality or invariance would propose maximum ethical relativism. However, if we consider the invariance or ultimate ‘truth’ in ethics to be the white light (life energy) and the ethical degrees of freedom to be relative colours on the wheel (actions of life forms), it is hopefully easier to understand.

Apparent contradictions that often cause us conceptual difficulty may be a particular problem arising from a strong dualistic tendency in Western culture, particularly in over-simplified monotheistic religions. Non-dualistic thinking teaches that:

the multiplicity of the universe is reducible to one essential reality.

Non-dualism also requires its apparent opposite: dualistic thinking. When all perspectives are allowed their particular uses, together they get closer to expressing the ineffable nature of reality. Einstein’s understanding of the equivalence of matter and energy is so powerful because it sees through appearances to the underlying unity of matter and energy – whereas much of Western thought has long been plagued by a tendency to treat different aspects of reality as essentially divided (e.g., Cartesian mind–body separation).

Let us not beat around the burning bush. The attempt to separate mind and body has been used to label one good or holy and the other bad or evil, giving rise to some unspeakable evils by allegedly sapient beings. In the reckoning of our history of teaching love and compassion for one another, it has been perhaps the greatest error of thought and the greatest act of ‘sin’ perpetrated within Western religions.

“You do not have to be good.

You do not have to walk on your knees

for a hundred miles through the desert, repenting.

You only have to let the soft animal of your body

love what it loves.” 9

(Mary Oliver)

In Buddhism, non-duality is associated with the concept of emptiness (śūnyatā). The ‘hard’ modern scientists and the mind scientists of old are essentially saying the same thing. We must always bring some holistic assessment and judgements to gain insight into our understanding of the nature of things. The systems and processes that give us life, life forms, logic and reason are not entirely hermetic and self-evidential solely within their own frame of reference. We must synthesise across different fields of enquiry using diverse means if we are to understand the complexities, chaos and harmonies within the universe. In Taoism, we have the concepts of yin and yang, which embody the unitary nature of the appearance of relative things. 

“the concept of oneness would instantly perish without its counterpart ‘duality.’ Both are provisional concepts … true non-duality is … beyond oneness.” 

Even non-dualism is a provisional understanding. Take the yin–yang symbol; you can see this by considering that both yin and yang include and define one another, but that together they express a oneness that is still separated from (i.e., does not contain) the greater space around it. There is no way to quantify what is boundless.

A sphere is able to withstand great pressure or force because it distributes force equally across its whole boundary. The sphere is therefore found everywhere in nature as the strongest 3D shape. 

Perhaps the formal similarity between a circle and a zero (0) is accidental – they do not appear to have the same linguistic root – yet it is interesting that the concepts of wholeness and emptiness reflect one another in their shape.

Underneath the appearance of all things – the interplay of time and space, energy and matter, order and chaos, knowledge and entropy – there is a unity of duality, zeros and ones, emptiness and fullness.  Theories of cosmic cycles point this way, too, with the end of this universal epoch likely resulting in a re-collapse of the great sphere or ellipse of the universe back to a singularity – before a new grand epoch begins again. Alternatively, we live in a fractal multiverse where each end of space-time here in this universe (every black hole) is a generative singularity for another universe. Ends are, after all, also beginnings.

For general artificial intelligence, we will need programs that can ‘think’ in non-dualistic ways. This means not ignoring or discounting a multiplicity of perspectives that may be valid (particularly if the matter in question is not a universal law). Conversely, they need to be subtle and intelligent enough to see through seeming complexity to the hidden, often simpler, core of reality or truth using logic, imaginative exploration, visualisation, poetry and philosophy. Complex probability-space surveying lets AI winnow towards where answers may be hiding. AI may find new answers by not making human assumptions about what is possible. Reaching an understanding after such surveys is not, however, entirely probabilistic.

“[my] aim in philosophy [is] to show the fly the way out of the fly bottle”10

 (Wittgenstein)

The Limits of Language

“Words strain,

Crack and sometimes break, under the burden,

Under the tension, slip, slide, perish,

Decay with imprecision, will not stay in place,

Will not stay still.”11

(T. S. Eliot)

Language, like logic, has its own incompleteness problem. That is why some aspects of the recent Western tradition of the analytic philosophy of language have often been damaging to philosophy and ethics. This analytic philosophy has, in some respects, represented a regression to a time before the understanding of Buddha thousands of years ago.

Under the correct approach, we realise that the use of language presupposes, implies or assumes so many different things about the world that are not contained within the word or even the language. The concept of a ‘table’ suggests legs (though of course not all things with legs are tables) and chairs and eventually leads to everything that is not table (that is, everything else). It brings association with ‘wood’ and carpentry. This in turn brings craft, trees, clouds, earth and weather, bringing water cycles and molecules (oxygen and hydrogen), which brings stars, energy, entropy and gravity. And so on, indefinitely.

Pratītyasamutpāda … a theory about how we gain correct and incorrect knowledge about being, becoming, existence and reality. The ‘dependent origination’ doctrine … ‘highlights the Buddhist notion that all apparently substantial entities within the world are in fact wrongly perceived. We live under the illusion that terms such as I, self, mountain, tree, etc. denote permanent and stable things … ’ There is nothing permanent … no unique individual self in the nature of becoming and existence (anatta), because everything is a result of ‘dependent origination’ … there is fundamental emptiness in all phenomena and experiences.12

The word ‘table’ needs the concept ‘table’, and both concept and word need an intelligent life-form to arise from the clay by billions of years of evolution to talk about such things. Language can be useful, but it cannot be ‘true’ in its deepest sense – it can only be used to try to approach or approximate truth. The same can be said for many of our logical, mathematical or abstract concepts. There is no such thing as tableness in reality; it is just a more or less useful construct. Likewise, the concept of a set containing an infinite number of tables is meaningless if you cannot have an infinite number of any real things (since things are by definition discrete and not continuous). This does not invalidate the concept of boundlessness. Indeed, even space may be atomic (in the old sense of not continuously divisible) at a sufficiently small scale.

No reader would think that their given name contains within it all that is unique about their genes (code), life experience, cultural interests, family history, desires, suffering and hopes. A human name is not an example of successful algorithmic compression of that human – it is an extraordinary abstraction much like defining humans exclusively by colour, class, race and gender. Language is algorithmically efficient but each word in itself is not necessarily so. Subconsciously, we visualise words in a space where they are each attracted, repulsed, related and coordinated to create a meaningful pattern. 

I’ve seen Plato’s cups and table, but not his cupness and tableness

(Diogenes)

Interestingly, abstract concepts like ‘tableness’ are very hard to define for computers using algorithms. Young humans seem to intuit ‘tableness’ without having to see a very large variety of tables first. This is the opposite of how computer programs learn ‘tableness’. Humans appear to be able to perceive the concept of an idealised abstract quality of which real objects might be seen as a projection, reflection or manifestation. In a sense, the word and concept or quality of ‘tableness’ is like a geometric object that exists in a different dimension or plane; our abilities allow us to intuit ‘tableness’ easily when presented with any new manifestation.  

Our ability to believe in abstract concepts is both a blessing and a curse. It can be very useful; however, we are not always able to separate complex ‘reality’ from the potentially useful abstractions (religion, race, country, money). This gives rise to many horrifying actions against others, to continuing delusions of strong free will and to a dangerous misunderstanding of our specialness. All of these help us to think that we are somehow separate from (not interdependent with) other life forms; that we are somehow, uniquely, not part of the wider life force on our planet or subject to Earth’s capacity to support complex, large life-forms like us.

Humans need to learn to distinguish concepts, abstractions and beliefs from the inexpressible reality of the universe. We need to come back down to Earth and understand our rightful place as just one part of it – not above it. Even in respect of some of our greatest achievements, we are obstinately wrong-headed and suffer from a form of intra-species solipsism. Consider ‘our’ extraordinary achievement in leaving this planet and travelling in space. For a moment, meditate on the extraordinary improbability: a raw and tiny rock, hurtling through the heavens whilst growing a fragile, living skin; and that skin deciphering how to join the fiery firmament, how to leap between the stars … reaching out towards infinity … ∞.

What need we of this misguided and petty concept of individual ‘free will’ in such a wonderful life and universe?  

Socrates explains how the philosopher is like a prisoner who is freed from the cave and comes to understand that the shadows on the wall are not reality at all. A philosopher aims to understand and perceive the higher levels of reality. However, the other inmates of the cave do not even desire to leave their prison, for they know no better life … For that reason, the world of the forms is the real world, like sunlight, while the sensible world is only imperfectly or partially real, like shadows.13

Aristotle disagreed with Plato’s Socrates, focusing instead on the specific features of nature and natural ‘real’ objects – that is, concrete particulars. Aristotle was an early empiricist who perceived ‘tableness’ or ‘treeness’ as something that is part of each actual object. That is, each particular tree or table is not an imperfect projection of an ideal form; each actual table or tree is simply one member of the total set of actual tables or trees (past, present and future). All real tables or trees can be described as falling within that ‘abstract’ set, but the abstraction is one of usefulness, not truthfulness.

Benedictus Spinoza takes the concept of individual material forms versus the energy from which forms arise to a higher level in his Ethics. Individual bodies having the same essential attributes (what they exist as) are not separate but are all part of one substance. He calls each individual manifestation a ‘mode’ of that unitary substance, having a difference in extension (physical configuration) only. Each individual body has an essence which is its inertia to destruction, its persistence of shape or form over time that requires energy to maintain or dissipate.

For Spinoza, the ultimate reality (being God or the nature of things) is not material though it may manifest in infinite attributes; it is a self-caused substance. In respect of the difference between things and the ideas of things (such as a ‘table’ and ‘tableness’), Spinoza contends that these have different chains of causation (parallel streams) in thought or conception and in extension, but they are really just different attributes manifesting the same underlying immaterial reality. Spinoza was also a free will sceptic.   

Language is a communal tool that suffers from very significant compression and loss of fidelity when used to express the ineffable, giving rise to many of the problems we face when seeking to get to the root of matters (a type of inverse holographic principle). Language requires significant ‘unpacking’ and interpretation at the receiving end of any communication, and great care by the sender, given all that is not said but is necessarily implied in or carried with each word or concept. This difficulty has been one of the central challenges in developing natural language AI tools, and the ‘Transformer’ architecture has been key to addressing it.
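The ‘unpacking’ that Transformers perform can be sketched in a few lines. At the heart of the architecture is scaled dot-product attention: each word’s interpreted meaning is rebuilt as a weighted blend of every context word’s meaning, with the weights set by how relevant each context word is. The tiny two-dimensional ‘meanings’ below are illustrative assumptions, not real learned parameters.

```python
import math

def softmax(xs):
    """Turn raw relevance scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: the query word's 'unpacked' meaning
    is a relevance-weighted blend of the context words' meanings."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Three context words with toy 2-d keys and values (invented numbers).
keys = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
values = [(0.9, 0.1), (0.1, 0.9), (0.5, 0.5)]
blended = attention((1.0, 0.0), keys, values)  # leans towards word one
```

Everything not said but carried with each word is, in this mechanism, supplied by the context: change the surrounding words and the blend – the interpretation – changes with them.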

Vincent Van Gogh, Tree Roots – Wikipedia

My favourite painter is Vincent Van Gogh. Vincent was an extraordinarily compassionate artist. As such, he had no desire to render a simplistic, lower-dimensional picture of a tree, which we could better see with our own eyes. He was trying to help us glimpse the much richer life force that animates the tree and all other life forms – the invisible energy and visible form combined – his brush strokes, so rich and heavy, magically conjuring the hidden life force into being.

Van Gogh learned the language of trees and it is without words.

It takes a kind of ‘magic’ to do this. For this reason, great artists, scientists and holy people seem like beings from outer space or the future. In the modern world, we have lost sight of the meaning of the word technology (from Greek techne, “art, skill, cunning of hand” and logos “to speak of”), assuming that it can only apply to things like computers, machines and industrial inventions. Technology is much broader than this. It is the systematic practice (application) of arts, crafts, and knowledge – science in its broadest sense.

We all know that Art is not truth. Art is a lie that makes us realize truth, at least the truth that is given us to understand. The artist must know the manner whereby to convince others of the truthfulness of his lies.14

(Pablo Picasso)

Technology is, therefore, any cultural practice and collective skill that enables us to understand the nature of things more clearly, to do useful things and to avoid bad practices. We have forgotten to respect, maintain and seek to evolve our technology of the mind. We cannot compress the universe’s inexpressible reality into finite-dimensional representations (words, numbers or pictures) without losing much of the meaning of what we are trying to express, or risking confusion between the expression and the reality it is trying to express.

The cultural technology used by humans – but which they did not invent – that gets closest to expressing the truly ineffable is music (some mathematicians make a similar claim for mathematics). Perhaps it is the underlying abstract mathematical nature of harmonies – the precise interrelations between frequencies (the logic of music) – coupled with the emotion of the sounds created by such imperfect vessels (animals and the instruments they use to make sounds). Perhaps it is also the lack of a specific compression (encoding) and interpretation (decoding) process.

Music allows for lossless communication between composer, singer and listener – albeit music and maths are still just a more or less useful finite reduction or representation of the ultimately unsayable. 
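The ‘precise interrelations between frequencies’ can be made concrete. Consonant intervals are simple whole-number ratios: an octave is 2:1 and a perfect fifth 3:2. Equal temperament, the tuning of modern keyboards, approximates these ratios with powers of the twelfth root of two, trading exactness for the freedom to play in any key. A short sketch, using the standard concert pitch of A4 = 440 Hz:

```python
# The 'logic of music' in miniature: consonant intervals are simple
# frequency ratios, which equal temperament only approximates.
A4 = 440.0  # Hz, standard concert pitch

just_fifth = A4 * 3 / 2              # 660.0 Hz: the 'pure' 3:2 ratio
tempered_fifth = A4 * 2 ** (7 / 12)  # ~659.26 Hz: 7 equal semitones up
octave = A4 * 2                      # 880.0 Hz: 2:1, exact in both systems

print(just_fifth, tempered_fifth)    # the two fifths differ very slightly
```

Even here the map is not the territory: the ratios describe the logic of the harmony, not the emotion of the sound.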

“…only silence and music

can delve into the abyss 

and return unscathed

yet try we must 

again again and again”15

(Peter Howitt)

Poetry can also sklent at this higher reality, with its use of pregnant possibilities, uncertainties and seeming contradictions. Paradoxically, poetry groks and speaks the unspeakable.

“and I made my own way,

deciphering

that fire,

and I wrote the first faint line,

faint, without substance, pure

nonsense,

pure wisdom

of someone who knows nothing,

and suddenly I saw

the heavens

unfastened

and open”16

(Pablo Neruda)

Footnotes

  1. ‘The Hitchhiker’s Guide to the Galaxy’, 1978.  ↩︎
  2. ‘Shadows of the Mind’, 1994.  ↩︎
  3. ‘On Computable Numbers, with an Application to the Entscheidungsproblem’.  ↩︎
  4. ‘Geometry and Experience’, 1921.  ↩︎
  5. Matt Parker, ‘Win a million dollars with maths, No. 2: the P v NP problem’.  ↩︎
  6. ‘Letter to Max Born’, 1926. ↩︎
  7. ‘Cosmology, initial conditions, and the measurement problem’, 2010.  ↩︎
  8. ‘The Hitchhiker’s Guide to the Galaxy’, 1979. ↩︎
  9. ‘Wild Geese’, from Dream Work, 1986. ↩︎
  10. ‘Philosophical Investigations’, 1953. ↩︎
  11. ‘Burnt Norton’, Four Quartets, 1941. ↩︎
  12. Pratītyasamutpāda, Wikipedia. ↩︎
  13. Allegory of the Cave, Wikipedia. ↩︎
  14. ‘Picasso Speaks’, The Arts, 1923.  ↩︎
  15. ‘Some Solace’. ↩︎
  16. ‘Poetry’, Selected Poems, trans. Anthony Kerrigan, 1993. ↩︎
