Tautologus Rex

H!

By Tom Veatch

Version 1.4

 

Logic and Her Daughters

Evolution, Cognition, Emotion.

Introduction

When I was an undergraduate in 1983 I served as the T.A. for Stan Peters' graduate-level class on logic (and math) for linguistics at Stanford. So for 40 years, off and on, I have stewed in, taught, and felt a sense of responsibility for the fundamental ideas discussed here. It seems things have settled, simplified, become clearer, so I want to share (not just rant).

Looking back, years later, my friend Mohammad Abd-Rabbo, the Palestinian linguistics professor, said this was the most boring and useless class of his entire career. I give him some credit, because Tautology, my subject, can be understood as meaninglessness itself, and also because Symbolic Logic can draw one into incomprehensible thickets of technical noise with a signal-to-noise ratio gradually approaching zero. Similarly, Aristotle's syllogistic, itself the basis of Greek, Roman, Arabic, and medieval logic, is both mere tautology and a thicket of technical noise enabling, with difficulty, formal and correct, non-trivial and bulletproof reasoning. As I show here.

But if you will be patient, I hope you will soon find traction, and an accessible easy path to true insight, from which you start to see vistas opening. Many vistas, for science, and for yourself.

I'm arguing for a somewhat unfashionable, but radically ambitious view of logic. I will shoehorn this view, using the slogan "H" for H Notation, onto various topics like evolution, psychology, and human evolution, hoping to favor simplicity, understandability, and true insight, in all these fields. So put your seat belt on.

The Queen of All Knowledge

My argument starts with the dominance of logic. Logic is the king and the queen over math, physics, and every science. Why? Because Tautology is its key and foundation. Tautologus Rex!

My first Penn phonetics professor, Leigh Lisker, told me: "The less you say, the less you lie: the less you are wrong." At the time I thought he was telling me to shut up, but later realized it was more than and different from that.

Tautology is just repeating yourself. As in 1 = 1. Simple. Nothing to it. Yes, on the one hand, tautology says nothing. On the other hand, look: it cannot be denied!

It is like the water way of the Tao: it has no more substance than an imagined cloud, and yet it is more powerful than rock or steel, unbreakable. It is trustworthy, for the patient. But these enthusiasms are for you to discover, not for me to say, or sell.

I believe: Logic, math and 1=1 are not functions of time; they are true before and after now, before and after any now, before the now of the first beginning at the big bang and after the now of the final end in the universal maximum-entropy dissolution. Before and after the origin of life, and irrespective of any mind that knows it or doesn't know it. Describe some process today, with truth: irrespective of whether an instance of it occurred yesterday, today, or tomorrow, your description IS, WAS, WILL BE true -- for all time. Define something today, and your definition applies outside time, to things your eyes do see, or will see, but also to things in the past you can never see again. Tautology is automatically true; and truth is outside of time.

As an example, consider logical possibility in the case of repeated evolution into an unoccupied niche, as in the Galapagos finches, or post-dinosaur-apocalypse predation, etc. You have a species B that evolves to fill a niche which was previously or elsewhere filled by another species A. Now, did the evolutionary niche exist before B evolved to fill it? Obviously Yes, because A had filled it before or elsewhere, so of course it existed, as a possibility. Well then, did that niche exist before A filled it? There is nothing different between A filling it and B filling it, so if it existed before B filled it, it must also have existed before A filled it. Therefore evolutionary niches exist, let's say in possibility, or in logic, BEFORE they are occupied.

Does not the same argument apply to life and physics and everything? Was life possible before life happened? Yes it was, otherwise it couldn't have. Do the laws of physics, where they are universal and true, that is, which are merely definitional, describing tautological logic amongst terms, do these laws not apply outside of time itself, and therefore also inside of time and therefore before, during, and after the Big Bang and everywhere in the universe? Yes, that's what universal means.

Abstraction in nature

The relationship between language (or let us reduce it to abstraction) and reality is the fundamental mystery here. But that's not so complicated. Two things in the real world that are similar share some sameness about them. Consider multi-use, or shall we say effective, mechanisms (since without their ability to be used consistently or repeatedly or predictably, even within slightly-variable circumstances, we would hardly call them very effective): when they operate, they obviously operate over the multiple things or circumstances in the real world for each of which they are effective. But this is nothing but abstraction: the class of things picked out by the mechanism over which it can operate is thereby no less than an abstract category of things, an agglomeration of similars into a category each member of which can be treated the same, and which is treated the same, by that mechanism. If the mechanism is nuclear or electrical or gravitational attraction or repulsion, or quark interaction before the big bang, its sameness of input/output relations makes it an abstraction, a thing that applies in a general way.

Okay then, yes, if you have a general mechanism in nature, it IS itself an abstraction, by definition, because of its generality. Despite diversity, nature does happen to be full of similars and samenesses, of consistents and repeatables, and those are the parts and aspects that organisms, and we, are interested in and need to know or be able to control, being as we all are indeed organisms with, usefully, a will to power, that is, a will to the control of general mechanisms. So if we humans, as an evolved, social, teamwork type of species, have evolved to control and use an inventory or library of abstractions in a lexicon of signals that we can communicate to each other, that makes a lot of sense. Contemplate the opposite: to be unable to detect sameness in general would be quite the detriment to our capabilities; and to be unable to communicate samenesses would limit our capacities for social achievement as well.

By these links, abstraction connects to reality on the one hand and to the lexicon on the other. I haven't, indeed, derived syntax from first principles, no; but animal cognition, and the cognitive capability to reason tautologically, to apply logic: I think we have enough here to connect them all, abstractions, minds, and nature itself, in a tight circle of mutual dependency, if not perfect equivalence.

We happen to have the right kind of minds (we'll get to what that means later), but even alien minds must share logical reasoning.

Let's try an example. Two 90 degree rotations (in a plane about a center) are the same, in a sense, as one 180 degree rotation. Does this depend on human cognition, or the words or sentence in which it is expressed? Does such cognition depend on or require a symbolic language to capture the insight as mathematical expressions? I say No; rather, cognition itself depends on, itself perceives, the logic itself, or some abstraction of physicality from which this logical equivalence is derived.
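To make this concrete, here is a minimal sketch in Python (assuming numpy, my choice of tool, not the essay's): composing two quarter-turn matrices yields exactly the half-turn matrix, whatever words or symbols we attach to the fact.

  import numpy as np

  def rotation(degrees):
      """2x2 matrix rotating the plane about the origin by the given angle."""
      t = np.radians(degrees)
      return np.array([[np.cos(t), -np.sin(t)],
                       [np.sin(t),  np.cos(t)]])

  # Two quarter turns compose to one half turn, independent of notation:
  assert np.allclose(rotation(90) @ rotation(90), rotation(180))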

My purpose will be to deconstruct logic into tautology, then expand this reliable zone of perfect understanding out from there.

Formal Logic as Tautology

Even so-called rules of formal logic themselves reduce to tautology.

  • Take negation. A != NOT A (we like our symbols, so we will write A ≠ ¬A) seems like one expression but it says the same thing twice. Something that is NOT A, well, that is not (equal to) A. H. Have we repeated ourselves again? Not exactly, perhaps not entirely, yet certainly in some important essence, we have said the same thing twice. We operationalize this in the truth table, T for true, F for false, and all the conceivable possibilities each in a different row.
     A   ¬A   ¬¬A
     T    F    T
     F    T    F
  • Take double negation. A == NOT NOT A. Indeed, in logic, A is an expression of, in a way, all those circumstances in which A is true. And NOT A is the opposite, and the opposite of the opposite is the original.
    Cuteness is evoked. To read your back, look in two mirrors: no longer the (single-mirror) opposite of yourself. See your hair parted on one side in the mirror? Have you gotten used to seeing yourself that way? That's the opposite of what others see on you. Others would be surprised to see you as you see yourself, oppositely. The opposite of the opposite finally orients the same way you started. Like double negation.
Back to logic, everything logical about A is retrieved, after converting A to NOT A, by converting NOT A to NOT NOT A. So A == NOT NOT A. This is how logicians think; it's called the law of the excluded middle, asserting that there's no in between thing that's neither A nor NOT A. Based on that, A == NOT NOT A follows.

Here's the shocking lesson I want to draw. If B = NOT NOT A (== A), then NOT NOT B == NOT NOT NOT NOT A. In fact there's no limit to the number of equivalent expressions, (NOT NOT)^N A, for all N >= 0. This is quite curious, quite general, quite basic. It's a basic consequence of the separation in the linguistic Sign between the represented and the representation, between the signifier and the signified. There's one thing conceptually in your understanding, and there's an infinity of equivalent expressions for that one thing which could potentially be written down that are logically identical.
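A brute-force sketch of that unbounded equivalence in Python (my illustration, not part of the original argument): wrapping a proposition in NOT NOT any number of times never changes its truth table.

  def truth_table(f):
      """Truth values of a one-place propositional function, over both cases."""
      return [f(A) for A in (True, False)]

  def double_neg(g):
      """Wrap g in one more NOT NOT: a longer expression, the same meaning."""
      return lambda A: not (not g(A))

  expr = lambda A: A           # the bare proposition A
  for n in range(5):           # (NOT NOT)^n A for n = 0..4
      assert truth_table(expr) == [True, False]   # identical to A every time
      expr = double_neg(expr)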

This is the trap of linguistic gamesmanship, and the problem faced by those who think the expression is equivalent to the intended idea behind that expression, those who confuse Symbolic Logic, for example, with Logic. In different ways, expressions can capture insights about ideas, even reliable insights about ideas, but there is no such thing as the unique correct expression of an idea.

Feynman himself interrupts his Nobel Prize speech to grope toward the same insight, if these can be merged. He says:

I would like to interrupt here to make a remark. The fact that electrodynamics can be written in so many ways – the differential equations of Maxwell, various minimum principles with fields, minimum principles without fields, all different kinds of ways, was something I knew, but I have never understood. It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. An example of that is the Schrödinger equation and the Heisenberg formulation of quantum mechanics. I don’t know why this is – it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn’t look at all like the way you said it before. I don’t know what the reason for this is. I think it is somehow a representation of the simplicity of nature. A thing like the inverse square law is just right to be represented by the solution of Poisson’s equation, which, therefore, is a very different way to say the same thing that doesn’t look at all like the way you said it before. I don’t know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.

And the great Kolmogorov offers a solution.

Kolmogorov suggests we look instead for the shortest correct description; that's helpful, though it assumes a mystery representational context, which might not be known or available -- or even exist, if multiple axiomatic foundations could lead to equally minimal description lengths.

We may start at an inner concept or understanding at the mental/cognitive/semantic level, where, in our insight-holding, perhaps Kolmogorov-simple minds, a humanly-understood true idea actually exists as the one thing it is. Then we may draw that out into an externalized form, an expression that captures some communicable essence thereof; there we seem to have lost uniqueness, entered the zone of blather, the forest of multiplied equivalences; we have started to confuse ourselves with tree after tree of tautological correspondence and derivation. Sometimes this is useful, sometimes not. Simplicity of form, and evoked inner insight, are valued here.

I want you to draw this lesson: the idea is the thing, not its expression. Even though expressing an idea, in one way or another, is the only way we can communicate it to someone else. (Possible exception: those ideas which are somehow previously shared by nature or nurture into the heads of sharers, and arise relevantly in shared contexts.)

Now we examine Symbolic Logic, which is of current fashion ever since Frege in the 1880's. In Symbolic Logic, we assert various Forms or Symbols as expressions of Logic along with matching sets of Rules whereby given one or more expressions that may include those symbols, it is permitted to write additional expressions; this is called formal reasoning because a text editing computer algorithm could implement the rules correctly. The forms would be: T(rue), F(alse), ¬ → ∧ ∨ ∀ ∃ and not too many more. (If we go back to Aristotle, then add 'a', 'e', 'i', and 'o', and 19 syllogistic moods a.k.a. methods, all of which are tautological as I have shown; much simpler than the reader might initially believe.)

But don't be fooled. The logical truths expressed in these various symbol systems are pulled back out of the page into our heads, or pulled out of one expression into a different equivalent one, the whole game being to find different, hopefully usefully different, expressions to mean the same thing, because sometimes we can reason ourselves into one expression or another, and so we need multiple different ones. (Which are the same.) It's Tautology, but that doesn't mean it's not Useful.

When you learn Symbolic Logic and see the insight of how two different expressions are the same, then you can store a rule of correspondence into your mental (or computational) library of equivalent correspondences, and pull them out in the process of trying out different derivations, wandering through the forest of expressions, possibly to find an expression of something that seems new and different.

Because some consequences are surprising, like the 100th prime number: we might be surprised to discover what it actually is; we maybe didn't know it in advance, even though in some kind of essence we did know it. We knew enough to derive it, by reasoning step by step, from counting to adding to multiplying to dividing, to checking a list against another list, and finally coming to what had to be true tautologically that we didn't know before. Tautology, knowledge with complete certainty, as we draw out the web of knowledge more and more, might be quite surprising and previously unknown.

So here's more Symbolic Logic, and how very tautological it is. (A short code sketch after this list grinds out these same equivalences mechanically.)

  • Take → for example (pronounced, "implies"). A → B is the same, logically, as ¬ A ∨ B.
     A   ¬A   B   ¬B   A → B   (¬A) ∨ B
     T    F   T    F     T        T
     T    F   F    T     F        F
     F    T   T    F     T        T
     F    T   F    T     T        T
  • Take DeMorgan's Laws, for example: (NOT A) OR (NOT B) == NOT (A AND B). The two sides are just different formal or notational ways of stating the same identical truth table.

     A   ¬A   B   ¬B   A & B   ¬(A & B)   (¬A)∨(¬B)
     T    F   T    F     T         F          F
     T    F   F    T     F         T          T
     F    T   T    F     F         T          T
     F    T   F    T     F         T          T
    The truth table expresses all possible combinations of true and false for the basic propositions A and B, so it can be used to reason over every possible case about derived propositions being true or false. If two expressions are the same as regards truth value in all possible circumstances, they are logically equivalent. Diversity of form does not imply diversity of meaning; different, even complex expressions can often reduce to the same thing. For example A = ¬ ¬ A = ¬ ¬ ¬ ¬ A = ... may be extended without limit, preserving truth conditions. You might express a truth in one way, only to find other expressions equivalent, equally valid. Tautology licenses variety of expression.

  • Take Modus Tollens, a.k.a. Modus Tollendo Tollens, The "Method of Removing by Taking Away". This is not quite as tautological as it sounds, at first, because it's removing P by taking away Q, which does seem to say something rather than nothing. But let's see if it doesn't reduce to a tautology even so. Modus Tollens says:

     1)  P⇒Q  given
     2)  ¬Q  given
     3)  ¬P 1,2, Modus Tollens
     

    , that is,

     P  Q  P⇒Q
     T  T  T  Ruled out by (2).
     T  F  F  Ruled out by (1).
     F  T  T  Ruled out by (2).
     F  F  T  The only remaining possibility.
    We consider all possibilities. Three are ruled out by the premises. The only one remaining, in which P⇒Q is true, and ¬Q is true (Q is false), is the last row, with Q false, and P also false. Therefore P is false given the premises. Indeed, the meaning of the premises, is restated in the conclusion, namely, that none of the first three rows holds, or to repeat, only the last row holds. The conclusion says nothing different from what the premises together say. Tautology. QED. H.

  • Take Modus Ponens, the rule that from separate propositions, P⇒Q, and P, one can validly derive the proposition Q: this is the mere definition of the implication notation "⇒".

     1)  P⇒Q  given
     2)  P  given
     3)  Q 1,2, Modus Ponens
    We understand that the meaning of ⇒ is that its antecedent (P) "implies" its consequent (Q); but this isn't just a reference into the related truth table in some footnote; it actually justifies deducing from the antecedent to the consequent as a valid statement within a proof of logic, and as a necessary and true assertion in any conceivable world that satisfies the axioms of that logic. That is, "implies" has one meaning, operationalized in two ways (thus tautologically, as different forms of the same thing). After all, the context of interpretation of the truth table is the same context of interpretation as the derivation: if (P⇒Q) is true in that context, it MEANS that P actually implies Q, that finding P to be true (asserted in a proof) justifies inferring Q (asserting Q in the proof).

    Operationalized differently, as editorial computation, consider the truth table below, which enumerates all possibilities and then adds P⇒Q according to the definition of ⇒. The following numbered notes are referenced in it: (1) the assertion P⇒Q crosses out the line of the P, Q truth table in which P⇒Q is false. Next (2), the assertion P crosses out the two lines in which P is false (because by asserting P we deny that P is false), leaving one line in which P and P⇒Q are true; and in that line, that remaining alternative, (3) Q must be true. All alternatives having been ruled out, Q must be true.

     P      Q      P⇒Q
     T      T (3)  T
     T      F      F (1)
     F (2)  T      T
     F (2)  F      T
    These operations on the truth table lead to the assertion of Q, and that's why Q can be added to the derivation in the proof after P⇒Q and P. The sameness of truth in the truth table and in the formal proof is why Modus Ponens is tautological.
Where THIS and THAT reference the same thing, THIS=THAT both says nothing and is absolutely true: it is tautological. Thus also Definition = Tautology.
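Here is the promised sketch. A few lines of Python grind out the truth tables above and perform the row-crossing-out mechanically; nothing deep, just the tautologies made executable.

  from itertools import product

  CASES = list(product([True, False], repeat=2))   # all rows of a 2-variable table

  def implies(p, q):
      """P ⇒ Q by its truth table: false only when P is true and Q false."""
      return not (p and not q)

  # Logical equivalence means identical truth tables:
  assert all(implies(A, B) == ((not A) or B) for A, B in CASES)          # A→B ≡ ¬A ∨ B
  assert all(((not A) or (not B)) == (not (A and B)) for A, B in CASES)  # DeMorgan

  # Modus Ponens: keep only the rows satisfying the premises P⇒Q and P ...
  rows = [(P, Q) for P, Q in CASES if implies(P, Q) and P]
  assert all(Q for P, Q in rows)         # ... and Q holds in every surviving row.

  # Modus Tollens: premises P⇒Q and ¬Q ...
  rows = [(P, Q) for P, Q in CASES if implies(P, Q) and not Q]
  assert all(not P for P, Q in rows)     # ... and only the row with P false survives.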

Newton's Tautological Reasoning

Apply "definition = tautology" to Isaac Newton. We know he knew about the hourglass, the yardstick, and the scale, and that there he was, speculating about the laws of nature, about different physical relationships. Under the mythological apple tree, he considered what he did know, measureable qualities starting with 1) weight (with the scale), 2) change in location (with the yardstick) and 3) time (by the hourglass), therefore in principle he could certainly define and measure change in location divided by time i.e. 4) velocity, therefore also (since you can measure velocity, you can also measure velocity twice, and then find the difference, which is how you measure ...) change in velocity over time i.e. 5) acceleration. So, W for weight, let's say, and A for acceleration.

Now the clever bit is, he decided to relate weight (under gravity) to acceleration by a discovered, no, an invented, no, a proposed, a DEFINED scaling he called mass, by saying M≝W/A, where the '≝' sign indicates a DEFINITION, or as we write it W=M*A, where the '=' sign rather conceals the definition in the equation. So in the context of terrestrial objects, it was already a tautology to start with: the definition of the mass of a thing is its weight divided by its acceleration (under gravity). The idea is, if you take the movement out of the weight, you get a degree of non-movingness, of deadness, or unreactiveness, or uninfluenceability, a quantity of resistance to acceleration: that is its mass. So far so good, so pleasant, so firm: we believe it because you can define anything you want, and it'll always be true by definition, in its imaginary definitional world. One can imagine how he was diddling around idly with the units for different kinds of things, and making up imaginary relationships to see where they lead, a formal game of units manipulation under the rules of tautological definition-making.
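The measurement chain can be played out in a few lines of Python, with made-up sample numbers (mine, for illustration): differencing positions gives velocity, differencing velocities gives acceleration, and weight over acceleration gives the defined quantity, mass.

  # Hypothetical yardstick-and-hourglass data: position (feet) once per second.
  positions = [0.0, 16.0, 64.0, 144.0]      # roughly 16*t^2, a falling object
  dt = 1.0

  velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
  accelerations = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]
  # accelerations == [32.0, 32.0] feet/sec/sec, constant under gravity

  weight = 64.0                     # pounds of force, read off the scale
  mass = weight / accelerations[0]  # M ≝ W/A, the definitional move itself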

But today we stand in awe of this quite especially useful definition, we call it Newton's amazing universal Second Law of Motion, that miraculously seems to apply equally throughout the universe, not just to an apocryphal apple bonking dear Isaac on the head in about 1666, but also over there, out to our planets, far out to other stars, and down inside here also even to molecules and atoms, and in time from the first beginning to the last end, or beyond. Was Newton's miracle the empirical universality of this law? Or was it just a definition, where as we know a definition is necessarily universal?

It is universal because it is a definition, which is to say, a tautology. Because of course M is DEFINED as F/A (in the context of gravity F=W; that is, an object's weight is the force it exerts being pulled down by gravity). A definition holds conceptually, that is, outside of time and the mind that conceived it: as a tautology, it asserts nothing, and saying nothing, it cannot be false, it is everywhere and always undeniable, true. It is like a perspective, come over here and look at it this way, then you'll see how things line up in this view. Every perspective is true; it's not the perspective that can be false but well or ill-observed conclusions of fact that may be drawn from viewing the world from that perspective.

So the miracle is not that F=MA is true, but that it is useful. Where gravity is disregardable, mass still is measurable, by applying non-gravitational force and seeing how much is needed to push the thing around. Nowadays we think the concept of mass that Newton defined into existence is as real as, or more real than, the measurements he started with to measure and define it. Flip a cognitive switch and mass becomes the primitive characteristic out of which complexities like force under gravity are built up, rather than the other way around. Think of it this way or that, however you find useful; the truth is it became true by tautology, by logic, the queen, the king of science, before and after, outside time itself. Tautologus Rex.

Plato awakes.

  • Similarly, (logical) possibility is different from, and precedes (in time and outside of time), both realization-in-reality and realization-in-subjective-conception. (See above w.r.t. ecological niches.)

Lazy Equals, or H notation

(See also here).

I like to say I'm a lazy boy, very lazy, where a choice exists. So here's a lazy boy trick. When I'm doing math, almost all the work intellectually is writing down what I know as an expression or equation and then thinking about it, primarily by substituting equal things into the expression. You can always substitute equals for equals and the result always is an equivalent expression, equally as true as you started with, because you could in principle substitute backwards, too, and get to the same thing.

My problem is penmanship. I suck at it, and I hate it. I have to re-write the whole equation all over, and then over again, when all I did was another tiny substitution of some part of it with something equal to that part of it. Maybe it's a very long equation, and maybe I have three or four parts to substitute, or maybe some parts get substituted time after time. And it makes for a lot of stupid, stupid copying work, when the only thing I really need to re-write in full is the last version at the end, when I want to talk about that one. So instead I just use what I call H-notation: It's a Lazy Equals sign (remember, "Lazy" applied to a cow brand means a 90 degree rotation), an H.

H-notation is just my way of writing an equals sign vertically. It's not a curly brace! Curly braces suck: they are SO hard to draw attractively when large, or with economy of space on the page. And curly braces have a different, more vague meaning: they pick something out for a generic comment, like circling something. The comment might or might not be a substitution of something equal to the marked bit, whereas H-notation means equality. Even though it's Lazy, it's still an equals sign. Even though it's partial, referring to a part of an expression, it still asserts equality between the above and below subexpressions that it connects.

Here: Draw a super-wide H below the part you want to say something about, and write what is equal to that part below the H.

 ...   A   ...
     |---|
       B
Obviously, A being equal to B, the expression ... A ... is equivalent to the expression ... B ..., because you can substitute B in for A.

Thus H-notation lets you substitute equals for equals within any expression or equation.

  F     =  m  *         a
|---|            |------------|
  Fg               32'/sec/sec
For example, above, Newton's Law F = m * a, in the context of an object under gravity, becomes the statement that the force of gravity on that object is the mass of that object times 32 feet per second per second.
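The same move can be mechanized. A minimal sketch assuming the sympy library (my choice of tool, not part of H-notation itself): substituting equals for equals inside an expression, which is all the H on paper records.

  from sympy import symbols, Eq

  F, Fg, m, a = symbols('F F_g m a')

  law = Eq(F, m * a)                    # F = m * a
  # Two H's: F |---| Fg, and a |---| 32 (ft/sec/sec)
  under_gravity = law.subs({F: Fg, a: 32})
  print(under_gravity)                  # Eq(F_g, 32*m)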

You can use H-notation as often as you want on either side of the equals sign, and you can do it in columns too.

I find that about half of my math homework problems go away, when I don't do the recopying and just use H notation. Try it, you'll like it!

Incidentally, the way real mathematicians seem to solve this is to skip or hide all the step-by-step reasoning and substitutions, and just write the last equation. If you ask them they say that part is "obvious". Because it's job security for them, you see, the more obscure the reasoning is, then the harder everyone has to work to keep up with them, and then they seem to be so much smarter than everyone else. That method works in a bunch of professions, like law, medicine, and even plumbing. But I think it's obnoxious, especially when you're teaching, and math is (or should be) nothing else but teaching people.

Write it down, see what comes out, that's how you actually teach yourself, and that's how you also teach others. Surprise! Language works! Who knew!

Dearie, we never told you this but Grandma Della had a secret baby named Robert with that roughneck Joe Smith before she met and got married to Grandpa Fred and had Helen and Fred Junior and John. But Grandma never told Grandpa she had a boy already, and made Joe raise him on the other side of town. So then Bob grew up and met your Aunt Helen, and fell in love and asked her to marry him. But after we explained the facts, she told him No. Because Bob's your uncle.

Tautology and its Uses

Another way to say "H notation" is to say "Tautology". Just as A = A is a tautology, you can also say

   A
 |---|
   A
Because it's the same thing. H.

Tautologies are statements that are true automatically. If something is true by definition, like The sky is above or The earth is below, then it doesn't actually say anything. Of course it's true: Duh!

Actually it's not duh! You can discover things that are very surprising using tautological reasoning, or H notation, or mathematical substitution: for example, that those three things are the same thing.

You philosophers will recognize H as a Kantian analytic judgement: the predicate is contained in the subject, as contrasted with synthetic judgements in which the predicate adds something more to the subject.

Axiom; Definition; Category (and Structuralist Thinking); Observation; and Tautology

Axiom

An "Axiom" (Greek "worthy") is a statement you assert without any justification, just because it is so obvious you can (and later you'll find you must) believe it without any other argument or supporting explanation. For example, in geometry "Given two points, there exists a line containing them", is an axiom.

Axioms provide supposedly solid ground (you have to evaluate them to see if you believe them, yourself) which then let you reason about stuff.

Sets of axioms together define the logical basis of domains of math or knowledge such as geometry or number theory, etc.

Sets of axioms compete to be the minimal set of axioms from which the knowledge in some topic can be derived. (See my essay contra Hilbert, Negative Dimensionality, for a list of redundant and excessive geometrical axioms which, I point out there, can be reduced and simplified.)

Sets of axioms fail when they are found to be mutually inconsistent. Inconsistency is when you can derive both a proposition and its negation, within the system. For example, Schopenhauer destroyed Hegel's philosophy when he proved that any proposition AND its opposite could BOTH be derived from accepted axioms using canonical, correct procedures of Hegelian reasoning.

Veatch's Conjecture: I have long thought that what Schopenhauer did to Hegel could be done to the deep abstractions combined with standard structuralist methodology as used in theoretical syntax and phonology in the field of Chomskyan linguistics. I conjecture as follows: any assertion about a deep abstraction in formal linguistics might be proven, including its opposite, by using the standard structuralist methods while simply modifying enough supporting assumptions about the meaning of other elements of assumed theory that the assertion can be made to follow, consistent with the data. It was this apparent freedom of conceptual replacement of basic ideas with their opposites, which I observed on the part of great formal linguists like Paul Kiparsky, that repelled me from those subfields of linguistics. In syntax and phonology in particular, the distance between the abstract primitives of theory and the concrete actual observations is very great. If any assertion, including its opposite, may be derived from the same set of facts using this method, then nothing is trustworthy: there, all is pointless. This might force such linguists to make fewer assumptions in their various theoretical approaches.
Inconsistency is a bitch. From a contradiction can be derived any proposition.
Proof: Let B be any proposition. Then:

1.  A ∧ ¬A     (assume any contradiction: both A and ¬A)
2.  A          ((1) and the definition of ∧ ("and"))
3.  ¬A         ((1) and the definition of ∧)
4.  A ∨ B      ((2) and the definition of ∨ ("or"))
5.  B          ((4), (3), and the definition of ∨)
Therefore beware of inconsistency, or you will spout every nonsense imaginable. Do I do so? It is your job to evaluate this.
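The same five-step derivation can be machine-checked. A sketch in Lean 4 (Or.elim, Or.inl, and absurd are standard library names); the comments key the code to the numbered steps above.

  -- Ex falso quodlibet: from a contradiction A ∧ ¬A, any proposition B follows.
  example (A B : Prop) (h : A ∧ ¬A) : B :=
    Or.elim (Or.inl h.1 : A ∨ B)    -- steps (2) and (4): from A, infer A ∨ B
      (fun a => absurd a h.2)       -- steps (3) and (5): ¬A rules out the A branch
      id                            -- so the B branch must hold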

Definition

A definition is an Axiom that asserts substitutability of two expressions. It is really a domain-specific rule of inference; given a definition A ≝ B, and an expression ...A..., you may infer ...B.... A definition is like a perspective. If you look at something from a certain perspective, such as using a different label for that thing, which might free you from non-essential connotations and let you see a deeper insight, maybe you can have a simplified understanding of things which could be complicated without that definition.

For example, using polar coordinates, radius R and angle θ, is a different way of thinking about the same geometrical things as using Cartesian, rectilinear coordinates x and y. You can define R, θ as $$ R ≝ \sqrt{x^2+y^2}; \qquad θ ≝ \arctan(y/x) $$ Then for some purposes everything will be simpler, like understanding systems of gears, bike gears or automobile transmissions for example, where rotation is more important than translation in the X or Y dimension.
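A quick sketch of that definition in Python (using atan2, which handles the quadrant bookkeeping a bare arctangent misses):

  from math import hypot, atan2, cos, sin

  def to_polar(x, y):
      """R ≝ sqrt(x^2 + y^2), θ ≝ arctan(y/x): the same point, renamed."""
      return hypot(x, y), atan2(y, x)

  R, theta = to_polar(3.0, 4.0)            # R == 5.0
  x, y = R * cos(theta), R * sin(theta)    # and back again: (3.0, 4.0)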

Often a definition gives a simple name to a complex configuration. This will give you mental rest and an easier time of representing that configuration as part of even larger and more complex configurations. Chomsky refers to this as "Merge" when used on the fly in the creating and understanding of complicated sentences. "The pitcher who blah blah blah pitched to the batter who blah blah blah." There can be a lot of detail in the blah blah blah part, but you get to reason with your limited mental capacity at different levels: first by submerging the details in the higher-level relationship, and once you have that squared away, you can descend into the details; and then all the things that were said in the complicated sentence, you can have thought about in some comprehensive and easy order, so that you can end up thinking very complicated thoughts after all. Definitions provide this service in a general and reusable way.

Definitions can be bad in two senses. First, in the context of a set of axioms, a given definition might produce inconsistency in reasoning, which would throw the whole system in the garbage; so definitions, like axioms, should be carefully constructed to prevent inconsistency. Second, they may not be useful, because they don't have traction in the current conceptual world, or the real world; they might not make things simpler after all. "Witchcraft is defined as magic carried out by a witch or warlock" might make the world more complicated after all, and might not be that useful when you think about it and draw out what it means when combined with everything else you think you know. But some might enjoy imagining imaginary worlds in which an otherwise useless, tractionless, world-complicating definition provides a key, titillating, even merely fun perspective within that imaginary world.

I'm saying that a definition cannot be wrong or false. It is just a perspective in the universe of all conceivable worlds. It might be a bad definition, producing inconsistency or complication rather than traction, simplicity, and insight, but if you want to take that perspective, who will deny your rights to self-contradiction, to imagination, to superstitious bullshittery? On the other hand, we value useful definitions, which are consistent with the rest of what we know, and help us to reason about things.

Concept, Contrast, or Category

It's time to assert some axioms and definitions so we know what we're talking about in our world of ideas, and whether our ideas connect to the world or not. We will poke at this painful weird idea, the difference that makes a difference, the schema of things which we use to infer the reality of some thing, characteristic, or process.

The approach, ontology, and epistemology, here, is scientific and structuralist.

Let's try to assume nothing but the concept of difference, which if you can put your mind into a structuralist framework is itself a difference, specifically the difference between sameness and difference. For this concept to be real and reliable requires only the (2) reliable (1) observation of (3) differences. H.

Let me define (1) observation as some interaction with the world with some particular and detectable result, which for a learning organism might be possible to represent in some way. It's pretty general, anything can be an observation. I'll give you a longer paragraph on Observations below.

(2) Reliability is the curious empirical finding that we find the same things over and over again. Look for a red rose, you might find more than one. The world happens to be full of things that can be considered to some practical degree as more or less the same. Once we find more than one of them, that is, we find at least two of them, then we have already established some kind of sameness, otherwise the two could not be counted together. Given two same-enough's (we can call them "sames"), we can keep on going, keep on finding sames again and again, and this is what we call counting. It's all based on sameness, from which we derive repetition of sameness, from which we further generally derive this concept of the reliability of observation.

(3) Finally, difference. (Let me refer you to a fun Essay on Two's -- a difference asserts two sides of a categorization. H!) Nothing is added to the observation of a difference when we say there is a category there. The observed difference, if it is a difference that makes a difference, becomes food for theory: it can be put into language, or added into a cognitivist theory of lots of things, or called real, by asserting that some true category really exists that underlies the difference. But it's only a difference, nothing more or less, except that it's a difference that makes a difference, which is how we know it is really there, and for shorthand we think of that first difference as two categories in opposition, or one category and its negation or opposite, and the second difference as the evidence for the reality of the first. If we had in our hands a difference which failed to make any difference, then even if we believed in it, we couldn't persuade anyone that it is actually there. Nor would it be actually useful, because it brings no further reality-predicting knowledge with it beyond the mere pleasant fact of your possibly playful enjoyment of the category you seem to be pointlessly imposing upon the world — because it doesn’t make any difference. So it's not just a difference (because that might be imaginary), but a difference that makes a difference -- so that we can prove it is real.

May I try to reduce this to a logical hierarchy of conceptual dependency? It would be nice to have, like, axioms, for this way of thinking, and then be able to derive results from those basics. Like, can we derive difference from sameness? Reliability from anything else? Observation from anything else? Is one of these more fundamental than the others?

I'm finding this to be difficult. Difference, for example, is mere failure of sameness. We might think difference is the ground out of which sameness arises, since being able to categorize and count would seem to be miracles of cognitive systematization evolved late in the evolution of organisms. Starting ignorant of everything, fresh of mind, without traction on the world, incapable, we find things all seem different. Without clarity, we lack proper categorization; every new thing is not the same as its predecessor, and if we cannot find a way in which they are the same, then this is what we call different. It's like being on drugs, observing the flow of experience without a conceptual structure imposed or available. Or perhaps, like a wide-eyed learning infant. As we learn, we go from difference everywhere, to more samenesses, as we gain traction on life.

Or perhaps sameness is the ground out of which difference arises, considering that every process amounts to a categorization machine, which categorizes its unknown-variability input sequence into those inputs (or constellations thereof) that trigger the process and those which do not. Well, indeed quantum physics, physics, chemistry, biochemistry, cellular biology, organismal biology, species-level environmental reactions, even Machiavellian politics, wherever there is predictability, everything we ever thought about in the natural universe, all and each, have this quality of essentially carrying out categorization on inputs. Everything that does anything, as a matter of reliable knowledge about that thing, can be seen as a categorizing-as-same machine; that's what it means to know what it does, namely that given this, it does that. So categorizing-as-same would seem to be not just widespread but perhaps the fundamental idea here, in this world where stuff both does and does not happen. That's about all we need for a starting point.

But no, sadly, for same and different are themselves two sides of a coin. When you say same, you mean not different, and when you say different, you mean not same, and to apply either concept in your thinking you also imply and require, co-present in your thought, the other concept. This is what linguists call a "contrast", or more generally what we call structuralist thinking, which is, as structuralists think, all thinking; and indeed, as structuralists, we think all thinkers are structuralist thinkers.

(Once in, you can't get out. But it's a weird thing to contemplate, since we live in complex webs of category, and we don't always see the many contrasts we apply in the quick-and-easy thoughts we have about our complex worlds. Myself, I can hardly do it, because I think a thing is itself, not itself-in-contrast-to-others. I stare at the thing, and there it is, not all the other stuff that it isn't, so I'm thinking the world is stuff, not contrasts of stuff with different stuff. Yet all the stuff has its utility, functionality, graspability, even its texture and color and unique position and count, in a space of contrasts with other utilities, functionalities, textures, colors, positions, etc., and the only way we can talk about it and use it is within the conceptual web of these contrasts. So after 7 years in Philadelphia doing a whole PhD in linguistics, which is the field that invented the concept of structuralism, I'm in. Come on in, the water's warm!)

That's the point here, really; it's under the whole enterprise, and we build the whole enterprise out of it. We talk about categories and things and processes of various types but we really mean differences that make a difference, and those come in pairs, like same and different.

So I admit my failure: I submit. For full ontological generality, I would have liked to derive the idea of a difference that makes a difference -- which is the fundamental structuralist thought, from which all others derive -- from the idea of difference itself. But difference is itself a structuralist concept, that itself relies upon a certain ultra-generic difference, the difference between same and different, which makes a difference in our thinking about categories. So it isn't ALONE at the bottom of the hierarchy of ideas. Because if we can't have BOTH in AND out of the category, then we are already out of luck. To have both of those, we need both the idea of same, which is the relationship between observations that are together both in or both out of the category, and the relationship of difference, which is the relationship between one observation that is in and another that is out, of the category, and which we generalize to the relationship between the one side and the other, of the two categories we use to label this difference that makes a difference.

Are we getting close here? Perhaps I could, finally, claim that the fundamental idea is that of "category", but that presumes all these ideas of sameness and difference and reliability, which is approximately the same as sameness and difference, except reliably so, which is what we are demanding of our conceptual universe here.

So maybe category is the basic idea. Every system in science can be considered a categorization engine, categorizing its inputs into triggering and not-triggering, so let's say that the basic idea is the category concept. Oh but no! In any given case, underneath that, actually, is whatever system we are thinking about; and how it works and what it does and what it requires as inputs, all that is a more detailed level of understanding of the system, which we ought to know before we say we understand it at all, much less as a categorization system for accepting and rejecting inputs as seen in whether it triggers and carries out whatever it does. So we can hardly think of any actual practical category without already having a lot of knowledge or conceptual machinery in place.

But I must admit that reliability is underneath even that. Without reliable outcome registration, we are smoking dope. None of those underlying details can be seen or understood without reliable observation of them and what they do. It is a miracle of science that outcomes happen to be observed reliably, which is to say, as sometimes the same and sometimes different.

Oh dear, we find ourselves in a circle again!

Okay, Uncle, I give up, and therefore I will say we need this entire little circle of mutually contrasting dependencies, and a relationship with some kind of presumable reality which gives traction.

The Four, Simultaneously, Mutually-Defined Axioms or Elements of Contrast are: "observation", "reliable", "same" and "different".

Each of the four elements is defined in terms of the others, so we need them all. Observations are what reliably call something same or different. Reliability is calling the same kind of observations same and the different ones different (shall I circularly say, reliably? or, only slightly less circularly, repeatedly the same way?). Same is defined as not different, except that furthermore we require it to be revealed as not-different in actual observations, and it had better be reliably so or we will ignore it. And different is defined as not same, except that that also is revealed in observations, which if not reliable don't justify us trustworthily calling it different. Yup, we need them all as our axiomatic, conceptual foundation.

But now we can start building. Finally!

Definition: when we find something that fits this circle and has this traction, when we encounter the four elements, then we impose the terminology of "contrast" or "category" thereonto.

We call it categorization, when we reliably observe sames and differents, whatever it is we are observing. We pick, perhaps arbitrarily, a reference exemplar of our observations and call it In the category; then all the ones that are the same as it are In the category, and all the ones that are different are Out of the category. We could (0) label them without a memorable or distinctive category label, just as In and Out; or (1) use one category label and say the sames are Category and the others are Not-Category; or (2) use two category labels, each defined automatically in terms of the other (in which case we call it two categories, even though it's only one thing, one conceptual thing, a single difference; in linguistics we call it a Contrast, unless it makes no difference; and it is the basic, basic thing); or finally (n) use any number of synonymous labels for either of the two categories-in-opposition, including Not X for its opposite. This is all just labelling, and adds no information to the knowledge we actually have, which is that we can reliably compare stuff to some kind of reference and thereby observe sames and differents.

Second, we call it counting, when we keep track of the outcomes of our reliable observations, and as we repeat this we put them into little piles, which we call numbers.

Third, we can make multiple categorizations at once. From reliable observations of same and different done in two ways, we can build a four-cell table. Here:

In-In     In-Out
Out-In    Out-Out

With multiple categorization, we have two differences, each difference is represented as the difference between two columns, or between two rows, in our four-cell table, and we have both a pair of rows, representing one difference, and a pair of columns, representing the other difference. And if we keep track of multiply-categorized observations by counting, then we might empirically find that the two rows have roughly the same proportions, and that the two columns have roughly the same proportions, in the situation that neither difference makes any difference to the other difference.

We could turn this into full-blown math and statistics, which is what the great R. A. Fisher did in coming up with Fisher's Exact test, and which Tom Veatch implemented for your ease of use here-or-anywhere, and discussed at length, in his (my) one-page web app TeaLady.

You already got the idea.

If you want to go further, think about proving the opposite. Start by assuming a given difference makes no difference. Then see how very improbable all your counts are, if the difference really makes no difference. Fisher takes this and says: the same proportionality between counts in matching rows or matching columns is what you would expect if the difference makes no difference. With this, he defined the term Null Hypothesis. Because after all, if there is no difference between columns, or between rows, then the proportions should be the same, at least in underlying probability. Then, assuming the Null Hypothesis, the row and column proportions are the current best estimates of the probability of any new observation being in one vs another. Just sum up on the margins of the table and use column or row totals, versus the whole-table sum, as your estimate of the probability of being in that column or row versus the other.

Thus calculating the probability of any column or row (or cell), you can combine the possibilities using the counting rules of combinatorics (or let the computer calculate) to get the total probability of your more-or-less skewed, actually-observed distribution of counts in the four cell table.

The observed counts might occur with extremely low probability under the Null Hypothesis if the difference actually makes a difference, so that the column proportions were different in the different rows (or vice versa).

TeaLady calculates it for you with those assumptions. For an exercise, just click on it until you get 0, 4, 4, 0 in the four cells, and you'll see that 8 observations with zero counts on the diagonal can give you a probability under 2% under the hypothesis that the two rows are equal (so that an observation lands in either row with probability 1/2). Since there are zeroes on the diagonal, the rows are not equal, which implies the Null Hypothesis is either wrong, or right but observed at a likelihood of <2%.
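If you would rather check the arithmetic than click, scipy ships the same computation (a sketch assuming scipy; TeaLady's exact conventions may differ in detail):

  from scipy.stats import fisher_exact

  table = [[0, 4],
           [4, 0]]    # the zero-diagonal exercise from the text

  _, p_two_sided = fisher_exact(table)                      # ~0.029
  _, p_one_sided = fisher_exact(table, alternative='less')  # ~0.014, under 2%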

It's a great idea to learn the math of Fisher's Exact Test, so please read Tea Lady, read Wikipedia, and read your favorite statistics book.

And please accept my personal encouragement to venture forth and observe the world using TeaLady. Just watch and click, making two simultaneous categorizations of things that you see, maybe with a hypothesis that, for example, men wear masks less than women during Covid, or whatever you like. (I did that once, and found it to be true with P<0.01, but not vastly out of proportion; it took 200 observations before it got to that level of significance.)

Also, 5% seems a little weak to me, I would say a 5% result only justifies further work to see if it turns out to be replicable, or more data collection to see if adding more doesn't destroy the result, which after all could occur in random data 5% of the time. If the cost of data is low, then get thee a 2% result, or punt.

Now: Fisher Exact is nothing but the structuralist method applied to four cell table counts. We saw that 0 4 4 0 is enough for statistical significance below 2%.

Well, in linguistics, we like to think we have zeroes on the diagonal, and that we could easily put millions of instances on the opposite non-zero diagonal, if we wanted. In practice, we just consider whether we could, and if we think so, we stop there. Linguists are maybe naïve, easily persuaded, self-deceived that their differences could populate a four cell table with enormous numbers but zeroes on the diagonal. But the extreme and universal, if not perfect, conformity of language users who have to after all learn the same language and speak so similarly that they can understand each other, means that maybe 100 million native speakers know mass versus count nouns, and how to build prepositional phrases (i.e., with prepositions, and not postpositions), and that suffix -s marks singular verbs and plural nouns. Any one native speaker has a ton of knowledge in their single little head that is shared with the whole speech community, so you can by default take their intuitions about whether they do some thing, or not, as representative. If you want to demonstrate your credentials as especially empiricist, you can make recordings, and transcribe them, and count up examples, but that's pesky work when you're a theoretician, so most don't. They like to move fast. On the other hand, my own thesis did, but it helped that I learned how to leverage computer-assisted labwork to make lots of measurements quickly. We all like to move quickly. And, if challenged, a theoretical linguist can come up with a ton of data to support any given assertion. Theoretically. We are so confident about it, that we mostly skip the counting step, and move on after giving an example or two.

Next, from Fisher and the noisy tables, to linguistics and the zero-diagonal tables, we now proceed to the general conceptual case, which might be more or less noisy.

Our minds like to think of things and processes, nouns and verbs, so we classify categories as categories of things or of processes, elements and predicates, values and functions, but our epistemology is for the moment blind to this thing/process difference. That will emerge later, maybe. For now, let us be happy we can invent and empirically justify any old kind of category.

Category

A category is a kind of definition which gives a pair of labels to a real process or phenomenon. Categories exist both in reality and in conceptuality. We think of the latter, but the former is even more solid.

Whereas in Electrical Engineering, a "system" is defined as an input/output mapping, or a thing that takes an input to an output; in physical reality a "process" is a thing which has some limited range of triggering inputs and which produces or leads to a consistent output (similar to a system but with a classifier in front of it).

A process can therefore be thought of as a category which lives in the space of actual reality. It classifies all the inputs that might come to it into two groups, those which trigger it and those which do not. Whatever you may call it in human words, the thing constitutes a categorizing system. As it can and does categorize its range of inputs, I'd like to call it a category.

Now humans and other thinkers are able to respond to a range of inputs in discrete ways; we classify them into different categories just like a physical process does by reacting differently to what's in versus what's out of the given category, but also by *thinking* of the input as belonging or not belonging to a conceptual category. The latter is a way of reacting differently, just like a physical process, so I'm calling this all the same, with the same label, namely "category".

A Category is quite like a Definition but instead of defining one expression like a label as (equivalent to) another expression, in human language the definition of a category is the perhaps physical, perhaps conceptual process that distinguishes what is in it from what is out of it. It's like a definition of a set by a process or rule (generally, a rule followed by a process) rather than an enumeration (which is not actually different, since a process could carry out a mere enumeration). Any arbitrary set S, then, is:

S = { s | s triggers the process }
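In code, a process-as-category is nothing but a predicate and the set it induces; a minimal sketch (the threshold process is hypothetical):

  def triggers(s):
      """Stand-in for a real process: it reacts to inputs above a threshold."""
      return s > 0.5

  universe = [0.1, 0.4, 0.6, 0.9]
  S = {s for s in universe if triggers(s)}           # S = { s | s triggers the process }
  not_S = {s for s in universe if not triggers(s)}   # the other side of the contrast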

Now categories can be real and correct, or real but skewed, or simple bullshit. Reality, the space of what is actually out there and real, which we believe is there (by faith; by induction; by iterated success over, in some cases, evolutionary time), has actual processes in it which are consistent, repeatable, and, how shall I say it, interesting. We here have various ideas, and as scientists we want to connect our ideas to reality. We could use the overwhelmingly persuasive structuralist method, or a small-dataset-bound statistical method like Fisher's Exact Test, to discover in our scientific-target reality that some difference is real, that it is a difference that makes a difference.

Our scientific method links this kind of claim (that a certain category exists) to observable reality using either the statistics collected or the overwhelming vastness of linguistic data to bypass the statistics and just assert that the difference in reality makes a difference. When you have a difference that makes an actual difference, you have a category.

Every category has two sides: what is in the category, and what isn't. So implicitly any single category provides two labels, for which we might or might not use P and Not P; instead we might say P and Q, while implicitly understanding or defining each as Not the other, whichever one you start with. It is the undergraduate, non-major linguistics student who thinks category labels come in units of one; they actually come in units of two, which you realize and learn when you try to discover any new category in language or any other reality. To establish a category, you have to find a difference that makes a difference, and then you suddenly have two categories, each the mirror of the other.

So you might label the difference that makes a difference, or define it, as "in" vs. "out", or the opposite, out vs. in; it doesn't matter. When you put any given label out there on one side, then "not"-that-label applies to the other side, and then you have a cognitively usable system that connects with reality.
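A minimal sketch of the two-for-one, with a made-up universe of cases:

    # One difference, two labels: defining the "in" side gives you the
    # "out" side for free, as the complement within the universe.
    universe = {"cat", "dog", "tree", "rock"}
    alive = {"cat", "dog", "tree"}   # hypothetical category: the "in" side
    not_alive = universe - alive     # the other label, automatically
    print(not_alive)                 # {'rock'}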

Tadaa!!

By whichever method, you have proven it is not simple bullshit, but something real. The label "magic", for example, might be expected to fail this test.

Still, a real category might be skewed, might be incorrect. In linguistics we like four-celled tables with zeroes on the diagonal; then there are no exceptions and your category is tidy, trustworthy, believable, and safe. If you can actually think of the whole range of all possible inputs, and your conceptual labelling process captures the applies/doesn't-apply categorization process perfectly, then you're done: your category is real and correct, and given the arbitrariness of labels, you can label it anything you like. But if there are exceptions, things that your labelling doesn't put in the right box, you might have a statistically sound result that is still skewed more or less away from the real or true underlying category.

For example, you could define "up" as the half-space determined by the point at a ladder's base and the direction vector of its climbing centerline (that is, from that base, a point is "up" if it lies within 90 degrees of the climbing vector), and get a statistically impressive finding that your category of "up" correctly separates the air from the ground, mostly, but still you get a lot of dirt in your "up" category and a lot of air in your "not-up" category. That would be a real but skewed category.
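Here is a minimal sketch of that skewed category, with the half-space test written as a dot product; the leaning-ladder direction and the test points are made up.

    # A point p is "up" relative to the ladder's base b if it lies within
    # 90 degrees of the climbing direction u, i.e. if dot(p - b, u) >= 0.
    def is_up(p, b, u):
        diff = [pi - bi for pi, bi in zip(p, b)]
        return sum(di * ui for di, ui in zip(diff, u)) >= 0

    base = (0.0, 0.0, 0.0)
    climb = (0.3, 0.0, 1.0)  # a leaning ladder: hypothetical direction
    print(is_up((10.0, 0.0, -1.0), base, climb))  # True: underground dirt called "up"
    print(is_up((-10.0, 0.0, 2.0), base, climb))  # False: open air called "not-up"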

When you shuffle around and finally settle on the right definition for your category, when you correctly understand the process of differentiating what's in it from what's not, in this case up from down, then (in a flat place, with the ladder vertical) all the air will be above all the dirt, with any exceptions having a separate story. Then you've got a better category: you've moved from a conceptual category that was real in some sense but skewed, to one which is real and more correct. Then we are happy.

That's the empirical aspect of the job of the (cognitive) scientist: connecting conceptual categories with reality. The logical aspect takes the category concept and draws all kinds of conclusions which are mere tautological re-expressions (according to me) of the original category. "Not up" is "down" (definition). "Not down" is "up" (tautology). Mines can go deep (down). H. Etc. The space of expression is wide, certainly countably infinite, for any single category in this world.

Observation

An observation is, let's say, an observed actual event reduced to a linguistic description or characterization of that event. Observations transmute the rich sensory reality experienced by a perceiving observer into one or another limited classification, perhaps in multiple categories. An observation becomes data in the abstract, but it is tied to reality through the sensory and classification capabilities of the observer. Here's what's new: statements about observations might be true or false. In a world of tautological re-expression of definitions and undeniable axioms, nothing can be false. But observational statements might be true or false. We hope we can trust an observer, and for the time being we assume the observation is true as stated. But the observer could have lied, and it could in principle have been otherwise: the event observed might have occurred differently for any reason in or out of the screen of our knowledge or recording. Reality being the mystery that it is, observations could be wrong. The world of observation is the world of empirical truth and falsity.

Tautology

But there is another kind of truth, which is the undeniable, non-empirical truth of tautology. When a tautology is asserted, we say: Of course, it cannot be otherwise. Because P = P no matter what P is, we are persuaded. If it is a skewed or unreal category, your reasoning is garbage in, garbage out. But if the categories are real and correct, then the logical process of tautological reasoning keeps things solid. Why? Because tautology doesn't say anything. It adds nothing, so it doesn't introduce any possible errors either. It is hard, solid; it is logic, the Queen. It rules.

For a child who doesn't yet have conservation of number, 8 is not 8; sometimes it is 12. Just so for Dad's dogs, treats are numbered (and complained about) as 0 versus more-than-0: if Dad's other dog has 3 and Ralph has only 2, there is no complaint, but as soon as Ralph has 0 and Princess has 1 (or more), it's time for a fuss. So we constrain ourselves to the knowledge capable of being known by the particular knower, within their capacities of knowledge, and we still say that P = P, once we have the right category set (where 8, for the child without conservation of number, is replaced by >3, or whatever they are able to correctly reason with). In this way, definitions, for those trying to understand us, must be within the cognitive capacities of the audience.
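A minimal sketch of reasoning within a knower's category set; the dogs are real, the code is hypothetical.

    # Ralph's number system for treats: just "none" versus "some".
    def ralph_category(n):
        return "none" if n == 0 else "some"

    print(ralph_category(2) == ralph_category(3))  # True: 2 vs 3, no complaint
    print(ralph_category(0) == ralph_category(1))  # False: 0 vs 1 makes a fuss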

Modulo this, tautology is indeed always true. It is not exactly empirical truth, because it doesn't come from particular observation. But it has the solidity, the satisfaction of trustworthy persuasion, when we see that some communicated essential idea is just a different way of saying the same thing. A complex argument becomes persuasive if its steps have the force of tautology, the force of obviousness.

07/04/2022:

A proposition is a statement about the world which can be true or false.

A tautology cannot be false but may or may not usefully fit, simplify, or give useful perspective on the world.

We take the real world to be real, that is, it is what it is, if only probabilistically, and it is self-consistent.

Statements about the world can be true or false, and possibly unknown or undecidable, but if true they are not self-contradictory.

Statements about statements can be self-contradictory; they are unanchored. Hence Russell's Paradox.

Math from Logic

The reason Math seems different from Logic (that is, seems not to be pure obviousness) is proprietary information hiding. Everyone needs economic security, so they hide the information that makes their work utterly obvious. As with lawyers, doctors, and plumbers, so with mathematicians. Wikipedia reports:

Abel said famously of Carl Friedrich Gauss's writing style, "He is like the fox, who effaces his tracks in the sand with his tail." Gauss replied to him by saying, "No self-respecting architect leaves the scaffolding in place after completing his building."

But we (because you are on my team) take no BS, certainly not this BS. We don't believe anything we don't have to, and just because some smartypants hides their reasoning doesn't mean they actually own the knowledge they are keeping secret, or that they didn't get there step by step, through steps which were all undeniable obviousnesses, essentially tautological in nature.

Elsewhere I argue that spatial proprioception based on signal shaping enables integration of spatial information sources through signal addition.

Here I argue that "Usefully Having Space" is equivalent to "Having Logic", for not-even-very-intelligent organisms and other studied information-processing systems.

Once an organism has any decent functional analog of space inside its representations, it already has the capabilities required to reason with hard incontrovertible logic, not excluding geometry, numbers, set theory, and the rest of math.

Encounter a fence, or a river, or any boundary in space: is it not fully real to you when you see it in your mind's eye? Do you have to infer from the cow being on this side that it is not on the other side? Or is it immediately visible? Whenever you have the ability to represent space, you can observe boundaries that occur in space, and you can reason about stuff relative to boundaries by simply looking at what's where in (your representation of) space.

If A is to the right of B and B is to the right of C because you can see them that way, then is A to the right of C? (1) Yes, because you can see them that way too. (2) Yes, it is tautological. And (3) after humans evolved Frege, you can also symbolize these relationships and declare their transitivity, and make logic seem to be a symbolic activity, when it is actually derivative of a lower-level knowledge representation, of the perception of space.
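A minimal sketch of point (1) in code, assuming a one-dimensional stand-in for the perceived layout:

    # "Right of" read directly off a spatial representation: one coordinate
    # per object, with no symbolic axiom of transitivity anywhere in the code.
    positions = {"A": 3.0, "B": 2.0, "C": 1.0}   # hypothetical layout

    def right_of(x, y):
        return positions[x] > positions[y]

    print(right_of("A", "B"), right_of("B", "C"))  # True True
    print(right_of("A", "C"))                      # True, for free, by looking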

8/18/2023. Do you need the concepts of left and right for logic to derive from spatiality? No. For an organism to have space, to be provided with spatial representation capability, means minimally that stuff can be differentially responded to based on its different spatial encoding. A frog mind that has space is able to hit rather than miss its flying meal. Without space, it cannot differentially respond to stuff here vs. there. Space can be quite minimal; Veatch 1991 proposes axioms of Time and Space for phonological computation in the human language system; these do not rise to the level of Cartesian coordinate systems, infinity, infinitesimals, or the usual things we think of in the mathematics of space. But at the very least, the organism must be able to respond differently due to spatial encodings, otherwise it wouldn't be space at all.

My argument is that because spatial perception comes early in cognitive evolution, perhaps before the worm, logic also comes early in cognitive evolution, and is no evolutionary miracle of some late-invented symbolic-processing organ. Once organisms have evolved an internal representation of space, they can read the logic off the spatial representation of things, as Mr. Venn taught us, and their very perception of things (in space) follows and forces logic upon them.

Look at a fence: do you need anything more to think about logic, when you have things inside it and outside it? Is the (2d abstract spatial layout of a) Venn diagram a merely informal tool of some deeper logical reasoning, or is that abstract spatiality actually the foundation of organismal logical thinking? The latter; that's my point.

From a mere fence in space we certainly get categorization, as in Inside the fence versus Outside the fence, In-the-Category and Out-of-the-Category. And then we are off to the races, because we can build all kinds of reality-connected categories using Fisher's Exact Statistic where necessary, or the Structuralist method where possible, checking that each difference makes a difference in reliable observations. Then we can build up a whole world of categories, a vast hierarchical network of all knowledge, piled together out of this organismal urge to learn.

Did I promise you geometry, too? Yes. Let's just use Euclid's first postulate: two points make a line. If you are a visual organism, then one of your eyes makes a point, and anything you look at makes another point, and your evolved spatial representation ability lets you realize that some other thing could obstruct your view of the first by coming in the way, and that the first thing is obstructing things behind it. This is hardly a matter of abstract reasoning but more of low-level perception, and it carries Euclid's two points and what is important about lines, namely that a line is determined by its first two points and also carries other points in order with them.
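A minimal sketch of that perceptual geometry, in the plane for simplicity; all the coordinates are made up.

    # An obstacle blocks the view iff it lies on the segment from eye to
    # target: collinear with them, and between them in order.
    def blocks_view(eye, target, obstacle, tol=1e-9):
        (ex, ey), (tx, ty), (ox, oy) = eye, target, obstacle
        cross = (tx - ex) * (oy - ey) - (ty - ey) * (ox - ex)
        if abs(cross) > tol:
            return False                   # not on the line at all
        dot = (ox - ex) * (tx - ex) + (oy - ey) * (ty - ey)
        length2 = (tx - ex) ** 2 + (ty - ey) ** 2
        return 0 <= dot <= length2         # in order between the two points

    print(blocks_view((0, 0), (4, 0), (2, 0)))   # True: in the way
    print(blocks_view((0, 0), (4, 0), (2, 1)))   # False: off the line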

By a similar argument, the axioms of geometry, whether Euclid's or Hilbert's, fall out as biological facts deriving from spatial representation within evolved organisms. Pick one and ask me, if you remain skeptical.

I promised numbers, set theory, and the rest of math, too. I'll just give you sets and numbers, and you can do the rest.

Starting with sets: we see on the hierarchy of knowledge that they occur first in any kind of system that operates (that is, in every system), since such a system categorizes its input circumstances into those that do and those that don't trigger its operation, which operationally defines a set. An organism may not have a separate representational form for any given one, but it is operationally a set all the same.

Sets occur secondly in representational organisms, in the form of their ability to learn and use differences that make a difference, because that ability in any instance also operationally defines a set (two, actually: the ins and also the outs; and certainly we have one set whenever we have two).

When an organism can internally conceptually represent a collection without a theme or categorization process, then it has also evolved to the point of being able to reason with sets as enumerated collections. That would seem to be a late evolved ability.

Numbers, we can skip; we are already done; I refer you to Wikipedia: "[P]roperties of the natural and real numbers can be derived within set theory".

Can I bring this full circle to tautology once again?

My four-subelement axiomatic base is the idea of a real contrast, namely (1) reliable (2) observation of a (3) difference that (4) makes a difference. A real contrast justifies using it as a category, or equivalently in our dialog, calling it real.

Given this methodology that spits out contrasts or categories, we can study anything and come to understand what the elements and relations, stuff and spaces and things and processes actually are, in any domain. That makes us empirical scientists, or you might say Aristotelians.

But with any given category we are free to give it a label, and hence we can also apply tautology, and produce reasoning pathways of unarguable obviousness, which we call logic. Let C be a category; then C → C, ¬C → ¬C, and ¬¬C → C; etc. Away you go: the Queen of the Universe, Logic herself, is now your slave, and you can go anywhere -- subject to her limitations. And the pure abstraction of these logical games that you can play with any category, even unreal categories, giving you the power of logical derivation from assumptions to conclusions, irrespective of observation or reality, this power makes us also Platonists, whose world of perfect forms lives outside the actual world.
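A minimal sketch of those logical games, checking the little tautologies by brute force over truth values:

    # Verify tautologies by enumerating all truth assignments.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    def is_tautology(f, n):
        return all(f(*vals) for vals in product([False, True], repeat=n))

    print(is_tautology(lambda c: implies(c, c), 1))          # C -> C: True
    print(is_tautology(lambda c: implies(not not c, c), 1))  # not-not-C -> C: True
    print(is_tautology(lambda c: implies(c, not c), 1))      # no tautology: False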

Conclusion

I've been trying to emphasize the value and centrality of the simple insight. A simple insight comes from a useful perspective that captures a true, perhaps general, essence out of what might be a complicated forest of observations and claims, along with axioms, definitions, and intermediate steps in the proof. It's the simplicity in the end that gives the power of thinking. And this is largely achieved by steps that are tautological. You get to skip forward once you see how a step makes sense, and making sense usually means that it is equivalent to some previous step or steps, or to an understanding that you have already come to in considering the domain. Following an argument is a process of recognition of the previously known at each step, and of dismissal of the novelty and unknownness of each step, one after the other, until everything falls into place, and you discover you know what you thought you didn't know. This is the way we learn what we can really trust to be true. The tautologies, including axioms, definitions, and restatements of previous steps in the chain of reasoning, along with the observations that you trust the truth of, bring you to persuaded understanding. And in the end, if it is a useful argument, reusable later, it will conclude with a simple insight that you can hold, carry, and apply with your limited mental capacities: something that now you can use. Stand then on the shoulders of giants, and see! That's where we want to be.

What is science? Is it having statistical support for an assertion? Is it a well-designed experiment? Yes, those, true, but science is reliable, useful, that is, trustworthy knowledge: true insight. What we want is the true insight. Not the P value, or the N, or the particular statistical distribution or model or test. It's a combination of observation and logic, the true categories and what they support, namely their tautology-hard combination in axioms, definitions, rules of inference, and chains of reasoning, that brings us trustworthy insight, truth.

So I conclude by pointing out that in all kinds of areas of human knowledge, this applies. Logic is not a mere isolated and boring technical field, but a universal and encompassing system. Logic is what we use in cognitive science to prove insights which other sciences are too timid and logically unequipped to assess. If observation can be brought to bear within a system of logic on a question, more can be known than we might have thought.

I say so again in the other essays in this sequence. I apply this aggressively to evolution, to biology, to the development of intelligence and language in humans, to the hierarchy of all knowledge. In a surprising number of cases, the truth is inevitable because it is tautology-hard. What seems unknown or mere speculation, because one kind of direct observation is not available, can still yield strong, reliable conclusions, using reasoning which is unquestionable, undeniable, where simple trustworthy insight is obtained by just taking the right perspective on what we do know, and following what is obvious to its necessary conclusions, which might be far from obvious.

Mohammad, have I set you on the mountain? Have I opened vistas to you? I hope I have pulled you from boredom and technical mental strangulation in the Symbolic Logic world and given you both power and rest, given you a path of true insight.

So empowered, get yourself a biscuit and tea, or a cup of coffee, to think on it, and read on, I invite you!



Copyright © 2020-2022, Thomas C. Veatch. All rights reserved.
Modified: September 22, 2020, January 23, 2021; January 30, 2022