## Nik’s Co-Substitution (AKA Understanding Critical Pair Lemmas)


Matches[expr,patt]

Assume expr contains pattern variables e1_, e2_, ... and subexpressions sube1, sube2, ...

patt contains pattern variables p1_, p2_, ... and subexpressions subp1, subp2, ...


#### Ordinary matching


p1_->sube1

In ordinary substitution, this match is used only on the right-hand side of the substitution rule

#### Comatching


e1_->subp1

In more ornate inference rules, this match (which is just an instantiation of blanks) is applied throughout expr; then a substitution rule is used

### I.e. this is preprocessing, renaming blanks


#### Here comatching and matching lead to the same result:


expr:f[g[x_],g[x_]]

patt:g[y_]

rule:g[y_]->h[y]

For ordinary substitution, we just match up y_ -> x_ and then apply the rule locally:

f[g[x_],h[x_]]

For comatching, x_ -> y_ is applied throughout expr:

f[g[y_],g[y_]]

Then we apply the rule as before, with the same result up to renaming of blanks

#### Need an example where comatching and matching lead to different results


expr:f[g[x_]]

patt:g[a]

rule:g[a]->rhs

The ordinary pattern does not match: x_ inside expr is treated literally, and g[a] does not match g[x_]

But if we instantiate x_ -> a throughout expr, then the pattern matches:

f[g[a]]
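The distinction can be sketched in Python (an illustrative toy, not the actual `substitutionLemmas` code: applications are nested tuples, and a string ending in `_` stands for a blank):

```python
# Toy sketch of matching vs. comatching.
# Terms: atoms are strings; f[x] is the tuple ("f", x); "x_" is a blank.

def is_blank(t):
    return isinstance(t, str) and t.endswith("_")

def match(patt, expr, binds=None):
    """Ordinary one-way matching: blanks in patt bind to subterms of expr."""
    binds = dict(binds or {})
    if is_blank(patt):
        if patt in binds and binds[patt] != expr:
            return None            # inconsistent repeated blank
        binds[patt] = expr
        return binds
    if isinstance(patt, tuple) and isinstance(expr, tuple) and len(patt) == len(expr):
        for p, e in zip(patt, expr):
            binds = match(p, e, binds)
            if binds is None:
                return None
        return binds
    return binds if patt == expr else None

def subst(t, binds):
    """Apply a binding of blanks throughout a term."""
    if is_blank(t):
        return binds.get(t, t)
    if isinstance(t, tuple):
        return tuple(subst(c, binds) for c in t)
    return t

# expr: f[g[x_]]   patt: g[a]
expr = ("f", ("g", "x_"))
patt = ("g", "a")

# Ordinary matching fails: the blank x_ inside expr is treated as a literal.
print(match(patt, expr[1]))        # None

# Comatching: instantiate expr's blanks so that patt matches literally,
# i.e. matching with the roles swapped (expr's subterm acts as the pattern).
co = match(expr[1], patt)          # {"x_": "a"}
print(subst(expr, co))             # ('f', ('g', 'a')) -- now the rule applies
```

So the comatch x_ -> a "creates structure" in expr that the ordinary rule can then rewrite.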

## Actual Cases


In[]:=

substitutionLemmas[x_[x_[x_]],x_[x_]->x_]

Out[]=

{{1},1}->x_[x_]

In[]:=

coSubstitutionLemmas[x_[x_[x_]],x_[x_]->x_]

Out[]=

{{0},1}->x[x_[x_][x_[x_]]],{{1,0},1}->x_[x_][x[x_[x_]]],{{1,1},1}->x_[x_][x_[x_][x]],{{1},1}->x_[x]

In[]:=

substitutionLemmas[f[x_],f[x_]->g[x_]]

Out[]=

{{},1}->g[x_]

In[]:=

substitutionLemmas[f[x_],f[y_]->g[y_]]

Out[]=

{{},1}->g[x_]

In[]:=

coSubstitutionLemmas[f[x_],f[y_]->g[y_]]

Out[]=

{{1},1}->f[g[y$47383_]],{{},1}->g[y$47383_]

In[]:=

coSubstitutionLemmas[f[x_],h[y_]->g[y_]]

Out[]=

{{1},1}->f[g[y$47385_]]

x_->h[y_]

We can choose to say that x_ is h[y_], just because it is an arbitrary pattern anyway; but having done that, we have created structure that can be matched by the substitution rule

Since x_ can stand for anything, an instance of “anything” is h[y_]

f[h[y_]]

and now our substitution structurally applies....

### In terms of trees


In[]:=

Tree[f,{x_}]

Out[]=

In[]:=

Tree[f,{Tree[h,{y_}]}]

Out[]=

#### What goes beyond substitution is that the special-casing is applied globally to an expression


In[]:=

With[{g=NestGraph[Values[substitutionLemmas[#,x_->x_[x_]]]&,{x_->x_[x_]},2,GraphLayout->"LayeredDigraphEmbedding",AspectRatio->1/2]},Graph[g,VertexLabels->(#->(ResourceFunction["CombinatorExpressionGraph"][#,"ShowVertexLabels"->False,ImageSize->20]&/@(#/.Verbatim[x_]->o))&/@VertexList[g])]]

Out[]=

In[]:=

FullForm[x_]

Out[]//FullForm=

Pattern[x,Blank[]]

In[]:=

With[{g=NestGraph[Values[coSubstitutionLemmas[#,x_->x_[x_]]]&,{x_->x_[x_]},2,GraphLayout->"LayeredDigraphEmbedding",AspectRatio->1/2]},Graph[g,VertexLabels->(#->(ResourceFunction["CombinatorExpressionGraph"][#,"ShowVertexLabels"->False,ImageSize->20]&/@(#/._Pattern->o))&/@VertexList[g])]]

Out[]=

Cannot merge without variable canonicalization....
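One way to get mergeability is to rename blanks canonically, in order of first occurrence. A minimal Python sketch (illustrative only; terms are nested tuples, strings ending in `_` are blanks, and the canonical names `v1_, v2_, ...` are an arbitrary choice):

```python
# Canonicalize blank names in first-occurrence (leftmost-outermost) order,
# so terms that differ only in the names of their blanks become identical.

def canonicalize(t, mapping=None):
    mapping = {} if mapping is None else mapping
    if isinstance(t, str) and t.endswith("_"):
        if t not in mapping:
            mapping[t] = f"v{len(mapping) + 1}_"   # fresh canonical name
        return mapping[t]
    if isinstance(t, tuple):
        # mapping is shared across children, preserving repeated-blank identity
        return tuple(canonicalize(c, mapping) for c in t)
    return t

# f[g[y$47383_]] and f[g[x_]] merge after canonicalization:
a = ("f", ("g", "y$47383_"))
b = ("f", ("g", "x_"))
print(canonicalize(a) == canonicalize(b))   # True
```

With this preprocessing, the generated lemmas with module-renamed variables like y$47383_ collapse onto their x_/y_ counterparts, and the graphs above can be merged.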

Co-substitution is a generalization of substitution....

# Building Everything from Combinators (Etc)


We want to build the “true statements” multiway graph of everything that is true about combinators....

Somewhere in that graph is what we can interpret as 2+2=4

Proof:

#### Workflow: start purely with the combinator relations .... then follow LoIs


#### Then we get equivalences that we can identify as being mathematics (e.g. 2+2 = 4)


E.g. we should be able to get a proof that 2+2=4 in terms of the s, k axioms

In fact, that proof (in this case) is just the computation

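That computation can be sketched directly in Python, using the standard S, K reduction rules (S x y z → x z (y z); K x y → x) and the usual Church-numeral combinator encodings (the encodings are textbook assumptions, not taken from this document):

```python
# Sketch: 2+2=4 as a pure S,K combinator computation.
# Terms: atoms are strings; a 2-tuple (f, x) is the application f[x].

def step(t):
    """One leftmost-outermost reduction step; returns (term, changed?)."""
    if isinstance(t, tuple):
        f, x = t
        if isinstance(f, tuple) and f[0] == "K":               # K a b -> a
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == "S":
            a, b, c = f[0][1], f[1], x                         # S a b c -> a c (b c)
            return ((a, c), (b, c)), True
        f2, ch = step(f)
        if ch:
            return (f2, x), True
        x2, ch = step(x)
        if ch:
            return (f, x2), True
    return t, False

def nf(t, limit=100000):
    """Reduce to normal form (normal order), or fail after `limit` steps."""
    for _ in range(limit):
        t, ch = step(t)
        if not ch:
            return t
    raise RuntimeError("no normal form found within limit")

I = (("S", "K"), "K")                    # I = SKK
ZERO = ("K", I)                          # 0 f x = x
SUCC = ("S", (("S", ("K", "S")), "K"))   # succ n f x = f (n f x)

def num(n):
    t = ZERO
    for _ in range(n):
        t = (SUCC, t)
    return t

# Church numeral m applied to succ and n computes m+n, so "2+2" is:
two_plus_two = ((num(2), SUCC), num(2))

# Applied to free symbols f and x, both sides reduce to f[f[f[f[x]]]]:
lhs = nf(((two_plus_two, "f"), "x"))
rhs = nf(((num(4), "f"), "x"))
print(lhs == rhs)   # True
```

Here the "proof" of 2+2=4 is nothing but the reduction trace produced by nf: a path through the multiway graph of S, K rewrites.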

#### Starting with the S, K rules, we can derive relations for any axiomatic system


What gives this content (like the ruliad) is that given a particular identification of meanings, we can derive all statements that involve those subtrees.

Meanings / operators ⟺ certain subtrees

[We can “observe” certain combinator relations, by reducing them to “understandable” form ... ]


E.g. [a combinator relation, shown as a graphic] is [its reduced, "understandable" form]

Then we should also generate: [further such relations]

### There will be certain definable things that survive “proof evolution”


#### For observers like us, we need identical objects to recur


It would be OK to get different objects if we were non-persistent observers

[If we are to be consistent/persistent observers, with bounded model-finding/computation capabilities, we need consistency in our decoder function that identifies Plus from the soup of combinators]


#### To establish that a particular combinator lump “is” Plus, show that the Peano axioms hold for it ... though this isn’t quite enough


The theorems of all possible non-standard arithmetics will also show up in the combinator soup

### High-level mathematics is able to make (“reducible”) jumps between recognizable constructs in the ocean of statements


#### Recognizable statements are like tracer particles in fluid mechanics


You can describe their motion in terms of fluid behavior, without going down to the molecular level

### Two cases:

(1) explicitly introduce new “symbols” (e.g. Plus) by having statements that reference them; or

(2) have these things “arise naturally” and then identify certain lumps as “Plus”


### Basic claim of the possibility of high-level mathematics: we can jump from high-level construct to another, without having to go down to the machine code


[With enough high-level jumps, even a small error can eventually derail you...]

### The fact that we can define standard math uniformly in terms of something computational ... is what ∃ WL proves


#### The “lumps” that represent identifiable math ⟺ NKS study of possible axiom systems


### The Pythagorean theorem at machine level is really the Pythagorean “theorem cloud”


Because humans don’t care about e.g. the precise axiomatic structure of the reals in defining the Pythagorean theorem

### Quantum mathematics: do you have to go below the high-level version?


### The WL evaluator is a very thin set inside the ruliad...
