In[]:=
PacletInstall[CloudObject["https://wolfr.am/10Vhe8A43"],ForceVersionInstall->True]
Out[]=
PacletObject
Name: Wolfram/Metamathematics
Version: 1.0.0

In[]:=
<<Wolfram`Metamathematics`

“Rotate” the proof object graph

Display “what got transformed” as part of the event

In the TEG for metamathematics ... the tokens are complete math statements

Vs. the WPP: tokens are terms...
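A minimal sketch of the contrast, using hypothetical symbols (nothing from the paclet is assumed):

```wolfram
(* Sketch: in the metamathematical TEG, a token is a whole statement,
   here represented as a two-way rule between terms: *)
mathToken = a <-> b∘a;   (* TwoWayRule[a, b∘a] *)

(* In the WPP-style picture, a token is just a term (an expression fragment): *)
wppToken = b∘a;

Head[mathToken]   (* TwoWayRule *)
```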
In[]:=
FindEquationalProof[a∘(b∘(c∘d))==e∘d,ForAll[{a,b},a==b∘a]]
Out[]=
ProofObject
Logic: EquationalLogic
Steps: 6
Theorem: a∘(b∘(c∘d))==e∘d

In[]:=
ProofObjectToTokenEventGraph[%]
Out[]=
In[]:=
EdgeList[%64]
Out[]=
{(e∘da∘(b∘(c∘d))){Event,Substitution Lemma 1},(ab∘a){Event,Substitution Lemma 1},{Event,Substitution Lemma 1}(e∘da∘(b∘d)),(ab∘a){Event,Substitution Lemma 2},(e∘da∘(b∘d)){Event,Substitution Lemma 2},{Event,Substitution Lemma 2}(da∘(b∘d)),(ab∘a){Event,Substitution Lemma 3},(da∘(b∘d)){Event,Substitution Lemma 3},{Event,Substitution Lemma 3}(da∘d),(ab∘a){Event,Conclusion 1},(da∘d){Event,Conclusion 1},{Event,Conclusion 1}(dd)}
In[]:=
If[First[#] =!= (a <-> b∘a), Reverse[#], #] & /@ EdgeList[%64]
Out[]=
{{Event, Substitution Lemma 1}→(e∘d↔a∘(b∘(c∘d))), (a↔b∘a)→{Event, Substitution Lemma 1}, (e∘d↔a∘(b∘d))→{Event, Substitution Lemma 1}, (a↔b∘a)→{Event, Substitution Lemma 2}, {Event, Substitution Lemma 2}→(e∘d↔a∘(b∘d)), (d↔a∘(b∘d))→{Event, Substitution Lemma 2}, (a↔b∘a)→{Event, Substitution Lemma 3}, {Event, Substitution Lemma 3}→(d↔a∘(b∘d)), (d↔a∘d)→{Event, Substitution Lemma 3}, (a↔b∘a)→{Event, Conclusion 1}, {Event, Conclusion 1}→(d↔a∘d), (d↔d)→{Event, Conclusion 1}}
In[]:=
Graph[%,Options[%64]]
Out[]=
In[]:=
VertexDelete[%70,{{"Event","Conclusion 1"}}]
Out[]=
In[]:=
With[{g=AccumulativeTokenEventGraph[{a_<->b_∘a_},2,"S","TokenLabeling"->True]},Subgraph[g,#]&/@findProof[g,{a_<->b_∘a_},a_∘(b_∘(c_∘d_))<->e_∘d_,1]]

The non-accumulative TEG is just like the path-based derivation system ... except that events can be fed by different axioms (if there are multiple axioms)
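A toy fragment, sketched here with plain strings rather than the paclet's actual TEG construction, showing parallel events fed by different axiom tokens:

```wolfram
(* Sketch: in a non-accumulative TEG, each event consumes one statement
   token plus one axiom token; with several axioms, parallel events
   can be fed by different axioms. *)
Graph[{
   "axiom 1" -> "event A", "statement s" -> "event A",
   "event A" -> "statement s1",
   "axiom 2" -> "event B", "statement s" -> "event B",
   "event B" -> "statement s2"},
  VertexLabels -> Automatic]
```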

Cut elimination [cf. inlining proofs inside proofs]

FindEquationalProof (FEP) finds a path to get to a given statement from the “big bang”

Its strategy has to do with making forward and backward entailment cones [?]
With a certain pruning [or is it ordering?]
E.g. Knuth–Bendix completion with a certain term ordering only combines certain statements
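A sketch of what such an ordering might look like, with a hypothetical `orient` helper using `LeafCount` as the term order (not the actual FEP strategy):

```wolfram
(* Orient each equation so the larger side rewrites to the smaller;
   completion then only combines statements consistent with this order. *)
termOrder[e_] := LeafCount[e]
orient[lhs_ <-> rhs_] :=
  If[termOrder[lhs] >= termOrder[rhs], lhs -> rhs, rhs -> lhs]

orient[a∘(b∘a) <-> a]   (* a∘(b∘a) -> a *)
```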