Holistic Mathematics: Experiments, Computation, and Proof: Vol 2
John Erickson, PhD
“I don’t want to belong to any club that would have me as a member.” -Groucho Marx
“A recent (published) paper had near the beginning the passage `The object of this paper is to prove (something very important).’ It transpired with great difficulty, and not till near the end, that the `object’ was an unachieved one.” -Littlewood’s Miscellany
Prologue to all 4 volumes (abridged)
This monograph is a bit weird, but maybe it’s time for a little weird, because, in my opinion, normal and conventional have clearly not worked out so well for some of the problems in math education.
There are several goals for this four volume monograph. Too many times students have asked me “How do you do research in mathematics?” and I have found that the best way, perhaps the only way, is to just do some research with them. Since that isn’t a scalable solution, my main goal is to illustrate, at a fairly elementary level, one approach to how math research can be done by providing a concrete and comprehensive, albeit vicarious, research experience which goes “from grammar school to graduate school”. I put that in quotes because I thought I made it up, but I’m apparently not the only one who says this. In order to facilitate this goal, I have sort of “let it all hang out”, so to speak; nothing gross, mind you, I just haven’t removed all the evidence of work, false approaches, incorrect ideas, and shortcomings. This gives the monograph a rough quality, but that is not intended as an excuse for actual mistakes, of which I am sure there are many.
The problem which started our investigation was an essentially randomly chosen elementary recreational discrete math problem. Someone sent me their kid’s math circle problem. Now, I make no claim to being a great mathematician and, in particular, I am not really all that great at discrete mathematics. Contrary to this being a deficit, however, this was actually an advantage for my goals, since an expert would likely know the standard machinery and be done with it. I, on the other hand, actually had to think about the problem and create some code to make sure I understood what was going on. After solving the problem I decided I would see what would happen if I just followed the mathematics to see where it led through the standard process of both generalization and specialization. In the end, I would say that the journey wound up being a seemingly random “stop and smell the roses” tour of mathematics. Speaking of the “end”, my stopping criterion, that I end up at the graduate level, was a bit problematic since it took about three years and perhaps I only ended up at the advanced undergraduate/beginning graduate level. If you’re the sort that likes to know where you are going before you begin a journey, the last result, in volume 4, is from computational and theoretical combinatorial group theory. Alexander Hulpke and I added sequence A337015 to the OEIS and proved a nice but simple theorem connecting sequences A005432 and A337015 which makes one ask how else these sequences might be related. One question I find interesting to ask is the following: if I started with the exact same problem today, would I end up three years later arriving at the same terminal result or not?
Just to be absolutely clear, there is really nothing original in my approach, which is probably most influenced by Polya, Ross, and even Moore (sans all of his racism) along with various teachers and professors I have had. I would say that it is also heavily influenced by my modest knowledge of experimental mathematics and the use of software to do calculations. I used Mathematica primarily along with the OEIS, but I also used WolframAlpha, the Inverse Symbolic Calculator, and GAP as well. There are many choices for tools, and the underlying mathematics does not depend on those choices. Here is an interesting related issue regarding the balance of math and computation. I have noticed that a lot of students who like coding neglect math these days, and would be quite happy if they never saw another equation again. I find this situation unfortunate. On the other hand, I have also been told by well-meaning, and very smart, math people that I should remove or otherwise segregate all the code in these volumes. I said to myself, “Yeah, that ain’t ever gonna happen.” Am I in love with my own code and just being stubborn, you ask? I don’t think so. Obviously, I think learning to code is important, but the primary reason I have left the code in place is that I am firmly convinced that the tendency to present the slick final product of our mathematical labors in the form of an elegant proof, without the motivations, experiments, and even incorrect understandings, is part of the reason so many students turn away from mathematics. They say to themselves, “Well, I could never do that, I better switch majors.” This is particularly true for students who are already insecure, in particular, members of underrepresented groups.
The basic process we will follow throughout the volumes is simple, and is what mathematicians do all the time: play with some mathematical problem, consider examples, run experiments, look at visualizations, notice patterns, make conjectures and try to prove them. Inevitably, you will make new definitions and generalizations and find new patterns, make new conjectures, prove new theorems, etc. Eventually you may find that you are able to make conjectures that you can’t prove. That could be because they are not true, or you simply don’t have the background knowledge needed, and that’s fine, albeit a bit frustrating.
Another goal of this monograph is to constructively criticize some trends in math education which I find deeply problematic. Basically, while I believe that lists of best teaching practices can be productively adapted by each individual instructor for their particular students, some of the confidently asserted alleged universal “research driven best practices” for teaching mathematics sound rather dubious to me and perhaps even counterproductive. I will at times freely just be giving my opinions based on my own training and experiences as a student, as a high school teacher, as a community college instructor, and as a college professor. They are just my opinions, so please feel free to disagree. In particular, I will also occasionally address all the endless talk about the importance of diversity and the reality of “uhh...yeah, not so much”. I finally concluded that it was my duty to address some of these issues, and it was cheaper than paying for therapy. For people who want to read the math or code, but don’t want to read this material, I get it, nobody wants to feel uncomfortable. Therefore, I have helpfully delineated these comments with the word Comment.
Finally, since the primary target audience for this monograph is students, now and in the future, it was important to me that I leave the comments in place and that the book be as cheap as possible. Having said that, faculty who have an interest in teaching, or in how ideas are arrived at, might find this interesting as well.
Disclaimers & Comments
◼ The opinions expressed in this document are mine and mine alone and do not represent the opinions of, nor constitute an endorsement of or by, any other person or entity mentioned in this monograph. If your name was mentioned, it was simply to properly acknowledge precedence or give credit.
◼ This is a first iteration. In addition to not having been vetted mathematically, I couldn’t talk my wife into proofreading it once she heard it was four volumes, so the writing is what it is. If she had helped me, she could have removed this statement.
◼ No claim is being made regarding the originality or importance of the results presented in this monograph. Obviously, it would have been nice if we had stumbled onto a slick proof of the Riemann Hypothesis, but that didn’t happen. I’m telling myself that it’s more about the process than the results.
◼ No claim is being made regarding the optimality of the code.
◼ Regarding the poor layout, I never figured out how to use Mathematica stylesheets or the author tools, but I intend to ask Wolfram Research for help with this in a future iteration.
Preface to Vol 2
We take our function y and go into great detail. In particular, we investigate the first differences. Then we investigate the partial sums of y. Then we investigate the growth of functions with an eye toward thinking about the growth of y.
Chapter 8 - y in detail
We will work out many details of the position function y and its first difference, so it will be a bit heavy on the algebra. This is one part where we are showing the work very much out of sequence because we know we will use the results of this section in some later sections, and I found that it broke the flow too much to stick to a chronological order.
The Position Function-Part 2
Looking back, it’s kind of hard to believe that this all started with the last one standing problem. First we will recap our definition of the position function and then we will discuss the first differences and first partial sums of y and some consequences.
Recap of our definition of y
First we figured out that all of our questions came down to an understanding of our initial position function posFunc : {0,1}^n → ℕ, for which we had two formulas, one recursive and one in closed form.
Then we realized that it was easier to consider a new function, which we also called the position function: y(n) = posFunc(Reverse(IntegerDigits(n - 1, 2))).
We take this as our definition of y. We could call y the “last one standing” function or sequence, but upon entering it into the OEIS website, we found that this is known as sequence A066194. Following a few links, this turns out to almost be the famous Gray code ordering of the natural numbers and can be obtained from the Gray code ordering by reflecting the binary digits of the Gray code over a vertical line, so we could call it the quasi Gray code. Incidentally, we are calling this permutation y because we are anticipating the use of ideas from calculus and we want to suggest the so-called “y values” of a graph. This is a decision we have come to regret in hindsight due to the use of y as a generic function name, but it is too late since too much has been written using y.
Details regarding the definition of y
Returning to the main thread, the problem with all of these formulations is that they are not directly amenable to algebraic manipulation.
Again, here is our list of positions and our key graph.
In[]:=
Table[y[k],{k,1,32}]
Out[]=
{1,2,4,3,8,7,5,6,16,15,13,14,9,10,12,11,32,31,29,30,25,26,28,27,17,18,20,19,24,23,21,22}
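For readers following along without Mathematica, here is a minimal Python sketch that reproduces this list. It relies on the observation that these values match the inverse Gray code of n - 1, plus 1. That is consistent with the Gray code connection noted above, but it is my shortcut, not the definition used in the text.

```python
def y(n):
    # Inverse Gray code of n - 1: XOR every right-shift of the value into itself.
    b = n - 1
    shift = b
    while shift:
        shift >>= 1
        b ^= shift
    return b + 1  # positions are 1-based

print([y(k) for k in range(1, 33)])
```

The loop is the standard inverse of the reflected-binary Gray code; no claim is made that this is how posFunc was originally defined.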
In[]:=
ListPlot[Table[y[k],{k,1,128}],AspectRatio->1,ImageSize->Small]
Out[]=
At this point an interesting issue comes up. It turns out that this formula for y is defined for negative values and 0!
In[]:=
Table[{k,y[k]},{k,-8,8}]
Out[]=
{{-8,15},{-7,16},{-6,6},{-5,5},{-4,7},{-3,8},{-2,3},{-1,4},{0,2},{1,1},{2,2},{3,4},{4,3},{5,8},{6,7},{7,5},{8,6}}
When I noticed this, I found it sufficiently weird that for a long time I used the following definition of y.
y[0] = 0; y[n_ /; n > 0] := posFunc[Reverse[IntegerDigits[n - 1, 2]]]
In other words, I disallowed negative values and arbitrarily declared y(0) = 0. This is usually not a good idea in mathematics, but for some time I didn’t notice anything because I simply never considered any non-positive values. Notice the following however
In[]:=
ListLinePlot[Table[{k,y[k+1]},{k,-15,15}],ImageSize->Small](*ListLinePlot looks cooler*)
Out[]=
Sweet, but a little scary looking! Apparently y shifted to the left by one is an even function! But should we really be surprised by this? Nope, since y(n + 1) = posFunc(Reverse(IntegerDigits(n, 2))) and IntegerDigits as implemented in the Wolfram Language is an even function of its first argument. It turns out it is possible to develop a closed form formula for y by considering the first differences, which we turn to now.
The first differences of y
If we want a more useful formula we will have to work a little harder. Also, the differences come up in a later section. Here are the differences in the position list. A plot is not particularly illuminating except to show visually that there are many repeated values.
In[]:=
Differences[Table[y[k],{k,1,64}]]
Out[]=
{1,2,-1,5,-1,-2,1,10,-1,-2,1,-5,1,2,-1,21,-1,-2,1,-5,1,2,-1,-10,1,2,-1,5,-1,-2,1,42,-1,-2,1,-5,1,2,-1,-10,1,2,-1,5,-1,-2,1,-21,1,2,-1,5,-1,-2,1,10,-1,-2,1,-5,1,2,-1}
We notice that the difference of our position sequence contains the subsequence {1, 2, 5, 10, 21, 42, 85, ...} and we have almost seen this sequence before. It’s our everybody standing sequence minus 1! Also note that these numbers occur at positions of the form 2^k. We can find a formula for this subsequence easily as follows. Note that the 3 in our everybody standing formula has now become a -3.
In[]:=
FindSequenceFunction[{1,2,5,10,21,42,85},n]
Out[]=
(1/6) (-3 + (-1)^(1 + n) + 2^(2 + n))
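As a quick sanity check, the closed form can be verified in a few lines of Python (a sketch; integer division stands in for the exact fraction, since the numerator is always divisible by 6):

```python
def a(n):
    # Closed form (1/6)(-3 + (-1)^(1+n) + 2^(2+n)) for 1, 2, 5, 10, 21, 42, 85, ...
    return (-3 + (-1) ** (1 + n) + 2 ** (2 + n)) // 6

print([a(n) for n in range(1, 8)])  # [1, 2, 5, 10, 21, 42, 85]
```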
The question that occurs to us now is how we might generate the entire sequence. In any case, if you stare at the list of position differences long enough, you will see that it can be constructed by taking the n-th stage of the sequence, adjoining a term from our subsequence, and then translating a negated version of the n-th stage of the sequence.
In[]:=
Manipulate[Fold[Join[#1, {#2}, -#1] &, {1}, Table[(1/6) (-3 + (-1)^(1 + n) + 2^(2 + n)), {n, 2, m}]], {m, 1, 10, 1}, SaveDefinitions -> True]
Out[]=
This algorithm makes sense if you think about our reflect, rotate, and translate geometric construction for the position sequence.
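The reflect, negate, and translate construction can also be sketched in Python (the names subseq, diffs, and positions are mine, not the notebook’s): each stage joins the previous stage, one new subsequence term, and the negated previous stage, and accumulating the differences starting from y(1) = 1 recovers the position sequence.

```python
def subseq(n):
    # Closed form for the subsequence 1, 2, 5, 10, 21, 42, ...
    return (-3 + (-1) ** (1 + n) + 2 ** (2 + n)) // 6

def diffs(m):
    # m-th stage of the construction: previous stage, new term, negated previous stage.
    d = [1]
    for n in range(2, m + 1):
        d = d + [subseq(n)] + [-x for x in d]
    return d

def positions(m):
    # Accumulate the differences starting from y(1) = 1.
    ys = [1]
    for step in diffs(m):
        ys.append(ys[-1] + step)
    return ys

print(positions(3))  # [1, 2, 4, 3, 8, 7, 5, 6]
```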
If you think about the above code, you realize that it is really implementing a recursive formula for our difference sequence d as follows: d(2^m) = (1/6)(-3 + (-1)^m + 2^(m+3)) for m ≥ 0, and d(2^m + j) = -d(j) for 1 ≤ j < 2^m. We can rewrite this using the IntegerExponent function, and it can immediately be implemented as follows.
So we can generate our sequence using a recursive formula, but what if we wanted an explicit formula for it? Since we have a formula for the subsequence, we might want to try the following
It appears to work! Of course, now we want to prove this conjecture. We have been in this situation before: we have a recursive formulation which makes intuitive sense to us and which can therefore serve as a definition, and we now want to prove a closed form formula as a theorem.
Theorem: If
then
So we’re done. We have never seen this formula before, and while we consider it unlikely that it is original, we did not find it, or the above formula, listed in the OEIS.
Recurrence relation for y
Recurrence relation for z
Before proving this result, we need a key lemma.
Pf: Consider the following calculation
Our sequence has many patterns. In particular we can see that the terms in positions which are powers of two have a simple formula as follows.
We can easily prove this formula using our lemma.
We can also see that the terms in positions which are one more than a power of two have an even simpler formula as follows.
Now we prove the theorem.
Therefore we have shown the following theorem
and moreover we have
In general it is interesting how often abstract results about functions or maps have implications for equations and inequalities and vice-versa.
An Inequality Result
Pf:
Again we give a numerical observation and the corresponding proof.
So we’re done.
We can now put these two lemmas together, if we give up on requiring equality, as the following calculation shows
This calculation leads us to the following conjecture; however, we will only prove the following, since it is easier and sufficient for our purposes
Pf: By the above two lemmas we have
Therefore we have
and we’re done. Now we can prove the main result.
but the lemma above gives us
ThueMorse, Evil, Odious, and all that....
The title of this section is an homage to the classic book title “Div, Grad, Curl, and all that....”
As I have mentioned before, my intention when I began this effort was that I would only discuss topics in the chronological order in which they were discovered in order to maximize the authenticity of the exposition. In the end, this purity was not ideal and not even really all that authentic, since one goes back and forth revisiting old approaches in light of new ideas. In particular, it now behooves us to discuss openly what we have been nibbling around the edges of: the ThueMorse function, which gives the parity of the number of 1’s in the binary representation of an integer. We have already informally defined it, but we now do so formally.
Definition:
Let’s compare these implementations with the built in ThueMorse function
This leads to a simple theorem which we don’t need to prove and could just as well have been the definition.
Theorem:
According to Wikipedia, Conway is credited with the terms Odious and Evil. Odious numbers have ThueMorse values of 1 and Evil numbers have ThueMorse values of 0. Here are some evil numbers.
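In Python, the classification is a one-liner each (a sketch; t is my name for the parity function, not a standard one):

```python
def t(n):
    # ThueMorse: parity of the number of 1s in the binary expansion of n.
    return bin(n).count("1") % 2

# Evil numbers have t(n) == 0, odious numbers have t(n) == 1.
evil = [n for n in range(20) if t(n) == 0]
odious = [n for n in range(20) if t(n) == 1]
print(evil)    # [0, 3, 5, 6, 9, 10, 12, 15, 17, 18]
print(odious)  # [1, 2, 4, 7, 8, 11, 13, 14, 16, 19]
```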
Since these terms are inspired by even and odd, in other words parity, but have a negative connotation, we will call this quality pariahity. See what I did there? It’s clever, yes? I have no idea if it is original. Was it really cleverness though? Nope. All I did was the following.
As luck would have it, the first word has negative connotations! It’s kind of fantastic how much knowledge is “at the ready” in the Wolfram Language and it goes way beyond merely having a dictionary available. On a related note, sometimes I wonder if the reason some people complain about the Wolfram language is that people who like it are constantly going on and on about how great it is.
Anyway, now we are ready for a slogan:
Everything we are doing comes down to parity and pariahity.
Slogans are fun, but I just put that bold slogan there in a shameless effort to promote my new word. In the course of trying to prove more advanced results we gained a sense of what we needed to be true about ThueMorse and then ran some calculations to verify that the statements were true as follows.
Then, because we are doing math, we formulate the statements precisely and prove them. We will need these lemmas shortly.
Comment: This is not how we came to conjecture the results below, but such “false” or “possible” discovery narratives or histories are nonetheless useful as a means of connecting ideas. This is not my idea and is straight from Polya in his book “Mathematical Discovery”, where he compares mathematical pedagogy to biology, citing the biological principle that “ontogeny recapitulates phylogeny” as a model for how to best teach students: give them some idea of the genesis and history of the ideas! In other words, do the exact opposite of what sometimes used to be fashionable in math classrooms: present the slickest possible modern definition first, and then show that the classical results are special cases. This may be logically efficient, but it is pedagogically “bass-ackwards”, as my dad used to say, and has never worked very well except perhaps for a small and very select few. In short, if you’re going to talk about classical examples anyway--and you should--for god’s sake do them first! This comment may not be necessary since “Bourbakism” seems to have fallen out of favor.
Follow up Comment: It has always struck me as so stupid to present the most abstract approach first that I have a rather paranoid theory about it: Some mathematicians see it as their duty to “weed out” those they perceive as unworthy. Ironically, these self appointed “guardians of excellence” are often not really such great mathematicians themselves--the great ones are probably thinking about math I imagine. I’ve known a few faculty that seemed to approach this “duty” with genuine gusto. Not surprisingly, the “weeds” often end up being the “under-represented” minorities and women, but I’ve actually known a few white male mathematicians who have recounted with real bitterness the gratuitous weed out efforts they were forced to endure. Anyway, my paranoid theory is that this sort of obviously terrible approach to teaching facilitates the culling process.
Follow up to the follow up comment: The above comment raises the obvious question: how was it that I survived the culling process? Many attempts were made. The short answer is that my parents, particularly my mother, did their jobs well. Also, a few of my early teachers and mentors played a significant role. By the time I arrived at the university, I was simply too strong to be culled. Some have called it arrogance, I’m sure; another term, I believe, is “uppity”. It’s actually just hard won confidence. The faculty that do this sort of thing have no problem crushing the dreams of interested students by bluntly and gratuitously telling them to their face, “You don’t have what it takes to be a mathematician”, for example. I never heard this one exactly, but I heard something similar and a bit more amusing. Of course, when directed at a fragile student, these assessments -- which, in the new terminology, are really micro-aggressions -- become self-fulfilling prophecies. They are almost always done in private, so there is no evidence other than the word of the now broken student. Finally, to add insult to injury, you sometimes have to listen to the perpetrators of these crimes lamenting, “If only there were a pipeline” or, even more outrageously, accepting awards for their teaching and dedication to students. Perhaps it would make more sense if the award was for their dedication to “some” students. All of this phoniness sort of makes me want to vomit actually.
Comment: I first learned about conjugation from the article Douglas Hofstader wrote on the Rubik’s cube in Scientific American. A friend of mine told me that you could learn to come up with your own solving algorithm by following the ideas in this article. Those ideas are group theory in general and conjugation in particular. He is also the one who told me about the Ross program which plays a significant role in the approach I have adopted in this monograph. One conclusion to draw from this comment is that it is important for nerds to have some nerd friends. An additional possible conclusion is that “nerd” schools, or at least schools with a vibrant nerd culture, are important for the intellectual development of nerds. Finally, I would suggest that the people involved in running nerd schools should either be nerds or former nerds themselves or at least have strong nerd sympathies. Otherwise it all becomes a diluted mess and standards inevitably get lowered to the point that all you have is just another decent school -- what a waste! But it would now appear that we are going off on a tangent. That was a very bad nerd joke.
Comment: I am quite sure there are those that will take umbrage at my “diluted” comment, since I am unapologetically advocating for the notion that students with advanced training and interest in mathematics should have the opportunity to be grouped together. In truth, I actually don’t like the phrase “ability grouping” but I do like the phrase I made up “preparation and interest grouping” however. In my mind, the historical problems with academic tracking were the correlations based on race and class caused by racism and classicism. If these causes and correlations are removed, it strikes me as common sense that having students with similar preparations and interests grouped together makes for a better teaching environment. Apparently this is a controversial position but everyone regards it as perfectly normal that many college campuses have athletic dorms for the “student-athletes”.
Comment: That’s another thing I don’t get. When I was a kid nobody used the term “student-athletes”. It’s as though, over time, a new class has been defined using multiple inheritance. Everyone says that “student-athletes” are students first, suggesting that it is a derived class, but the instantiated objects of the student-athlete class seem to have additional methods not present in either parent class in my experience.
Comment: Measuring “ability” by using a number produced by a test is problematic and suffers from the “reification” fallacy: how do you measure “ability”? The answer is usually given tautologically as “That number which my test produces” and, by false implication, it is genetic and fixed. I encountered these ideas in the excellent but poorly named book, “The Mismeasure of Man” by Stephen J. Gould, among other sources, but in the modern terminology this is now called “fixed-mindset” thinking and everybody in education mentions this phrase constantly and obligatorily.
Comment: The above discussions that I am having with myself remind me of the philosopher Neil Postman’s discussions regarding the importance of language, in which he cites Orwell’s 1984: suddenly verboten words, new words, new meanings for old words, etc... be suspicious! It suggests that we should all run self-diagnostics like Data does in TNG to make sure our language “symbol tables” have not been altered against our will.
Note: We will do something a little bit different with this result. We will do the proof entirely using Mathematica’s input mode. We have alluded to traditional math vs computer code issues before, and this time we won’t do some computations and then type it up as text, we’ll just leave it all as code! You could call it “mathcode” or something, but it already has a name: the Wolfram language of course! I’m not just being cute here; math formulas, properly written, are legitimate Wolfram language statements, whereas it would be quite difficult to write the argument below in C++ or Java. Despite writing the argument in code, it’s still a legitimate proof, since the code will be identical to what we would have written or typed. We tend to make mistakes and get confused (which happened) and then erase all evidence of the struggle, but I want to show you how you can use Mathematica as an incredibly effective scratch pad once you get comfortable with it. This technique is particularly useful if you find that you are algebraically challenged or fearful of objects like summations. I mention this because I have had many students who either paid no attention or learned very little from their math classes in high school and then discover that they are interested in computer science--which is wonderful--but then they learn, to their chagrin, that they really do need to be good at algebra, manipulating summations, induction, and recursion in their theoretical computer science/discrete math courses.
We now prove our
Conjecture:
Note: This will be long but each step is elementary, and when the step is not clear, I will explain it.
We now switch to Wolfram language code. Again, we are doing exactly the steps we would do in a traditional proof, we are simply doing it in input mode rather than in a math text cell so that we can check our work as we go.
Now we break the sum up into even and odd terms.
That’s right! You guessed it: telescoping sums again! After this, it’s clear sailing. By the way, the reason you could see that telescoping sums might be coming is simple: how else were we going to get rid of the summation?
So we have shown that
It follows that
but we also have
which gives
which is the same as
which simplifies to
which is equivalent to
And we’re done! We strongly suspect that there must be an easier way to do this, but when you don’t see a clever path, it’s time to just roll up your sleeves and grind it out.
Using the definition of ThueMorse, we immediately have the following corollary
Corollary:
Which leads to the following conjecture/theorem
Pf: First we recall our two properties
Then we apply Mod[ ,2] and some ThueMorse properties
So
Therefore we have shown the following parity/pariahity identity
and our theorem follows.
Comment: I had an analysis professor that used to love to announce during lectures that “the diagram commutes!” for the most trivial observations. It was a good natured joke making fun of my interest in algebraic topology. Honestly, I thought it was funny, but I never convinced him that commutative diagrams can actually be useful if the categories have non-trivial functors acting on them.
Note that we can re-write the above identity as follows
But wait, there’s more!
If there is one result that summarizes the whole halving/doubling, parity/pariahity story, it is the following formula
We can derive two facts from this statement
and we have
Therefore we have
which we can re-write two ways and we call both doubling/halving identities
Finally, we can express the doubling/halving identity in terms of a commutative diagram, which is kind of cute.
A Generalization
Comment: Mathematicians specialize in generalizing. Unfortunately, this seems to annoy some students and frankly even some mathematicians. They seem to feel as though mathematicians generalize merely for its own sake, or perhaps just to make things maximally confusing so that as few people as possible want to study mathematics. Actually, the point and value of generalizing, in addition to obviously widening the applicability of results, is to make things simpler and clearer to facilitate understanding. Despite this, the most general framework is almost certainly not the most pedagogically appropriate for a first encounter.
All we need to do is take the above recursive code and wrap it in Manipulate, replacing the parity scheme based on pariahity with our predicate predicateScheme, which we can vary. We will also very crudely measure the growth rate of the various outputs by looking at the maximum and minimum slope of the lines determined by the points with the origin.
The pure functions listed are simply various parity-choosing schemes. We have ordered them roughly in terms of size as measured by our “slope norm”. Here is a quick catalog of these schemes.
We will prove this by induction below, but first we’ll discuss some of the intermediate schemes. Obviously there are many possible parity schemes in between these extreme cases. Two come to mind immediately: ThueMorse[# - 1] == 0 &, which has been virtually the entire focus of this monograph, and OddQ, which produces the identity. We can see this easily as follows. Using OddQ comes down to
which implies
On the other hand we also have
which gives us
so we’re done.
and we’re done.
This in turn gives us
Let’s see how long it is taking to do these computations by using the Timing function.
Timing depends on your computer and what’s going on in your computer at the time you run it, so it isn’t perfect, but it gives us a rough idea: each time we double the data set, the amount of time goes up by a factor between 2 and 3 most of the time for these small data sets.
One might say to oneself something like, “recursion is slow, better to use the closed form formula we developed!”
This is a disaster! The closed form formula was the key for our theory, but it is useless for actual calculation even on these small data sets, as it starts out slow and quadruples every time. This should not be a surprise due to the use of the number theoretic functions and the summation.
This is substantially faster than the recursive version without caching.
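As a rough analogue, here is how the caching idea looks in Python, using functools.lru_cache as a stand-in for Mathematica’s f[n_] := f[n] = ... idiom. The recursion shape for the differences is my reconstruction from the construction earlier in this chapter, not the notebook’s code.

```python
from functools import lru_cache

def subseq(n):
    # Closed form for the subsequence 1, 2, 5, 10, 21, 42, ...
    return (-3 + (-1) ** (1 + n) + 2 ** (2 + n)) // 6

@lru_cache(maxsize=None)
def d(k):
    # First difference y(k+1) - y(k), via the reflect-negate-translate recursion.
    if (k & (k - 1)) == 0:               # k is a power of two, say k = 2^m
        return subseq(k.bit_length())     # d(2^m) = subseq(m + 1)
    top = 1 << (k.bit_length() - 1)       # largest power of two below k
    return -d(k - top)

@lru_cache(maxsize=None)
def y(k):
    # Memoized positions: y(1) = 1 and y(k) = y(k-1) + d(k-1).
    return 1 if k == 1 else y(k - 1) + d(k - 1)
```

With caching, each value of d and y is computed once, so repeated evaluations become cheap lookups, mirroring the speedup observed above.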
In any case, the growth in these timings is sufficiently fast that it would behoove us to find a more efficient way to compute this sequence. Fortunately, we don’t have to re-invent the wheel. When we submitted our sequence to the OEIS we found the following Wolfram language code from Robert G. Wilson again.
Once again the OEIS site proves to be a marvelous resource which, sadly, I only learned about recently. One nice thing that I noticed in the course of these investigations is that you often find immediately useful Wolfram Language code and sometimes it is the only code present! This shouldn’t be entirely surprising since ease of mathematical manipulation and list processing are deeply embedded in the DNA of the Wolfram language.
We can see that this is much faster than the code we have been using. Note that since we are using pure functions, we don’t need any named variables or scoping constructs.
This is much faster than our previous methods. Let’s plot all four methods together.
We have color coded the graphs so that the red one, the slowest, is the closed formula version, and the green one, the fastest, is the Wilson code.
Now let’s look at some timings.
It’s terrible! We aborted this after about 10 minutes.
Now we can use Fold to implement the copy, flip signs, and translate process.
Now let’s compare the timings for creating the above sequence.
This code appears to be faster than the code based on taking differences of the positions. Encouraged by this, I tried to Prepend a 1 and use Accumulate to see if I could generate the original position sequence more quickly by Accumulating this difference sequence as follows.
It turns out that the Wilson code is faster. Basically, and perhaps not too surprisingly, each piece of code is better at what it was specifically designed to do and trying to go from one to the other via Differences or Accumulate is time consuming.
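The Prepend-and-Accumulate round trip is easy to sketch in Python with itertools.accumulate (a sketch of the idea, not the notebook’s code): prepend the initial position y(1) = 1 and running-sum the differences.

```python
from itertools import accumulate

# First differences of the position sequence, as listed earlier in the chapter.
diffs = [1, 2, -1, 5, -1, -2, 1]

# Prepend y(1) = 1, then accumulate to recover the positions.
positions = list(accumulate([1] + diffs))
print(positions)  # [1, 2, 4, 3, 8, 7, 5, 6]
```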