The Kinder pre-AuT physics is not so stupid, because at the middle level we live in an entropy-driven system; but our EDS is illusory, just as self-determination is. When you get caught up in the illusion, you lose the symmetry of the system. Symmetry requires that our "perceived self-determination" is only a tool used by an underlying system to accomplish an end.
It is not "I think, therefore I am"; instead it is "the desired result of the algorithm requires these thoughts to accomplish this result, which makes me think I am."
A careful study of AuT shows that it "backed in" to spiral theory and, following that line of thought, eventually stumbled onto the Algorithm Model. AuT started with where we are and worked steadily backwards until an existing model, or at least an existing framework, was discovered that could be adapted to the "backed-in" model. (Freeman actually saw a stacking system of sorts in his calculations, but it was tied too closely to traditional physics; otherwise he might have stumbled back into the AuT model.) Once that framework was available, it was possible to go back to g-space and begin taking the model outward again, and it worked equally well in that direction. That is the sign of a good theory: one that works in both directions.
It provides predictability in terms of compression using numeric systems that we accept.
The complicated part isn't having the pieces because those are all required by the system. The complicated part is putting them together.
Fortunately, there are models that give F-series results using matrix-type models of the kind needed to create the illusion. A matrix yielding an F-series can be viewed as looking backwards from a model that takes an F-series and "stacks" it in order to get the illusion of multiple dimensions from a single algorithm.
https://en.wikipedia.org/wiki/Fibonacci_number#Matrix_form (scroll down until you get to the section entitled):
Matrix form
A 2-dimensional system of linear difference equations that describes the Fibonacci sequence is

(Fk+2, Fk+1) = [[1, 1], [1, 0]] (Fk+1, Fk),

i.e., the pair of consecutive Fibonacci numbers is advanced one step each time it is multiplied by the matrix [[1, 1], [1, 0]].
- This derivation is of little significance as a derivation, but as a mechanism for determining how individual solutions can combine to give multi-dimensional qualities to define space and separation, it holds some promise for this model.
- Leonardo Bonacci sequences (Fibonacci sequences) make sense primarily because, when used with an offset and anti-spiral model, you get pretty good results. Backing out from a boring set of curves to something stacked is where the modeling gets interesting. This matrix type of concept might, with more study, lead to the stacking mechanism; a minimal sketch of the matrix form is given below.
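To make the matrix form concrete, here is a minimal Python sketch, assuming nothing from AuT itself, only the standard textbook construction: repeated multiplication by the 2x2 matrix [[1, 1], [1, 0]] generates the F-series, and the entries of its n-th power are consecutive Fibonacci numbers.

```python
# Minimal sketch of the Fibonacci matrix form: powers of Q = [[1, 1], [1, 0]]
# carry consecutive Fibonacci numbers in their entries.
def mat_mult(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [
        [a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
        [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]],
    ]

def fib_matrix(n):
    """Return F(n) using Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]]."""
    q = [[1, 1], [1, 0]]
    result = [[1, 0], [0, 1]]  # 2x2 identity
    for _ in range(n):
        result = mat_mult(result, q)
    return result[0][1]

print([fib_matrix(n) for n in range(1, 11)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

The point of the sketch is only that one small matrix, applied repeatedly, is enough to generate the whole series.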
- In this section:
Divisibility properties
Every 3rd number of the sequence is even and, more generally, every kth number of the sequence is a multiple of Fk. Thus the Fibonacci sequence is an example of a divisibility sequence. In fact, the Fibonacci sequence satisfies the stronger divisibility property[36][37] gcd(Fm, Fn) = Fgcd(m, n). Any three consecutive Fibonacci numbers are pairwise coprime, which means that, for every n, gcd(Fn, Fn+1) = gcd(Fn, Fn+2) = gcd(Fn+1, Fn+2) = 1.
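These quoted properties are easy to check numerically; here is a short Python sketch (plain number theory, nothing AuT-specific) that verifies the divisibility-sequence identity and the pairwise coprimality for small indices.

```python
from math import gcd

# Generate enough Fibonacci numbers (indices 0..31) to test the quoted properties.
fib = [0, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])

# Divisibility-sequence property: gcd(F(m), F(n)) == F(gcd(m, n)).
for m in range(1, 25):
    for n in range(1, 25):
        assert gcd(fib[m], fib[n]) == fib[gcd(m, n)]

# Consecutive Fibonacci numbers are pairwise coprime.
for n in range(1, 25):
    assert gcd(fib[n], fib[n + 1]) == 1 and gcd(fib[n], fib[n + 2]) == 1

print("divisibility properties hold for indices 1 through 24")
```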
Every prime number p divides a Fibonacci number that can be determined by the value of p modulo 5. If p is congruent to 1 or 4 (mod 5), then p divides Fp−1, and if p is congruent to 2 or 3 (mod 5), then p divides Fp+1. The remaining case is that p = 5, and in this case p divides Fp. These cases can be combined into a single formula, using the Legendre symbol:[38] p divides Fp−(5/p), where (5/p) denotes the Legendre symbol.

Primality testing
The above formula can be used as a primality test in the sense that if n divides Fn−(5/n), where the Legendre symbol has been replaced by the Jacobi symbol, then this is evidence that n is a prime, and if it fails to hold, then n is definitely not a prime. If n is composite and satisfies the formula, then n is a Fibonacci pseudoprime.
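A hedged sketch of that test in Python follows; the jacobi helper is a standard textbook implementation of the Jacobi symbol (not something taken from the quoted article, and the function names are mine), and the Fibonacci values are computed by simple iteration, which is adequate for modest n.

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the standard reciprocity algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def fib_mod(m, n):
    """F(m) mod n by simple iteration (adequate for modest m)."""
    a, b = 0, 1
    for _ in range(m):
        a, b = b, (a + b) % n
    return a

def passes_fibonacci_test(n):
    """For odd n not divisible by 5: does n divide F(n - (5/n))?
    Primes always pass; composites that pass are Fibonacci pseudoprimes."""
    assert n % 2 == 1 and n % 5 != 0
    return fib_mod(n - jacobi(5, n), n) == 0

print(passes_fibonacci_test(23))    # True: 23 is prime
print(passes_fibonacci_test(21))    # False: 21 = 3 * 7 fails, so it is certainly composite
print(passes_fibonacci_test(323))   # True: 323 = 17 * 19 passes, a Fibonacci pseudoprime
```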
When m is large, say a 500-bit number, then we can calculate Fm (mod n) efficiently using the matrix form. Thus

[[Fm+1, Fm], [Fm, Fm−1]] ≡ [[1, 1], [1, 0]]^m (mod n).
Here the matrix power A^m is calculated using modular exponentiation, which can be adapted to matrices (modular exponentiation for matrices[39]). Following this out, you get to some interesting models because (1) it works for large numbers of the type you would find where there was compression, and (2) you get the type of exponential stacking that supports the observed results. A sketch of the modular matrix exponentiation appears below.
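Here is a minimal Python sketch of that idea, assuming only the standard square-and-multiply method: every matrix entry is reduced mod n at each step, so Fm (mod n) can be reached in about log2(m) squarings even when m is a 500-bit number. The modulus below is just an arbitrary example value.

```python
def mat_mult_mod(a, b, n):
    """Multiply two 2x2 matrices, reducing every entry mod n."""
    return [
        [(a[0][0] * b[0][0] + a[0][1] * b[1][0]) % n, (a[0][0] * b[0][1] + a[0][1] * b[1][1]) % n],
        [(a[1][0] * b[0][0] + a[1][1] * b[1][0]) % n, (a[1][0] * b[0][1] + a[1][1] * b[1][1]) % n],
    ]

def fib_mod_fast(m, n):
    """F(m) mod n by square-and-multiply on Q = [[1, 1], [1, 0]],
    using Q^m = [[F(m+1), F(m)], [F(m), F(m-1)]] (mod n)."""
    result = [[1, 0], [0, 1]]   # 2x2 identity
    base = [[1, 1], [1, 0]]
    while m > 0:
        if m & 1:
            result = mat_mult_mod(result, base, n)
        base = mat_mult_mod(base, base, n)
        m >>= 1
    return result[0][1]

m = 2**500 + 1                            # a ~500-bit index, as in the quoted passage
print(fib_mod_fast(m, 1_000_000_007))     # F(m) mod 1,000,000,007 in ~500 squarings
```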
- If there is a problem with these models, it is that I need an upgrade in my matrix mathematics, which I'm not sure I have time to finish. Perhaps that will be something for someone else.