The equation f(n)=n+(n-1)+(n-2), which applies to the entire universe as compressive stacking, has an important function in determining curvature because n-1=(n-2)+(n-3) and so on. In this way the solution to any information state is finite, but it requires the inclusion of every prior state going back to x=1. Since for all practical purposes x is infinite for our observations today (even though it is actually an infinite distance from infinity in all other respects), this makes the determination of any quantum point impossible in o-space. Fortunately, g-space has no time restrictions, so each build of the universe (at the equivalent rate of 1.07x10^-37 of a second) is essentially calibrated on a rebuild of every quantum state each time it occurs.
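As a minimal sketch only, the stacking can be read recursively, with the (n-1) and (n-2) terms themselves being stacked states, so that evaluating any state forces a walk back through every prior state to x=1. The function name, the base cases at n=1 and n=2, and the choice of Python are illustrative assumptions, not anything stated in AuT itself.

# Hedged sketch: reading f(n) = n + (n-1) + (n-2) recursively, so each lower
# term expands the same way and every prior state back to x = 1 is required.
# The base cases below are illustrative assumptions.
def stacked_state(n, trace=None):
    """Return an assumed stacked value for state n, recording states visited."""
    if trace is None:
        trace = []
    trace.append(n)
    if n <= 1:          # assumed floor of the recursion at x = 1
        return 1, trace
    if n == 2:          # assumed: state 2 stacks only on the single prior state
        return 2 + 1, trace
    a, _ = stacked_state(n - 1, trace)
    b, _ = stacked_state(n - 2, trace)
    return n + a + b, trace

value, visited = stacked_state(7)
print(value)                 # stacked total for state 7 under these assumptions
print(sorted(set(visited)))  # every prior state back to 1 is touched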
Perhaps one of the most significant features of this stacked universe is that when the relative change occurs, no two F-series carriers will be exactly the same length, no relative change will be exactly the same and no SCT will be exactly the same as another.
This occurs via a simple process with so many changes that it appears to be an infinitely complicated process.
The idea of potentially like-sized spirals creating the stacking mechanism only "appears to happen" because the results are so close together. The actual stacking process, which I will draw and explain mathematically, is not complicated until it is applied. Once it is applied...well, that's different.
What we saw last was that history is necessary in order to determine current separation, which reinforces the conclusion taught by f-series stacking at the beginning. It is an idea inherent in all of this, but something considered in the cool of a Gregorian monk's chapel, where the only answer to the god that lived there was that its acceptance, as opposed to some steady-state god, seemed counter-intuitive because of Leonardo Bonacci (Fibonacci).
The stacking function for these increasingly large spirals is believed to involve spirals traveling opposite to a carrier, but these are effects and not actual spirals, or perhaps "results" and not actual spirals.
Attempting this solution for ct1 to ct2 is complicated for what we view as stable compressive states, because the numbers involved in generating our perspective from a ct4 or ct5 universe are essentially infinite relative to our limited capacity for handling them.
But the process is the same for changes in ct1 and so we're going to apply "dimensional" calculus to the non-dimensional calculus in order to show where the results come from and why the two give such different results.
The spirals are generated as set forth herein, then the results are stacked in order to generate carrier spirals. This process continues on so many different levels that it is possible that the compression states are generated when 256 ct1 spirals align at the same time to generate a single photon.
Alignment of space is an uncertain process, but since it can occur based on any common feature, I'm going to start there; the same process occurs at the enormous values of x required to get similar concentrations of ct2 in order to get to ct3, and so on as x increases towards infinity.
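A minimal sketch of this alignment idea, assuming the common feature can be represented as a single "phase" label and taking the figure of 256 ct1 spirals per photon from the text above; the equality test and the data shape are illustrative assumptions, since nothing here specifies how the common feature is actually compared.

# Hedged sketch: 256 ct1 spirals sharing one assumed common feature are counted
# as a single photon. The feature labels and the test are illustrative only.
from collections import Counter

CT1_PER_PHOTON = 256  # figure taken from the text

def photons_from_ct1(phases):
    """Count groups of 256 ct1 spirals that share the same assumed feature."""
    counts = Counter(phases)
    return sum(count // CT1_PER_PHOTON for count in counts.values())

# 300 ct1 spirals share feature 0 and 100 share feature 1: only one full alignment.
print(photons_from_ct1([0] * 300 + [1] * 100))  # -> 1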
Notations:
y=f(x): dy=df
f' (Newton's notation for the derivative) = df/dx = dy/dx = d/dx(f) = d/dx(y)
(This notation omits x0.)
f(x)=x^n; n=1,2,3 [in our case we're slightly modifying this by making it x^(2^n)]
d/dx(x^n)=?; df/dx=[(x+dx)^n-x^n]/dx (you could put x0 in for the first x in x+dx)
x is fixed, dx moves
Binomial theorem: (x+dx)^n = (x+dx)*(x+dx)*... n times = x^n + n*x^(n-1)*dx + junk terms; but since the change is actually finite, the junk terms have weight, which we experience as history.
The junk terms, O(dx^2) (that is, the dx^2, dx^3 and higher terms), are important because they don't go away: there is a specific limit, the same end point that allows pi to have a specific definition in AuT, the same that allows for there to be a quantum state beyond which there is no separation because it breaks down to pure information.
df/dx = (1/dx)((x+dx)^n - x^n)
= (1/dx)(x^n + n*x^(n-1)*dx + O(dx^2) - x^n)
= (1/dx)(n*x^(n-1)*dx + O(dx^2))
= n*x^(n-1) + O(dx)
which tends, as dx goes to 0, to n*x^(n-1)
so d/dx(x^n) = n*x^(n-1)
So for our equation, f(x)=x^(2^n), we get d/dx(x^(2^n)) = 2^n*x^((2^n)-1) + O(dx)
The difference is that we do not want to take a total or sum of the various answers, or ignore O(dx), because that is where the history of the equation lies, and because we have a non-linear answer with discrete quantum units which assure that at a certain number of places we get an answer.
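A small numerical check of the derivation above, assuming f(x)=x^(2^n): with a finite dx the difference quotient equals the leading term 2^n*x^((2^n)-1) plus a residual that never vanishes, which is the retained O(dx) "history" term. The particular values of x, n and dx are illustrative only and are not the AuT quantum minimum.

# Finite-dx check for f(x) = x**(2**n): the difference quotient is the leading
# term 2**n * x**(2**n - 1) plus a residual O(dx) part that does not vanish.
# The sample values of x, n and dx are illustrative assumptions.
def difference_quotient(x, n, dx):
    p = 2 ** n
    return ((x + dx) ** p - x ** p) / dx

def leading_term(x, n):
    p = 2 ** n
    return p * x ** (p - 1)

x, n = 3.0, 3                                    # f(x) = x**8
for dx in (1.0, 0.1, 0.01):
    dq = difference_quotient(x, n, dx)
    print(dx, dq, dq - leading_term(x, n))       # last column: retained O(dx) part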
Here is how this changes:
lim (as dx goes to 0) of [f(x0+dx)-f(x0)]/dx, and you cannot plug in zero because in a quantum universe you don't have zeros. Zeros do exist at the ct1-ct0 boundary, but these are not "actual zeros", merely the potential for a yes or no answer, the potential for a plus or minus answer. You can divide a maybe by a maybe and get one.
We have actual limits in our analysis of AuT but the limits change with each change in x and these limits affect both curvature and the change in information quantity. The arrangement of information also evolves since the curvature solution changes.
Derivative limits for AuT approach not zero but a specific minimum value of x which is so far from the current solution as to appear infinitely small or infinitely far away. We talk about a 13-billion-year-old universe, but AuT explains that the latest big bang is only the latest average inflection point between compression and expansion and gets us nowhere close to the actual beginning of the algorithm.
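As a sketch of a limit that stops at a minimum rather than at zero, the difference quotient below clamps its step to an assumed floor instead of letting dx go all the way to 0. The floor value, and the assumption that the minimum applies to the step dx, are illustrative choices rather than anything fixed by AuT.

# Sketch of a derivative whose limit stops at an assumed minimum step rather
# than at zero. DX_FLOOR is purely illustrative; it stands in for the far-away
# minimum described above.
DX_FLOOR = 1e-6

def quantum_derivative(f, x, dx):
    """Difference quotient with the step clamped to the assumed floor."""
    step = max(dx, DX_FLOOR)
    return (f(x + step) - f(x)) / step

f = lambda x: x ** 8
print(quantum_derivative(f, 2.0, 1e-3))  # ordinary finite difference
print(quantum_derivative(f, 2.0, 0.0))   # a request for dx -> 0 stops at the floor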
One of the complicated questions that arise from the formulation of this algorithm is what happens when you add two prior states to get the next state with an offset.
While one drifts to history, which is the only place it can go, there is no "new" state; but if enough occur at once, then you don't see double the set, instead you see just a higher state.
Take a deep breath, I am still here.