
Wednesday, April 27, 2016

The AuT capacitor-part V: The hard part, part 3-The Universe in a nutshell

Part III

The Universe in a Nutshell (to date).  This "brief summary", really only the portion through roughly item 15, has to be read in the context of the overall work.  The idea behind an algorithm-based universe (AuT, Algorithm Universe Theory) cannot stand on its own without a great deal of background explaining why that theory is attractive as a theory of physics.

1        Linear vs. curved, the compromise: A curve works better for a big-bang type result, while the linear F-series works better to define the period of overlap and the resulting amount of compression/capacitance and decompression/discharge.  The gradual change between multiple universes may yield a curve, so the combination of the two provides a mechanism that gives linear solutions a curved appearance, but only up to the point where pi is no longer solved.  In this way the total-information solution to pi is preserved by averaging the results of the linear spirals together to even out the answers.
2        At inflection points compression changes to discharge, i.e. decompression.  Compression follows a scale formula of the type: compression number = (F-series sum(n, n-1, n-2))^(2^n) (a rough sketch of one reading of this appears after item 30).
3        The universes build up to the point where compression of higher states can occur, but each spiral remains in place, turning, compressing and decompressing at different levels and times throughout the process.  At certain points (intersections) the overlap of the spirals is sufficient that the next higher state of concentration occurs according to the state of compression.  It remains likely that there are multiple big bangs for each compression state (ct state) in order to reach sufficient compression for the next higher state.  Hence the equation for any one universe need not depend on what is happening at that state, but is merely a function of the number of spirals building, the amount of overlap and the resulting compression.
4        Our place in the universe appears to be 3-12 billion years after the end of the overlap, and therefore during the decompression stage, but it remains possible, if not required by the formation algorithm, that there are overlapping compressing and decompressing states throughout the universe, whether during a main compression or a main decompression stage of the primary spiral or spirals leading to a big-bang type event.
5        U = n!, where U is the number of quantum points of information in our (each) universe at any point, made up of n! spirals under a building model in which each spiral adds a spiral according to the F-series.  The size of our universe at any point in time, in terms of quantum units, is equal to the number of spirals that make it up.  This suggests that many spiral universes almost as big as ours, with almost as much information, exist, but that each is defined by only two dimensions.
6        The universes are defined by the linear F-series spiral sin function (LFSSSF), which can be solved as x goes from zero to infinity, but which for smaller spirals presumably stops at a fixed number of spirals, from 1 up to the highest number for the universe under regard, based on the maximum value of x being solved for in the algorithm.  A distinct coordinate is solved for each value of x for each LFSSSF, and these sequentially (as x changes sequentially) define a quantum point in our universe at any quantum moment.  The LFSSSF is of the type: Integral from 0 to T of [g*sin((pi/2)x)*x*(FseriesFunction)^(2^x) + (-g*sin((pi/2)x)*x*FseriesFunction)] dx, where FseriesFunction equals sum(n-2, n-1, n), at least for ct3-4 and ct4-5.
7        The solution changes at overlaps, so any solution must solve for both the positive and the negative spiral, and at the point where the two intersect you have to define compression and a negative entropy; afterwards you have to define decompression and entropy.  This means that the "information capacitance" and "information discharge" equations have to be added to the LFSSSF algorithm of the preceding item, depending on whether or not there is intersection at any given quantum point.
8        The solution begins at x = 0 and runs to the total information in our universe (as compared with the next levels of universes).
9        Compression occurs at a point where the prior universes come together in a way that allows it; this compression may include all subset states, but it only achieves compression of the next higher state when there is sufficient concentration of the next lower state.
10   The equation for compression yields a maximum of 3%, based on the amount of overlap of other states and on an equation related to the total amount of the next lower state.  This 3% steadily dissipates to a lower percentage after compression.
11   Beginning at x = 0 and ending at the total amount of information in any universe, compression occurs at a point where the prior universes align their states so as to allow compression of the next higher state, with succeeding states requiring exponentially more information; other, lower states of compression also occur during the points of overlap.
12   The likely equation for overlap, deriving from the capacitor voltage equation, is of the type:
13   V = I(tot(ctx))*(1 - e^(-t/rc)), where rc represents a constant (as opposed to a true resistance and capacitance; it would be related to the total amount of information available for compression); V represents the amount of compression within the parameters set; I(tot) is the total amount of information; there is such an equation for each type of information in the prior state (so in a ct5 universe x would range from 1 to 5), so that the total amount of compression is defined by rc; and t is a length along the intersecting spirals being compressed.
14   Discharge and decompression would similarly be defined by V = I(tot(ctx))*(V(x)/rc)*e^(-t/rc), where V(x) is based on the amount of compression reached before the overlap ends and t is the length of the spiral after the overlap (a rough numerical sketch of items 13 and 14 appears after item 30).
15   In order to get to a universe with the complexity of ours, built on solutions to twin intersecting linear spirals which are out of sync, the building process (F-series) for the spirals should be the same as for the spirals themselves.  Adjacent spirals would give adjacent solutions, so it seems probable that the building of the spirals occurs at the same rate as the propagation of the spirals.
16   There are several ways to approach this, but the one that seems best suited to observation is an exponential building, as opposed to a pure F-series construction, with this building occurring with each spiral quantum change, that is to say, with each change in x.
17   Using F as the designation of a spiral, it would look something like this:
x-1   F1
x-2   F1', F2, where the ' stands for the first change in solution.  F2 is being solved at x-2, but it is still being solved there for its beginning equation, i.e. F1 is rotated one unit relative to F2.  This can be accomplished by having the equation for F2 use x-n, where n is 1 less than x.  The same type of change would follow for the other spirals, as shown by the next change:
x-3  F1'', F3, F2', F4:  Here the exponential change is the same as the exponential building from the intersection of spirals shown in the drawings below.  In this case F3 springs from F1 and F4 from F2.  The solution for x in F3 and F4 would use x-n', where n' is 2 less than x.
x-4 would result in F1''', F5, F3', F6, F2'', F7, F4', F8, with F5, F6, F7 and F8 having an n'' predictably 3 less than x (a rough code sketch of this doubling schedule appears after item 30).
18   This fairly simple model would allow for incredible complexity even at this level, with the spirals interacting and offset from one another.  As you get to the higher concentration states of each spiral, the changes would become more radical.
19   Compression to photonic and wave energy might occur relatively early in this model, but given the need for proximate solutions this result might not be allowed to occur as a stable state.
20   One reason for the lack of compression would come from early state changes adjacent to later state changes.
21   The point where the spirals begin to turn would come almost immediately, but something should be present to generate the incredibly long spiral arms, or lengths before turns, that we experience, i.e. spirals that go through billion-year compressions and billion-year re-expansion/entropy periods.
22   Since the algorithm does not need a solution with a fixed amount of data, but any universe does need a fixed amount of information, it appears that the largest spiral has a solution that is defined by the lesser spirals which make it up, but without the effects of the larger spirals of which it would be a part.
23   The size of the spirals in this model would not be limited, and perhaps the easiest way to grow them would be to extend them by a quantum length with each rotation.
24   In this way you would have the following result in terms of length:
0,1,1,2,3,5
1,2,2,3(4),4(6),6(10) etc., with the parenthetical number being an alternative result.
25   In this way the length of the spirals grows exponentially for each spiral.
26   The F2 spiral would appear as follows relative to the F1 spiral in terms of length after this first period:
F1 1,2,2,4,6,10, etc
F2 0,1,1,2,3,5, etc
27   This growth pattern in terms of length would continue until there was sufficient concentration for a state change, at which point the first 90-degree turn (based on sin(pi/2)) would occur.
28   The exact method of building is uncertain, but this could conceivably result in extremely long spirals before even the earliest turn began, notwithstanding the fact that turns would necessarily have to occur for exponential compression using capacitance-type interactions.
29   While one thought might be that this would take a long time, that is not relevant, since the solutions to any spiral equation are instantaneous.  The size of the spiral arms should be defined by the number of spirals down to full concentration, or the amount of information in any spiral arm.  For example, if we knew that we were 10% of the way through decompression, then from the length of the post-compression arm traversed so far we could figure out the total length, divide that by 10^-47, and have some idea of the total amount of information in the universe (a worked example appears after item 30).
30   Likewise, using a quantum value of gravity to figure out the total amount of information in the universe, we could figure out how far along we are on a given arm, given the amount of information in each state.
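
What follows are a few rough Python sketches illustrating items above; every constant and every reading of an ambiguous formula in them is an assumption made for illustration, not something the theory fixes.  First, one possible reading of the scale formula in item 2, taking the F-series sum(n, n-1, n-2) to mean the sum of three consecutive Fibonacci numbers:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # F-series (Fibonacci) value used throughout this post
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    def compression_number(n):
        # Item 2, as read here: compression number = (F(n) + F(n-1) + F(n-2))^(2^n)
        return (fib(n) + fib(n - 1) + fib(n - 2)) ** (2 ** n)

    for n in range(2, 6):
        print(n, compression_number(n))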
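
Second, the capacitor-style charge and discharge analogues of items 13 and 14.  The numbers (I(tot), rc, the overlap length) are placeholders chosen only to show the shape of the curves:

    import math

    I_TOT = 1.0   # placeholder for I(tot(ctx)), the total information available for compression
    RC = 5.0      # placeholder for the post's rc constant

    def compression(t, i_tot=I_TOT, rc=RC):
        # Item 13 (charging analogue): V = I(tot) * (1 - e^(-t/rc))
        return i_tot * (1.0 - math.exp(-t / rc))

    def decompression(t, v_x, i_tot=I_TOT, rc=RC):
        # Item 14 (discharge analogue): V = I(tot) * (V(x)/rc) * e^(-t/rc),
        # where v_x is the compression reached before the overlap ends
        return i_tot * (v_x / rc) * math.exp(-t / rc)

    v_x = compression(10.0)   # compression built up over an overlap of length 10
    for t in range(6):        # discharge along the spiral after the overlap ends
        print(t, round(decompression(t, v_x), 4))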
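
Third, the doubling schedule of item 17: with each change in x every existing spiral gains one prime (one unit of rotation offset) and spawns a new spiral immediately after it, so four steps reproduce the x-4 listing given above:

    def build_spirals(steps):
        # Each entry is (spiral name, number of primes, i.e. rotation offset)
        spirals = [("F1", 0)]
        next_label = 2
        for _ in range(steps - 1):
            grown = []
            for name, age in spirals:
                grown.append((name, age + 1))          # existing spiral gains a prime
                grown.append(("F%d" % next_label, 0))  # newly spawned spiral
                next_label += 1
            spirals = grown
        return spirals

    # x-4: F1''', F5, F3', F6, F2'', F7, F4', F8
    print(build_spirals(4))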
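
Finally, the back-of-envelope estimate of item 29.  The 10% figure and the 10^-47 quantum length come from that item; the traversed length is an arbitrary placeholder, since the post does not supply one:

    fraction_decompressed = 0.10   # assumed position: 10% of the way through decompression
    length_traversed = 1.0e26      # placeholder for the post-compression arm length traversed so far
    quantum_length = 1e-47         # quantum length scale used in item 29

    total_arm_length = length_traversed / fraction_decompressed
    information_units = total_arm_length / quantum_length
    print("total arm length ~ %.3e, information units ~ %.3e" % (total_arm_length, information_units))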

These rough equations form the basis for a universe of the type we experience.  Given the long-term effect of even a small change in any spiral in this setup, the complexity rises to the equivalent of what we experience, and it can be assumed that as I(tot) increases for all clock-time states, that is, as the next universe after ours is solved for by the algorithm, the universe could change substantially from ours in everything other than the underlying method by which it is built/solved.


