It is cool outside, a cool 80 something.
Perhaps you can see that the sun is slowly creeping its light down the tops of the trees.
It is my favorite time of the day during the summer, when it is comfortable outside, almost cold. I have my half cup of coffee.
In case you wonder what it looks like. You did that to me, all those days alone or perhaps I did it to myself. It is done now.
I accidentally took a picture of myself, but it's not being posted. I look like I am a million years old, which is appropriate for someone who sees a million years into the future, I suppose.
I missed swimming a 50-yard fly during my IM workout, which was bizarre; my defective brain somehow turned off its memory of it. I had to think really hard to make sure I had not skipped it, but the time was gone, and there was only a flash of it.
I wonder how much time I have lost over the years.
Book 4 has been published in its 4th and final (at least for now) edition.
I was able to whittle it down to 181 pages and 41,700 words; more accurate and more concise.
Towards the end there were some pretty important parts, but I was also able to take out huge blocks.
Was I influenced by swimming 2500 yards in the sun to get done quickly? I don't think so. It was a studied combination of having edited the most important 100 pages several times, getting rid of what was essentially 150 pages of surplus, repetitive, or inaccurate information, and rewriting what was kept.
A huge block of bunk was deleted from book 4 as out of date or redundant. It is covered in more up-to-date form in book 3 and the 4th edition of book 4. If I am generally right, but certainly wrong in some of the specifics, perhaps some insight into me if not the universe, or the process if not the theory, can be gleaned from what I cut out, which is largely included below to the extent not done earlier.
While I did not go over this in the detail it deserved, it appears to represent everything that was brought up to date in books 3 and now 4. I may, given time, find something in this that should not have been cut, but I feel pretty comfortable that just as book 3 was a very complete summary, book 4 has a great amount of theory and detail, including the most recent summaries of the information developed during and after book 3 was readied as an audio book.
No other edits are envisioned at this time; the next undertaking will be book 5, but I am not really close to picking that up yet. I will order some copies of this one when the print version is ready, probably by Monday, and I am going to take steps today to take down the longer version, which is less specific in the last 80 pages. I am very happy with book 3, the coming audio of 3, and book 4. The audio largely fills in between these two volumes, although it is necessarily trimmed and edited to deal with the limits on formulas and figures inherent in an audio book.
1.
Dark Matter and Dark Energy - dark energy is easy, since it is nothing more complicated than the reflection of the amount of compression or decompression (in the early model, unraveling spirals vs compressing spirals) giving what is currently an ongoing expansion, which is actually estimated by way of calculation in Spirals in Amber 2nd Edition. It is an effect, and dark energy will be overtaken by compression, which will shorten the distance between compressed states by reducing space relative to compressed states and give the effect that gravity is outweighing what is called dark energy. The answer to the estimate is that in 14 billion years the universe will start to contract.
There is no matter, per se, in AuT, but dark matter appears to be nothing more complex than the reflection of solutions for ct1 (space). The localized effect of dark matter indicates something more complex. The solution to the conundrum suggested by the strong force and the magnetic force, as seen from the perspective of AuT, is that (1) dark matter is space and (2) the compression of space in regional areas is an intermediary space state, much like magnetism, that occurs in the presence of some higher ct state. It could reflect difficult-to-view minimal ct4 states, clouds of electrons, for example.
AuT suggests that magnetism is
a temporary or transition state of
ct1 to ct2 in the presence of ct4.
Magnetism is a function of the F-series function where the substitution
rate comes from ct1 states that are solved within a narrow range. The narrow range is a spiral around the ct4
states with a high ct3 substitution rate, also known as current.
AuT suggests that this same transition state phenomenon creates
matter (ct4) in the compressed atomic
state (neutrons and protons) by way of compressive proximate solutions that
are reflected in the strong force. It presumably also creates the weak force as a further compressed
state of ct1 to ct2 magnetic compression.
The existence of even localized ct1 compression is not totally
unexpected although the exact states are not well identified. One example
of how this would be possible is ct1 in the presence of ct5 or even ct6.
The half life of the resulting phenomena can be quite long as indicated
by the strong force which continues notwithstanding a long period between ct5
exchange, perhaps as long as the period for recompression, which only begins 14 billion years from now.
2.
Why there is more matter than anti-matter. The current state of the universe, as an expanding and not a compressing universe, may provide a partial explanation. Also, offset
exponential growth of positive and negative F-series spirals generates an
imbalance. The length and number of spirals built on opposing f-series will always be much greater, exponentially
greater on one side especially at very high values for x.
This is a 161% greater
measurement. It’s worth noting that
information changes at a 200% rate. This
supposes that the f-series generation or informational generation forms two
different, alternating types of information.
For the intersecting spiral model, one would be solved for solutions in opposite
directions.
This works well to generate two types of information. If a positive and negative spiral
intersected, no information would be lost, but you might have a conversion to a
non-linear state, spontaneous decompression of both yielding wave energy, light
and space which we'd interpret from pre-AuT math as the destruction of the
particle even though we're actually just looking at conversion where the amount
of each type can be readily calculated based on how much wasn’t otherwise
accounted for being turned into space or ct1.
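The "161%" figure above plausibly refers to the golden-ratio growth rate of the F-series, where each term is about 161.8% of the one before it; that reading is my assumption, not stated in the text. A minimal numerical check:

```python
# Consecutive F-series (Fibonacci) terms grow by a factor that
# converges to the golden ratio, roughly 1.618, i.e. each term is
# about 161.8% of the one before it. Whether this is the source of
# the "161%" figure above is an assumption.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

ratio = fib(30) / fib(29)
print(round(ratio, 4))  # about 1.618
```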
3. How big is the universe - This is a non-question, since the universe has no size, but the universe does have information, so the size of the universe is equal to the amount of information in the universe, which changes constantly (but at a predetermined rate). While the amount of information is great, it is the result of the effect of a single variable, so that the true value of the universe is a function of the value of x and its age in seconds times 1.07x10^39, although the true age would include information in the universe before "aging" and before standard clock time.
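Taking the definition above literally (information equal to age in seconds times 1.07x10^39), the arithmetic is easy to sketch. The 13.8-billion-year age plugged in here is an assumed conventional value, not a figure from the text:

```python
# Information count implied by "age in seconds times 1.07x10^39".
# The 13.8-billion-year age is an assumed conventional value.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
age_seconds = 13.8e9 * SECONDS_PER_YEAR   # about 4.35e17 seconds
information = age_seconds * 1.07e39
print(f"{information:.2e}")               # on the order of 10^56
```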
4. Are
there parallel universes? This is a pre-AuT question because parallel
requires distance which doesn't happen. While the information that makes
up the universe may be expressed in any number of ways that may confuse the
appearance, the idea that space is anything more than sequence of relative
solution is largely disproved by the appearance of a uniform size of the universe
from any point, and any "parallel universe" would actually be a separate equation that would have no effect on our algorithm.
Chapter 38 Quantum
information theory and traditional physics
Force is a result. For example, it is speculated that long periods of ct4 super compression result when compression occurs in the presence of ct5. This is solution based: in high compression states, ct5, you get super-compression 1111 to 11111 coordinate solutions, and these last as the ct4 states move away, leaving them as compressed ct4, which we experience as neutrons and protons.
This
transition is based on the necessary crowding
of substitution states and is probably the strongest evidence that, for
example at ct2-3 (photon to wave) the substitution is not purely ct1
substitution but ct2 substitution in waves.
I.e., substitution in higher states involves at least temporary super compression states of the next lower state. Time dilation, on the other hand, shows that external ct1 substitution rates remain important in the complete management of math solutions. In magnetism, as soon as the "current is shut off," the related spiral of ct2 temporary states also disappears, remembering that this is a math result and not a force consequence of shutting off the current.
It appears that the same type of state substitution occurs during fusion reactions to some extent, but more likely this is an aftereffect of the generation of ct4 super compression during very high black-hole-type compression states that occur, for example, during the big bang inflection point events. The continuation of these nucleus super compression states indicates that for higher compression states, the temporary super compression states last longer, presumably because of internal sharing which leads to history and time dilation in conjunction with external ct1 substitution. Whether very large
black holes reflect the same effect in the presence of as yet undetected ct6 or
whether these very high compression states can generate their own super
compression (e.g. by fusion in stars in ct4) remains to be determined with
specificity, but can be determined by examining the results of compression in
fusion reactions in stars for ct4.
When you get ct1 linearity in the presence of massive overlaps of ct1 states, you get comparative states between ct1 and ct2, and the result of these separation states is that you experience gravity, which is more pronounced as there is ct1 sharing between higher compression states. Again, this is solution based, resulting from math solutions that generate ct1 substitution, and not force driven.
Space and
the effects of space affected over long periods of time in the presence of high
compression may easily account for dark matter but this is only necessary to
the extent that dark matter is compressed in certain circumstances.
The idea that light is bent in areas where we believe there to be concentrations of dark matter potentially reflects intermediary states present with longer lifetimes.
The fact that it may be ct6, super compressed ct1 (space) resulting from
earlier compression in the presence of a high ct state or some other result
doesn’t change the ability of AuT to provide an explanation if enough data can
be obtained.
In
circumstances during very high compressive states (whether localized or during
a big bang) super compression of ct states appears to occur and these
compressive states continue and continue to circulate as magnetism and similar
mathematical solutions as the combined high compression spirals begin to
disperse during the phases of decompression.
A lasting
effect in the form of dark energy is net decompression which will eventually
change to net compression lest the universe continue to expand exponentially.
To call dark matter a particle is nonsense, just as looking for bosons is nonsense. It is attempting to force non-dimensional mathematical solutions into dimensional particles.
While we see
dimension, it has nothing to do with dark energy except that this decompression
state is a result of decompression and exponential growth of F-series solutions
of ct1 without offsetting higher state compression. When the higher state
compression is greater than the F-series growth, you get a compressing universe
which approaches the big bang inflection point.
Since the “length”
measured in quantum states between
these “turns” or “carrier lengths” at each carrier state
is exponentially higher (almost but not quite double) at each change, the carrier lengths between turns have epoch-type lengths and
intersections between intersecting spirals or other compressive solutions
instead of turns can play a role in “reversing” the compression/decompression
solution theorized in the primary model.
Substitution becomes a replacement for
velocity. 1:256 is light speed. It is noted that in a slow changing
ct4 state, the rate of ct1 substitution remains at 1:256 for the ct1 states
that make up the slower carriers at the remove of 3 steps (ct1 substitutions,
ct1-ct2, ct2-ct3, ct3-ct4).
We see the
rate of change of ct2 as energy and we see ct1 as having more energy than ct2,
but the truth remains that these things are just different forms of
information, more specifically different solutions with different substitution rates, the substitution rate for space having no context in time or dimension and therefore being difficult for us to understand until it reacts with higher ct states to give a spacetime relevant result.
To say that
means that space has "more energy" than anything else is inaccurate,
but it also has a grain of truth from our perspective because we see relative
speed of information exchange as energy and from that standpoint you can say
that space either (1) has infinite energy or, more accurately (2) that the
effect that space has on ct2 gives it the maximum energy that we experience
(the speed of light movement) before AuT.
Likewise, to
say there is more of ct1 than anything else is a tautology because everything
else is made of ct1.
The
mathematical solution or algorithm solution for the 256 states aligned by one
ct1 carrier define a quantum photon or ct2 state. At first glance, it
appears that the one ct1 carrier state is added to 255 “carried” states to
reach the 256 magic number and that as x changes, each time x changes, there is
a substitution so that ct1 is always changing at light speed although other
solutions are suggested by the mesh created when higher states are
formed. It can be noted here that ct1 has no internal substitution (at least none that is dimensional), so space does not experience velocity as we understand it, but does have solution order allowing it to provide separation without movement.
Space varies
along with x (the single variable). The suggestion of exponential change with
each change in x is hard to escape without the math of f-series addition being
altered.
Chapter 39 Gravity and Time
Because gravity is the underlying force generated by linearity, and because time as we experience it only arises after compression, we begin with gravity and time.
Chapter 40 The equations at the
beginning of time:
The evolution of a state from
non-linear to linear may be represented, using zero prime as:
F(0’)=(0,1,1)
plus (0,-1,-1)=(0)^(2^0) in the F-series
This suggests that under the algorithm 0' oscillates between 1 and negative 1 in quantum instances. (-1)^x satisfies this, with the caveat that for the universe to function as we see it, each solution survives. Order of solution, or relative order of solution, is important to compression. The fundamental building block becomes a series of 1 or -1 solutions. This suggests that for any finite value of x, 0'=0 when summed, but there is an imbalance that has to be built into the system lest it collapse, fpluspix.
(1) An offset is required (in solution order) in
order to prevent the universe from collapsing in on itself and that the offset
is inherent in the value of fpluspix and evolves to the F(n)^2^n equation as
the fpluspix solutions are summed over time.
(2) There is a F-series type solution where
you have simultaneous – and + underlying solutions along with two (or more) coordinates
changing at once as moving from 1 to 11, 22, 33, 55, to 111,222,333, to etc.
Hence 0’ is the
only base solution that exists. The plus
gives relative positions in terms of a quantum solution based on the
perspective or solution order.
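The oscillation described above can be checked numerically; this sketches only the (-1)^x behavior, not the fpluspix offset itself:

```python
# 0' modeled as (-1)^x: the terms alternate between -1 and +1, and
# over any even number of quantum instances they sum to zero,
# matching the claim that 0'=0 when summed for finite x.
def zero_prime(x):
    return (-1) ** x

terms = [zero_prime(x) for x in range(1, 11)]
print(terms)       # [-1, 1, -1, 1, -1, 1, -1, 1, -1, 1]
print(sum(terms))  # 0
```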
Fibonacci Spiral value is governed by the equation F(n)=F(n-1)+F(n-2). This equation also balances positives (adding) with negatives (prior solutions as negatives):
pi = n*sum(1 to x)[(-1)^(x+1)/(2x-1)]
This
allows a substitution:
0’=(-1)^(x+1)
Pi = sum(1 to n) n*(sum(1 to x)[0'/(2x-1)])
The
preservation of solutions for pi(x) like the preservation of solutions for
fpluspix allows at the value of pi0 to pi1 ratio for the equation to be “the
equivalent of” f(n)^2^n from f(n)^2n-1 and the equivalent solution takes over
for higher states of compression.
Pi has
a fixed value for the universe over all values of x and all states of
curvature. It is a sum of these sum
solutions, or sum(1 to q)[sum(0 to q)(n/(2+q))] plus the negative spiral solutions, adjusted for solution order.
a)
The equations defining pi
for different ct spiral states are expected to look much like this although the
size and direction of the phase shift may be different and may represent the
source of offset spiral:
Ct1 -1+1/3-1/5+1/7…
Ct2 2-2/3+2/5-2/7…
Ct3 -3+3/3-3/5+3/7…
Ct4 4-4/3+4/5-4/7…
Ct5 -5+5/3-5/5+5/7…
A more traditional view where there is
no negative/positive transition of pi is used and this one is provided as an
alternative view.
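The chart above reads as the standard alternating series for pi/4, scaled by n and with the overall sign alternating by ct state; that reading is my interpretation, and a short numerical check of it follows:

```python
import math

# Each Ctn row above is n times the series 1 - 1/3 + 1/5 - ...,
# carrying an overall sign of (-1)^n, so the partial sums converge
# to (-1)^n * n * pi/4.
def ct_series(n, terms):
    sign = (-1) ** n
    return sign * n * sum((-1) ** x / (2 * x + 1) for x in range(terms))

for n in range(1, 6):
    print(f"Ct{n}: {ct_series(n, 200000):+.5f}")
```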
Ctn=Geo(F(n)^(2^n)), where 2^n results from going from a 1-information-state-change F-series solution to greater simultaneous solutions, 1 to 11 to 111 to 1111, etc.
F(n)=(n+(n-1)+(n-2))
The equation looks
like state(n)+carrier(n)=comp(n-1+(n-2)).
Compare comp(-n)=(n-1)+(n-2), where compare generates the compression state of the
positive and negative states. These
values of negative and positive spirals are half the result each, but their
relationship giving rise to different compression states reflect whether
gravity is positive or negative. The
discussion of pi above and its component parts provides an alternative for
creating positive and negative states.
This
non-dimensional equation leads to the dimensional equation (m1*m2)/d^2 when
elements are joined and then separated.
This relates to the exponential relationship of forces of necessity.
Dcomp/dx=dcomp1*dcomp2=mass
and compression states of (state1*state2)/dimensional relationship of 1 to
2*dimensional state of 2 to 1. The
effect of gravity relates to ct1 sharing between the two.
a.
Discharge
and decompression would come from a similar equation. The original model came from the capacitance
and discharge solutions which focus on the e^-q where q relates to some
variable equation which is discussed previously.
b.
When the initial discussions of this topic were undertaken, capacitance equations were looked to as models: V=(I(tot(ctx))*V(x)/RC)*e^(-t/RC) or C(t)=E(1-e^(-t/RC)) and D(t)=QRC*e^(-t/RC). An analysis shows that the results actually
obtain from the shift values of the underlying fpluspix solutions for ct1
states which give rise based on different compression states to all the
positive and negative features we experience, including gravity and dark energy
which are not seen together only because one overwhelms the other locally and
the “net” values at higher compression states express themselves differently
because the solution result becomes important.
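The charging and discharge curves named above are easy to evaluate numerically; the component values used here are arbitrary placeholders, since the text does not tie E, R, or C to specific quantities:

```python
import math

# Capacitor-style curves from the text: C(t) = E*(1 - e^(-t/RC))
# charging toward E, and a discharge decaying as e^(-t/RC).
# E, R, C are arbitrary illustrative values (assumption).
E, R, C = 1.0, 1.0, 1.0

def charging(t):
    return E * (1 - math.exp(-t / (R * C)))

def discharging(t):
    return E * math.exp(-t / (R * C))

for t in (0.0, 1.0, 2.0, 5.0):
    print(f"t={t}: charge={charging(t):.3f} discharge={discharging(t):.3f}")
```

Note that at every t the two curves sum to E, which is the sense in which charge and discharge are complementary in this analogy.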
c.
Solution result refers to the fact that (Figure 1 above) fpluspix takes over for (-1)^x, and f(n)^(2^n) takes over for f(0)^(2x-1), as the effect of the solution becomes the next step in evolution over the solution itself.
This function of the process of preserving solutions is critical in the
universe. Nevertheless, the underlying
values remain in place so that fpluspix net values for the entire universe
drive its expansion and contraction and ultimately all of the other features.
d.
The offset between values gives rise to the features of the
universe in bulk including the appearance of offset intersecting f-series
spirals which led to many of the underlying features of the universe.
Figure 61
offset spirals
There are 256 ct1 states in a single ct2 state, corresponding to the change from 1 to 11 and the consequent change in F(n)^(2^n) as n changes from 1 to 2 at one unique spot. That unique spot is where n=2, so that the equation becomes 4^(2^2)=256. Informationally, the results are the same. The 4 sides come from F(n)=4, but this is only a first relative dimension.
However good the reasoning, the
relationship to information theory is impure because of zero.
Zero is not just positive and negative; it is a function, like Pi. To start direction it stays fixed; to form pi it changes. To understand the range, you need to look at this chart:
Figure 62
This
indicates that at x=3 you can have 4 spirals based on F-series stacking.
But that isn't the point, being an old point and only given to put some perspective on the simplified drawing.
What
you see is that 0' at x=4 is not a simple sum. Instead the quantum moment
is represented by the relative position and overlap and F-series state of the
different units. In this case, ignoring all other curves it can be said
that at x=4 the following occur:
1) S1 (the first spiral) has an f-series
length of 3, s2 has an f-series length of 2 and s3 an F-series length of 1.
Not only is the angle of each (figured from the evolving sin equations for each level of compression) slightly off from the others from the initiation, but these inherent (not actual) angles have become more complex as they revolve around each other. In addition, ct2 has overlapped a prior
location of ct1, but this is probably not important unless it occurs along a
quantum length of ct1 of an "active" length as opposed to a
historical one.
2) This actually happens at the point where s2 overlaps s1 the second time, and there is a relative change at this point which would allow for the aging of the two spirals were the concentrations sufficient and the length of overlap more than a point overlap; no spark occurs because there is no photonic energy nor sufficient compression even possible.
3) What it does show is that when we talk about "adding" and "equals" in mathematics, those terms have different meanings in absolute mathematics, because of the survival and evolution of the underlying solutions.
Chapter 41 Time in a bottle 7
summary
1) The basic universe is built on a Fibonacci 0,1,1,2 type base algorithm, which is built on an fpluspix cycle evolving between ct0 and ct1. The resulting ct states are further defined as compression ratios giving rise to intersecting algorithms, and each is offset by an evolving value of pi and curvature (represented by the sin equations here) which for any point in the universe is a function of solution order, solution density (compression), and the extent of positive and negative functionality built into the solution of quantum values of compression, according to the broad definition of pi broken into its component parts comprised of positive and negative quantum solutions separated by solution order.
2) Force and matter are a reflection of, and not the cause of, the algorithm. All secondary quanta are formed by adding the prior quantum, then compressed states, together as an F-series, with each having a resulting carrier f-series. The entire arrangement allows the universe to vary according to a single variable.
3) Space is a dimensionless matrix known as ct1, separated by time- and dimension-independent solution order. All dimensional aspects, including force, result from relative solution order and solution density relative to the nondimensionalized ct1 space.
4) At certain concentrations, multiple lower ct
states (here ct1) stack or change in common to form F-series stacked or
compressed states (11,11,22) which are initially photonic cells associated with
two places of solution at one time with different scales of compression or
linearity for each place/state.
5) The stable concentration is also according to an f-series algorithm F(n)^(2^n) where n=2 and F(n) in this case is (n+(n-1)+(n-2)) or, here, (2+1+1).
6) This method of compression is associated with a reduced number of ct1 states: 111,222,333 for ct3 (wave energy) and so on for matter ct4 (2222,3333,5555), black hole material ct5 (abbreviated 3+5+8) for f(n), and so on.
7) Intermediate compression forces are present
which create transitional states having the effects of the next higher state
for the next lower state seen, for example, as magnetism for ct1 transitioning
to ct2; the weak magnetic force for ct2 transitioning to ct3, and the strong
nuclear force for ct4 transitioning to ct5.
These intermediate forces can last for different periods of time and
presumably in higher compression states can occur when the higher ct state is
no longer proximate in solution location.
8) Separation and curvature of space are defined by offsetting by solution order and by the defining series for pi (fpix), which creates alternating positive and negative solutions that interact to converge or diverge from solutions of near solution order common density.
9) For any point in the universe pi at any quantum
instant is based on the number of applicable x states in the universe, the
level of relative compression and the separation of solution order.
10) To determine compression and expansion of the entire universe, inflection points are defined by net expansion or compression at an inflection point, changing with the continually changing separation and curvature of space and its component higher ct states.
11)The universe does not completely compress or
expand but moves according to infinite converging or diverging series
represented by the changing value of pi for different ct states and the
f-series diverging growth of information and the F(n) function compression of
information.
12) Coincidence (apparent, since there is no actual coincidence) is generated by individual spiral solutions within the overall state matrix changing through replacement of mostly non-ct1 compression ct states.
13) Apparent randomness results from the
replacement within a compressed ct state as fpluspix for underlying points
shifts leaving the matrix in the same fashion as coincidence. Coincidence and randomness are only apparent
and are only two different names for the same thing.
14) These changes, happening on quantum scales of ct2 or even ct4, are largely undetectable except as they affect the F-series solution over a long period of the change of x, but both of these generate an illusory randomness in the flow of events. The fact that they are specific mathematical solutions to an algorithm resulting from changes in a single variable eliminates true entropy, which is an illusion based on the net decompression state of the universe and will presumably be replaced by anti-entropy (net compression) when the universe is in a compression cycle.
15) Information is consolidated in a very
inter-related and complex fashion at least from ct1-ct5 (or 6). Further, at net inflection points the universe as a whole goes from net expansion to net compression, allowing for repeated "big bangs" with greater amounts of compression at each subsequent bang as the amount of information increases.
It remains possible that each addition of x goes through a complete
cycle, but it seems more likely that the net inflection points result from
enormous changes of information based on continually building a new universe
based on the two prior universes in a reflection of the F-series equation and
that the extra information is visible from quantum points as velocity and historical
references which would not be possible in a quantum universe built any other
way.
16) Inflection points (beginning with fpluspix
shifts) mean that at a given point the universe goes from net expansion to net
contraction. The most likely model is a version of capacitance although
the driving feature has nothing to do with storing charge and everything to do with spiral compression and expansion related to opposing intersecting spirals.
17) All ct states are necessarily informational in nature, allowing them to exist but be dimensionless, solving the Zeno-type paradox of what happens when you keep splitting distances in half; eventually you get to "yes/no" and a beginning through a quantum event.
18) The geo or pi function is not dimensional but instead refers to the path, along the carrier series, by which a solution for a higher ct state is solved. For example, an almost linear pi is returned where near light speed relative solutions are found for ct4. These are being solved for common ct3 carriers, so that ct3 itself is changing at the "turns" in the F-series spirals, but ct4 (the fourth aligned F-series, 111-1) is traveling
If it was 1,1,2,3,5 and it was traveling along the 3 it would be 3
changes to one of ct3’s changes. There
is no true pi or dimension, only an illusion generated from this quality. CT5 (1111-1) would see things as 4
dimensional, but there would be no 4th dimension, it would only be
traveling along or relative to ct4 spiral quantum changes.
19) One feature suggested is that ct3 would have a
universe that appeared two dimensional and ct2 would have a one dimensional
view of things. CT1, being the relative
state of information would not have a dimensional time view but would only “see
things” from the changing value of x.
There being no “stacking” at the ct1 state, there would be no history at
all. History, like time and dimension is
merely an illusion and the illusion becomes more pronounced with increased
compression.
20)
The “length” of spiral changes necessary to
achieve the type of relativity that we observe is very high which is why
shorter term universes either don’t have higher ct states or they don’t matter
much because they do not last long. The
series 1,1,2,3,5, etc. gets large rather quickly, but to get a post inflection point universe in the expansion stage that lasts 16 billion years at 1 change every 1.07x10^-39th of a second (16b*1.07x10^39x60x60x24x365) you
have to have half that many expanding inflection points at the very end of the
universe (or it wouldn’t have kept expanding for so long) which means a value
of x at the termination of the universe with a stacking function that creates a
false sense of durability due to the enormous size of “our” post inflection
point universe at the point where the next post compression inflection point
solution begins to take the universe back to the big bang point of the
expansion inflection point. This doesn’t
change “age” but if the F-series expansion of information holds true, the
change is x is enormous. The conceptual
frame work of these enormous changes is that the % of space gradually decreases
towards but never reaching zero in an infinite converging series, while the
amount of space steadily increases, the difference being taken up in steadily
larger ct states in a infinite diverging series both based on the F-series and
giving the impression that we currently hold of size (even minimum size like
Planck length) and dimension (two sides of the same coin) and movement.
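The parenthetical product in the paragraph above (16b*1.07x10^39x60x60x24x365) can be carried out directly; it multiplies the text's rate of quantum changes per second by the seconds in 16 billion years:

```python
# The text's own product: 16 billion years of expansion at
# 1.07e39 quantum changes per second.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
changes = 16e9 * 1.07e39 * SECONDS_PER_YEAR
print(f"{changes:.2e}")  # roughly 5.4e56
```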
Chapter 53 AuT in a chapter
The basics of Algorithm
Universe Theory (previously non-linear time theory), are (1) All linearity is the result of conserved quantum information coordinate states fixed in a non-linear time environment for any value of x (one variable), (2) that
dimensional aspects of the universe (time, space, energy and matter) and forces
(energy, gravity, clock time) are illusory conserved, consecutive,
exponentially separate groupings based on resilient solutions for prior quantum
states of the universe forced into new configurations. They are resilient
because they are used in the solution of succeeding solutions so that each
universe is a factorial of prior universes. All states of existence
(space, energy, matter, black hole material, etc) are formed by exponentially
compressed states of information according to an algorithm defining compressed
information states. (3) We live in a pre-determined universe where our
“apparent” self-determination lies in slowing down change within a middle range
of time states where all dimensions and time states, like clock time, only
change in one direction at the same speed but with relative speed differences
based on concentration and the intersection of other time states where the more
lower order time states that affect a given quantum time, the faster it appears
to be changing notwithstanding the fact that all changes are by single quantum
steps. (4) A mechanism of internal accretion, like Fibonacci spirals and
particularly intersecting spirals providing for spiral mitosis and compression,
is suggested. (5) Space time is derived
from the relative position and movement between non-linear ct1 manifested as
empty space and higher clock times based on shifting from compression to
decompression in the sense that the alignment of states along carriers of same
status states is disturbed during decompression. These lines broadly set out the field of inquiry of this
book.
All of AuT is first
generation theory, so every sentence should have “in theory” appended to the end. In
order to create continuity, this requirement is not written into the discussion,
so several potentially inconsistent theories are presented without
distinguishing them except as alternative approaches within the paragraph or
section in question. One example of this lies in the expression of
“compression” theory (e=mc^2 being the best known pre-AuT embodiment).
This is discussed as information theory, where forced changes in the
algorithm solution give rise to forces; as spiral math with a capacitor-type
interaction; as time orbits; and as vibrations with one direction having
force characteristics and the other dimensional characteristics. There
are at least four not entirely inconsistent models that apply.
One goal of the inquiry is
to define the simultaneous aspects of coordinates viewed from one quantum point
forward or backward. This includes the character of matter, dimensional
characteristics, anti-matter, force characteristics, dark energy, etc. To meet
these goals, AuT attempts to solve existing equations in mathematics for a
non-linear time environment and eliminate resulting inconsistencies.
Under AuT, there is no
space-time. This means that "space" can be quantized, but not
in terms of existence. Instead it is based on quantum non-linear
coordinates or bits of information and more specifically is the solution to an
algorithm for one variable.
What we perceived in ancient
times (pre-Einstein) as a medium is not anything. The same quantum time
(possibly a single quantum of time) creates everything expressing itself as
time coordinates (CT) grouped together to form points, P1, P2, etc. What
we perceive as a "separation" of the quantum fluctuations is nothing
more than the changes of quantum bits through durable building block universes
that stack for each successive increase of a single variable (x).
Space, energy and matter are not things; they are only
manifestations of change. To think of combinations of matter, energy and space,
you have to think of them changing in the same way and at the same rate. Space
is just a manifestation of coordinates or data points, just as energy is a
manifestation of compressed space and matter a manifestation of compressed
energy (observed); in particular they are manifestations of changes from one
math solution to another.
AuT suggests that the things
that we see as "outward" forces might be the anti-forces to
“compression” which is the name given to an increase in simultaneous or
“coordinated” coordinate changes. When you force dimensional characteristics
one way the corresponding force is created although Newtonian concepts don’t
translate perfectly into AuT.
Spiral theory suggests that
gravity is a net capacitance of spirals and anti-gravity (dark energy) is a net
discharge in terms of the stacking of spirals; more particularly, gravity
could be seen as the approach of spirals towards the corresponding
“anti-spiral,” and dark energy would be the area immediately past the point
where the overlap occurs. Other features suggest it results from the change
from non-linearity.
The basics of Non-Linear
Coordinate theory, and of this inquiry, are (1) All linearity is the result of
conserved time coordinates changing in a non-linear time environment where,
because time is nonlinear, everything happens at once (replaced with AuT- or
algorithm controls) and (2) that dimensional aspects of the universe and forces
(dimension, energy, gravity, clock time) are created in (a) conserved, (b)
consecutive, (c) exponentially separate groupings based on the number of (a)
changing (b) two state coordinates (+/-, yes/no, 1/0).
Prior theories are caught up in a
background of “randomness,” “self-determination,” “space time” or some other
variable matrix. Certain investigations focus on “fundamental quantum
theory,” which is less malleable, and then try to get from this concept to “space
time.” The flaw lies in assuming the two are separate. AuT
recognizes that time linearity, or perceived informational change, is only a
reflection of fixed information coordinates grouped or summed to give a historical
perspective.
SHT and NLC, the abandoned
predecessors, were at their core Parmenides and Einstein concepts.
Einstein predated hologram theory, but couldn't give up space-time. He
was certainly aware of Parmenides, but didn't have a reason for abandoning his
own existence. SHT recognized predestination and an environment where
everything happens at once, but this early version of AuT was willing to
envision existence within space time.
The combination of Einstein,
Hologram theory and EHT allowed the advance that AuT represents. Einstein did
not surrender space-time, despite having indirectly recognized predestination
and nonlinearity, but he did not have hologram theory suggesting that space be
abandoned.
AuT starts at non-linearity.
It moves outward from a specific starting point, which gives it an
advantage over other theories which haven't been able to get to the starting
point. It isn't "the beginning," but CT0 (clock time zero,
linear time) is a mathematically understandable point before the first
"big bang."
The examination of our
origin in a two dimensional hologram is an example of the difference between
observation from CT4 and an attempt to theorize an observation from CT0.
There is no explanation in the former SHT of why there is a holographic
image and on close analysis the whole idea of an independent hologram is
contra-indicated, although it does appear that we are the result of a
projection of linearity (everything happening at once) from a non-linear
infrastructure (algorithm).
EHT was based on the idea
that everything happens at once and that this had to come from a singularity
without time. This meant that information was finite. That is no longer
certain. NLT said that the singularity had time, but it was non-linear and went
further to indicate that everything we experience, from dimension to energy to
time itself is merely an expression of different phases of coordinate change.
Again, time was finite, and problems of “built” quantum moments are still
possible, but complicated. Direction of time became more important with AuT.
Because NLC starts from a
pre-space/time point it gives insights which are not available to other
theories.
The non-mystery
of gravity
The average rate of change must be differentiated from the
instantaneous rate of change dy/dx, which is what AuT focuses on. AuT has an instantaneous relative and absolute
change depending on whether it is x or ct1 relative change that is being
considered.
(1) If the minimum size for ct4 (one lasting for 1x10^-37th
of a second) is a ct3 carrier with a length of 10^2^4, and if this reflects a
change relative to ct1 of 10^2^4*6^2^3*4^2^2, and (2) if there is a substitution
for velocity between states based on changing ct1 for another length at the
maximum rate of 1:256 (ct1 to ct2), then one can see: a) stability is offset by
speed, and b) speed (substitution of ct1 at a maximum rate of 1:256) is directly
proportional to the comparative rate of change between ct4 and ct3 (i.e. as the
carrier gets longer, the substitution rate goes down, slowing the velocity AND
changing the ratio of ct3 to ct4, thereby changing the amount of time dilation,
which is merely the ratio of ct3 change to ct4 change in this instance because
of the stacking formula).
The underlying algorithms have no dimension or time (sct).
The transition to the visible dimension and time (sct) occurs as
we move from ct1-ct2 and then only as a relative illusion starting at 1, then
2, then the 3 that we see. The reason for this has nothing to do with
true dimension. While ct5 would see the world as 4 dimensional, there is
no extra dimension.
The compression/expansion solutions and F-series stacking
interaction leading to gravity and anti-gravity effects are not very troublesome
mathematically.
The relativity
allowed by comparing the non-dimensional ct1 to other states made by the
expedient of stacking (1,11,111) information leaves room for the other forces
to be developed along the same methodology, creating relativity, while not
increasing the underlying equations.
The reason is that each "dimension" is nothing
more than an information group changing relative to the next lower information
group along the individual solutions (1,1,2,3,5-changing, for example 3 million
beats represents 1 beat for the lower state).
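The relative-rate idea in the parenthetical above can be sketched numerically. This is a minimal illustration, not the author's method: two counters driven by the same single variable x, where the higher group registers one beat per a fixed ratio of lower beats (the 3,000,000:1 figure is the text's example; the function name is mine):

```python
# Hypothetical sketch: every state advances by single quantum steps of the
# one variable x, but a higher "information group" only registers one beat
# per `ratio` beats of the group below it, producing apparent relative change.
def relative_beats(x_steps, ratio=3_000_000):
    lower = x_steps              # the lower state beats once per step of x
    higher = x_steps // ratio    # the higher group registers 1 beat per `ratio` lower beats
    return lower, higher

lower, higher = relative_beats(9_000_000)
print(lower, higher)  # 9000000 3
```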
For this reason, any model that discusses dimensional
characteristics related to the universe is, to any important extent, flawed in
the first analysis.
For purposes of creating the
transition, drawings are used for ct1 to show how dimensional characteristics
relate to non-dimensional underpinnings. Quantum solutions mean that
there is no "film," but there is a picture which contains layers
hidden in the solution in ways that appear irrelevant in state, but which
represent history, including changed position for each quantum moment. These all
exist for any quantum point, and all of these points can be solved; and while
there is connectivity through singularity, predictability through solutions to
shared ct1 proximity is also present.
CT1 does not need to have intersecting spiral arms in the
definition, but the net effect is the same.
While there are alternatives, this one allows solutions to change
from state to state. For purposes of simplicity, two spirals solved
together are assumed, and the picture of this state, showing a zero point with
two one-length yes-or-no arms coming off of it, makes a good model. The
"higher" ct-1 states occur when this solution allows the two yes/no
arms to come together so that instead of maybe yes, maybe no, you get maybe,
yes, no (or maybe, no, yes; either is the same). Hence algorithms where you
never get past this maybe yes, maybe no do not create a higher universe with compression states.
You correctly surmise that there is no dimensional element,
just as there is no "true" yes/no. It can alternatively be
envisioned as 1/-1, but both analogies fail because there is no truth in this
type of universe, no dimension for a negative, and zero doesn't exist as
nothing. These
nomenclatures are just ways of solving the algorithm. Stacking and the
geo function offsetting results allow the otherwise impossible
transition from a no to a yes, or a negative 1 to a positive 1, without passing
through maybe or zero. “Maybe” becomes zero and is circumvented by
manipulation of the algorithm, and this simple manipulation creates the ability
to have stacking of states, F-series growth, relative change and the universe
as we experience it.
This origin state
must not be confused with the big bang; it is both more monumental, much
farther away, and less explosive (or more explosive, depending on your point of
view).
The ct1-ct(higher)
space is easier to envision by looking at ct1 as two arms off a non-linear base
(maybe/0), but these "arms" have no dimension as we understand it. They are, however, vibrational in
concept. One will give rise to the positive arm of the algorithm, the
other will form the negative arm required for compressive and decompressive
solutions.
It is possible to envision an algorithm where the positive
and negative portions of the equation don't exist past the ct1 non-dimensional
place, and the solutions that arise for later (positive-only) arms get their
negative aspects from this missing arm. Somewhere the intersecting spiral
solution must have the negative portion to satisfy, for example, the expansion
and compression portions of the algorithm, which extend past the fusion/fission
characteristics all the way to infinitely compressed information states
(according to the F-series model 1,11,111,1111, etc. to infinity) and the expansion
and contraction of the universe at inflection points based on the sum of the
data.
This shows how a
dimensionless result for ct1 (space) evolves to create ct1 F-series
intersecting carriers with positive and negative intersecting Fibonacci
spirals.
The next state will be stacked on top of this one as shown
in the figure in building an Algorithm 1.
Presumably, to keep things constant, this will result in a type of
“super-carrier” which carries all stacked carriers. Another view of this is shown in the figures
below, where the build-out is handled slightly differently. The exact
mechanism is important because a small change in this angle is where randomness
for defining the universe changes, not because it is “a random event” but
because the exact formula for the algorithm defines what will subsequently
follow.
The “jump” in this example (the other proposed jump occurs
to the left and right of the two open arms below the 0 marked maybe or 0)
doesn’t really seem like a necessary step so you might ask, “why?” The reason this is included is because it is
part of the F-series. The How and the
underlying Why are slightly different questions. Both are answered by solving for an algorithm
that provides for a starting point and
ends with stacked F-series carrier spirals.
It is entirely “likely” (not
just possible) that at the ct1 level the carriers are made of pulsating
0,1,0,-1 spirals (maybe, yes, maybe, no).
This is a lot more
important than you might think if you look at how the “stacked” final drawing
looks. It is 0,1,-1. The next line can
be envisioned as a 0,1, but it too might have a negative leg (not shown), and
this is much more consistent with the idea of stacking.
The “proof”
(ct5=F(5)^2^5) of scale (16^16) for ct5/black hole material suggests the
F-series 1,11,111,1111,11111, but this “compression” means that the compression
at each stage is the F-function of n+(n-1)+(n-2), while the carrier F-series
function (1,1,2,3,5) has to increase, at least at the ct1 level, indefinitely;
both these functions are maintained for some purposes in order to get the
results we experience.
Because of the
method of building things out, everything past is preserved, everything future
may be predicted once the exact formula is determined.
This is shown in
the drawing in the next chapter where the carrier for ct1 is built from these
triangles but could easily be built with “V”s.
AuT-D The non-mystery of gravity part 2
Gravity
is what we experience because of the change from a non-linear to the stacked
form. Its expression is in relativity,
but its generation is in absolute terms.
That is, we experience it as a force, but it arises at a quantum level
where everything happens at once. To
understand this we have to look at it on the quantum level.
It is an equation, not a force: the reflection of an event
that causes a non-dimensional line to pick up an F-series extension. The extension is pre-ordained mathematically. The equation looks like state(n)+carrier(n)=comp(n-1+(n-2)).
Compare comp(-n=n-1+(n-2)) where compare generates the
compression state of the positive and negative states. These values of negative and positive spirals
are half the result each, but their relationship giving rise to different compression
states reflect whether gravity is positive or negative.
This non-dimensional equation leads to the dimensional
equation (m1*m2)/d^2 when elements are joined and then separated. This relates to the exponential relationship
of forces of necessity.
Dcomp/dx=dcomp1*dcomp2=mass and compression states of
(state1*state2)/dimensional relationship of 1 to 2*dimensional state of 2 to 1.
The force isn't something that joins objects; instead it is
what the objects experience in response to this math moving them together or
apart.
It is the equation that stacks and that requires the
solutions to be linear; it is the equation that adds one solution to the other. While these can be seen as occupying
different times along a time line, the absence of true time makes them
experienced together, but separated by distance, which is, after all,
interchangeable with time according to dx/dt.
X1 is one carrier state behind
X2 and is a quantum measurement. When
two ct1 states have a higher state change relative to them you have dct2a(for
example)/dct1a and the other is dct2b/dct1a, but the change of either relative
to dct1a may be indirect and only a lengthy function of dctz^100.
What we observe as diluted gravity is just that, the
distance between higher states as a reflection of the number of ct1 states
shared and the chain between them.
This feature of non-linearity, all ct1 states existing
without dimension, requires that the gravity in the most distant higher ct
state in the universe is immediately affected by the gravity of any other
higher ct state although the actual effect is offset in sct by the relativistic
consequences of interaction of the higher ct states to the ct1 states.
Sharing of ct1 states implies a change between states, the
gravitational imperative of m1*m2/d^2. D
is defined as the presence or absence of shared ct1 states for a given
speed. Speed is important in the
discussion because the change in ct1 states is defined as speed. Vibrational only means that shared ct1 states
are affected by unshared ct1 states moving the fundamental ct4 states at
different rates.
The interface of speed and gravity gives rise to measurable
and predictable force vectors; but the presence or absence of movement and
gravity is the effect of the solution to the algorithm for these higher ct
states to non-dimensional, time (sct) independent ct1 states.
The measurement of gravity should reflect the web in the
fashion shown above for the first event in linearity.
The “attaching” makes sense as gravity since the resulting
force is one of attachment and since gravity reflects the density of the matrix
formed by combining all these non-dimensional solutions which provide a
non-dimensional point that we perceive with all of its elements as the past and
present, the here and there and the gravity of existence.
The universe gets very
complicated in a hurry, but the building of the initial spirals can be
reflected in joining changing ct1 states.
Building an algorithm 16 gravity
and anti-gravity
It's time to talk
about what happens when we stop blowing expansion into the Freeman Balloon
Universe. This is another way of ignoring dimension, which models fairly
well on the actual universe which has none.
There are so many
things based on misconception. For example, the reasons that photons
leave stars but not black holes. The idea that black holes fall into a
singularity is an odd little trap (double entendre) and one which I fell into
originally. Only when I noticed they were moving with the rest of the
universe did I realize the mistake.
So why no photons?
The answer should be the same as the answer to why waves are absorbed
by matter. They are not trapped by
gravity alone; they are compressed out as radiation or into the higher
compression state. This doesn't stop black holes from attracting light,
however, or space.
Time dilation
requires that space be compressed in the proximity to any object with
sufficient mass, even ct4 scale materials which are exponentially less dense
than black holes.
Using the beaded
spiral model, it appears that at the state of ct5 space is forced onto a spiral
into a higher state and the same is true of each succeeding state.
This raises the very real possibility that ct6 exists
around us and cannot be seen except as high gravity areas because it is so
dense that even ct5 (black holes) are compressed onto carriers in its presence.
The idea of miniature black holes making up dark matter is
unlikely even in these high gravity areas; but the idea of ct4 states stacking
to the same net effect seems possible if not likely.
All ct5 states are exponentially denser than ct4 states; it
just works out that way. Anything else would be unstable. It might
be created in a laboratory, but as soon as the compression forces were removed
it would break up, and it would only be dense ct4, not ct5.
That 13.7-billion-year start to the universe is utter
nonsense, since x wasn't large enough for even ct4 to exist in its present form
at 13.7 billion years from the true start.
At full compression, the
"big bang" suggests an explosion, but there is no explosion,
although some force changes the compression of the universe. What is it?
This is an anti-gravity, defined by the fact that an
inflection point has been reached where a net inward movement is suddenly
replaced with a net outward force.
This transition will continue according to estimates here
for either 16 billion years or 50x14 billion years when the next inflection
point is reached.
What is the anti-gravity? It appears that there is a
net feature to gravity. It currently nets out with antigravity greater
than gravity for the entire universe, although for locations where gravity
predominates (higher ct states) there appear to be localized inflection points
where the opposite effect is experienced. As a matter of fact, the fact
that we see so much gravity indicates that we are approaching an inflection
point sooner rather than later. When that happens, we'll see a net
contracting universe.
It's a slow process, the conversion from expansion to contraction,
and it occurs at such a small level that it might be hard to see. It may be impacted by
the relative stability of the states; that is, ct4 and ct5 states may
change slower than ct1-3 states, so the net anti-g and g for space,
photonic and wave energy may change more quickly and even more often.
Gravity forms from
the creation of the ct1 matrix and possibly around the non-linear state
(0,1,1-the zero) which is a part of the compression process.
The expansive force
is around us, perhaps because it is "outweighed" by gravity it
appears invisible or perhaps in our isolated net compressive state it is
outweighed.
The way this works
is that net inflection points regionally and net inflection points for the
whole universe provide for inflation or deflation, net compression or net
expansion.
As the universe
shifts and the air goes out of the Freeman balloon model, perhaps it will
become predominant in some way and we will see it pushing against an
apparently ever-increasing gravity with just a different spin.
There are other
changes but net expansion to compression seems to be the most likely
reason. It will take at least 16 billion
(the short time frame) years for gravity to change from "net
anti-gravity" to "net gravity" in this model, but regionally it
can change at any time. In this model we
are in a “net compression bubble.”
Net gravity changes slowly. Given the conceptual framework the net
gravity would have changed very little in the 10,000 years of recorded
history. The idea is that in this period
there would have been only 10,000/16 billionth of a change. Regionally, the change may have been even
less.
Records have not been kept, but they can be kept to observe
this net change over time but the process is complicated because localized
inflection points may be longer or shorter according to the spirals that build
local events.
Here the idea is that for ct4 transitions the minimum (not
actual but minimum) size for the smallest (not average but smallest) carrier
for an instantaneous (existing only for 1.07x10^-37th of
a second) ct4 state must be 10^16*6^8*4^4 relativistic state changes, which is
that number times 2^2 quantum instances; matter at the level that we
experience it is presumably a great deal longer, on a scale of 10^74 times
longer.
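The minimum-carrier figure just given is plain multiplication and can be checked directly; a sketch using only the text's own numbers:

```python
# Minimum size of the smallest instantaneous ct4 carrier per the text:
# 10^16 (ct3) * 6^8 (ct2) * 4^4 (ct1) relativistic state changes,
# and that number times 2^2 quantum instances.
relativistic_changes = 10**16 * 6**8 * 4**4
quantum_instances = relativistic_changes * 2**2
print(relativistic_changes)   # 4299816960000000000000000 (about 4.3e24)
print(quantum_instances)      # about 1.7e25
```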
Building an algorithm 15:
compendium
The universe contracts and expands, but not because of
gravity or dark energy, which are effects, but because the algorithm has average
inflection points which spiral towards but never reach equality, or a balance
of the positive and negative. The
balance is between positive and negative F-series.
The algorithm
allows for stacking of time states because F-series are stackable (1,1,2 is the
same as 11,11,22 is the same as 111,111,222 and so on).
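The stackability claim (1,1,2 behaving like 11,11,22 and 111,111,222) follows from the linearity of the Fibonacci recurrence: multiplying every term by the same constant preserves term(n) = term(n-1) + term(n-2). A minimal check, assuming that is all "stackable" means here:

```python
# Verify that scaled Fibonacci-style sequences still obey the recurrence:
# 1,1,2,... scaled by 11 gives 11,11,22,... and by 111 gives 111,111,222,...
def obeys_f_recurrence(seq):
    return all(seq[i] == seq[i-1] + seq[i-2] for i in range(2, len(seq)))

base = [1, 1]
for _ in range(10):
    base.append(base[-1] + base[-2])

for scale in (1, 11, 111):
    scaled = [scale * n for n in base]
    print(scale, obeys_f_recurrence(scaled))  # True for every scale
```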
The positive and negative sides of the algorithm are
required to provide for expansion and contraction about a common solution.
As the solution moves away there is expansion, as the solution stays
parallel (not exactly parallel because of offsets) there is near stability, as
the solution moves together there is compression. For each quantum
element carrier solution this model holds, even as the solutions are stacked
leading to an average solution about which the universe expands and contracts.
There is no relative movement at the ct1 level for quantum
change, but it does exist at the interface of each type of information, and this
is visually clear: 1 to 11 to 111, etc. Relative movement of this type
takes into account (1) that all points must change at the same rate according
to a single variable, and (2) that space (ct1) or some other state provides a matrix,
and the interaction with the matrix gives the appearance of relative change
that is observed, because the carriers change only at 90 degree turns or only
when solutions change from compressive to expansive, and alignment can occur
along quantum points along these carriers of incredible size.
There is no
separation; if there were, then space could be infinitely divided and would lose
itself as a point of reference. Instead, there is a finite amount of
information from which each quantum moment is defined. This includes an evolving definition of
curvature (pi) which is defined at any value of x.
Fundamentally there
is no separation except between the portions of the equation making up
solutions, but this is sufficient separation to allow quantum moments to be
separated. These each change
incrementally but sum together, which yields sufficient change to
show/require velocity.
The big bang is an average inflection point, one of a very
large number of those corresponding to intersecting offset spirals which
approach but never achieve equal size or alignment between the positive and
negative element and the convergence and divergence is reflected in pi type infinite
converging series. It is a big bounce, but a defined big bounce without
entropy or randomness.
Building an algorithm-the battle
for Time: chart of evolution (coe)
EHT went down the wrong path in assuming that the universe
arose from a finite amount of information allowing a consistent definition of
pi and that any point in the universe was merely the expression of the
information for a value of time. That analysis fell apart because it
required the universe collapse and that would have required that there be a
fixed point of reference, a beginning for this information which is
contraindicated except by the fantasy of the big bang.
In fact, the big bang, like all other superstitions, became
the last bastion of manmade religion. I do not deny god; AuT theory falls
apart, dimensionless, in G-space just as other theories fall apart at the big
bang or some other place.
It was
then determined, not precisely accurately, that the universe must be an
information-based system, because the observed results of NLC (the
predecessor) indicated a universe with exponential construction. While
information played a role in that determination, it was later discovered that
the system was not only information, which would have allowed Einstein to
be correct, but was based on an algorithm upon which the information was built.
Early
on, the spirals of gravity (which lead to the orbits we observe) forced the
theory to focus on equations that resulted in curved spirals, and these can be
seen in the early work. Einstein again got in the way as the fixed
information ideas began to conflict with visions of infinite series brought on
by positive and negative curves. Where did these originate? The
idea of intersecting spirals came very early because they allowed the
“false-finite” information theory to cycle through positive and negative
spirals.
But a
quantum universe begged for a different result and before long the Fibonacci
series of Leonardo Pisano (the Pisa, Italy native whose father Guglielmo
Bonaccio somehow led to the name Fibonacci) began to attract my
attention. It did not present curves, but it did present two things,
self-generation and a spiralish, quantum result. The curved universe and
the quantum universe waged a quiet war in my mind.
Since pi
had always required a quantum solution if the theory had a quantum
underpinning, curvature could be created by averaging the F-series quantum
results. With fixed information of Einstein, pi had a fixed result which
did not work well. Something was happening in the universe and the
F-series suggested the result. This diverging series defined fixed limits
with intersecting spirals but picking which fixed limits applied was
problematic. I even calculated how much information would be in each
spiral to arrive at our current universe resulting in a very strange
spreadsheet which remains in one of the old posts in part and continued to
bother me. Something was wrong with a fixed information system although
the idea of converging positive and negative time states set out intriguing
results which worked well with the idea of successive big bangs.
X, the
single variable of AuT and Time co-existed for a while then, less than a
year. The problem with time was that it changed differently than x.
The two refused to peacefully co-exist. Problems were particularly
troubling with time dilation which I knew held the secret. I stumbled
across a corollary of the F-series (Fibonacci series) then which provided a
mechanism for “compression” which was that consecutive F-series continued to
function as F series as long as the “base” number system (base 10 for most of
us) was eliminated. Time’s days were numbered, x was rising preeminent in
the battle.
It
only remained to come up with a system whereby time was defined by x, with all
of time's sloppy features. The curved model seemed to indicate that we
were only 5 F-series out, but the spreadsheet indicated that there was not
enough information present.
To
reconcile these Time had to be caged and eliminated. To reconcile Einstein’s
error with actual observations, to reconcile quantum states with the appearance
of history, dimension and velocity at quantum moments some new way of looking
at space time was required.
The
F-series supplied an answer; because the F-series defines both points of change
and intervening states which are called respectively carriers and quantum
points for lack of better nomenclature. That will come with time.
It still worked poorly, but the relativistic features were all there. It
had already been established based on the information-based building of time
states. The first book determined that exponential information states had
something to do with the universe, although it was only much later that the
F-series-type stacking was seen to yield the more accurate results, which were
clouded by the coincidence of the base-10 number system and the 2, 3, 5 ct4 F-series.
Reconciling the observed compression of black holes with the mathematical
results cleared this up with finality: the F-series for 5 (3, 5, 8) yielded 16^2^5, or
16^32, which is the observed minimum size for a black hole.
Space,
time and velocity had finally found their place. They were merely the
relativistic effects of stacking one ct state on top of another. At
nearly the same time it had been determined that movement had to be the
substitution of one ct1 state for another within the ct2 matrix (dx/dt dictates
that movement is a change in space), so all the features were already there, and
it was easy to find real-world models of beaded strings yielding 3-dimensional
weaves. In fact, I used to deal in 3-dimensional bead weaves, an odd
coincidence. Of course, illusory coincidence is built into this pure
system.
And
there it was: everything that every physicist had clung to fell apart.
Space-time disappeared. The universe has no dimension. Perceived
dimension is nothing but the relativistic change of ct2 to ct1 and the succeeding
relativistic changes that result from F-series stacking. Likewise, space
curvature is only the averaging of results, which explains why space appears to
curve more in high-gravity areas.
The
theory is finished except for minor details; it appears that all things, to some
extent, are finished.
The incomprehensibly long
carriers of ct3 that give rise to ct4 have foreseeable but unusual effects:
1)
Matter exists in stable forms over very long periods of time. With (and possibly without) stable
substitution of ct1 states (matter not decomposing to energy because shared ct1
states keep it from dissipating), a given ct4 state may exceed the 40 billion
year cycle of the current universe, i.e. even at one change per 1.07x10^-37
of a second, some of these carriers may (individually or through substitution of
common ct1 states between matter) be so long that there are 40 billion years
worth of seconds before the next "net" turn in the ct3 carriers forming the ct4;
2)
Matter forms large compressive cycles sharing large successive ct1
states so that it appears to move through space through a sharing of ct1 states;
3)
Ct1 state substitution (sharing) is so immediate that (a) states
are shared to give rise to vibration between the same ct4 states, giving the
appearance of solidity, and (b) common movement through space;
4)
This is in contrast to waves (and more so with photons), where
sharing is in two and one dimensions respectively, giving rise to the appearance
of movement in one direction at the maximum substitution rate of 1:256 for
waves and in all directions for photons. Both waves and photons, in the
presence of ct4, experience such compressed sharing of ct1 states that the
bending of apparent space changes the sharing rate, stopping absorbed ct3
and ct2 states in a manner where "absorption" equals a longer-term sharing of
common states, causing a common movement through space while not requiring a
change in substitution rate; this phenomenon (absorption) probably occurs when
matter, waves or light are in the presence of black holes;
5)
All forms of clock time experience an end where they break into
lower ct states; this is experienced with stars when they explode and in other
examples of state transformation on a smaller level. Presumably unstable black holes would exist
for the same reason unless a cycle of compression comes to exist where they
continually recycle shared ct1 states to prevent a net localized change.
Each of these phenomena should
be discussed separately to get a better understanding of the universe.
AuT: 14 continued - Very Long Carriers
1)
Premise 1 from 14: Matter exists in stable forms over very long
periods of time. With (and possibly
without) stable substitution of ct1 states (matter not decomposing to energy
because shared ct1 states keep it from dissipating), a given ct4 state may
exceed the 40 billion year cycle of the current universe, i.e. even at one
change per 1.07x10^-37 of a second, some of these carriers may
(individually or through substitution of common ct1 states between matter) be
so long that there are 40 billion years worth of seconds before the next "net"
turn in the ct3 carriers forming the ct4.
The maximum length of a ct1
state is exponentially longer than the value of x because stacking the prior
two states to form the current state allows for lengths exponentially greater
than the age of the universe.
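The scale of "40 billion years worth of seconds" can be checked with straightforward arithmetic; the numbers below are derived only from the figures quoted above (the year-length convention is mine):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 seconds per Julian year
years = 40e9                             # the 40 billion year cycle
dt = 1.07e-37                            # one change per 1.07x10^-37 of a second
seconds = years * SECONDS_PER_YEAR       # seconds in one full cycle
changes = seconds / dt                   # quantum changes at that rate
print(f"{seconds:.2e} s, {changes:.2e} changes")  # ~1.26e+18 s, ~1.18e+55 changes
```

So a single cycle is on the order of 10^18 seconds, and at the quoted rate a carrier would undergo on the order of 10^55 changes before the next "net" turn.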
AuT: 14 continued - Large Compressive Cycles
2. Matter forms large compressive
cycles sharing large successive ct1 states so it appears to move through space
through a sharing of ct1 states.
Concentrated changes
(adjacent, shared, vibrational ct1 changes) in linearity (gravity) are possible
from the ct4 state over a given set of ct1 states (a location as we view it)
over a certain number of changes in x (reflected as a concentrated amount of
time). That is, as more ct1 space is squeezed out by the presence of
multiple ct4 states exchanging ct1 states between them, we perceive this as
more gravity in a given location. At a
set velocity there is more time dilation because the more concentrated the ct4,
the more ct1 states must be shared during any change in x. That is, the vibrational effects must increase
or the speed (different ct1 states being exchanged) must increase.
In this way, speed and gravity
effects of ct1 solutions in proximity to ct4 affect it in a proportional
fashion. The effect exists in any single F(x)
solution and results from the change between F(x) solutions, based on the shared
interface of ct1 and ct4 in adjacent solutions: the same ct1 states
increase concentration, and different ct1 states moving through all the ct4
states create velocity. For this
reason, velocity can be increased relatively between two ct4 states by
increasing the number of ct1 states that are not shared. While this is just another way of discussing
Newtonian physics with the exact same results, one reflects the reality and the
other an illusion.
Waves and photons are
sufficiently spread out that the amount of new ct1 is not crowded out. The amount of new ct1 contacting photons
(ct2) is constant, since none is crowded out by a compression solution, giving
them a constant maximum speed relative to ct1.
The same effect is present in ct3 from our vantage point, but from the
vantage point of ct3, it is slowed relative to one dimension from ct2, which is
why a fixed line (wavelength) is generated.
Presumably this "light speed" would also be a feature of ct4 as seen
from ct5, with the exception that there would be two "lines" or wavelengths
seen as linear (or maybe as a plane) from ct5, which would see itself as moving
slower than ct4, 3 and 2, all of which would seem to be moving at light speed but
with these trailing lengths or planes.
Where does the wave quality
come from? It appears clear that the
same inflection-point changes (ct1 substituting for ct2) that generate
transitions along the carriers making up the net inflection points for the
universe on average exist in waves, matter and other higher forms, so that all
would have sine-type wave functions based on the substitution inflection
points. The more perfect 1-of-256
substitution of ct1 for ct2 appears more linear to us; the ct2-for-ct3 substitution appears
as a wave; the substitution of ct3 for ct4 appears as a wavelike plane; and
presumably from ct6, ct5 would have a three-dimensional wave feature.
AuT: Defining Pi
a) Curvature changes from one
ct state to the next. The preferred formula for pi in AuT is:
Pi = N + sum(x = 2 to max x) of N/f(pix)
f(pix) = [(-1)^x] + [2x(-1)^(x-1)]
Note that this is a type of F-series (x and x-1); it is also a
reflection of following positive and negative spirals outward from a
central point.
Pi is considered to be a function of separation as well as the
overall value of x, but is driven by the primary equation.
N changes according to the amount of coordinate compression for
the given value of N.
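This formula can be checked numerically: with N = 4, f(pix) generates the alternating odd denominators -3, 5, -7, 9, ..., so the sum reproduces the familiar Leibniz series for pi, 4 - 4/3 + 4/5 - 4/7 + ... A quick sketch (function names are mine):

```python
import math

def f_pix(x):
    # f(pix) = (-1)^x + 2x(-1)^(x-1): gives -3, 5, -7, 9, ... for x = 2, 3, 4, ...
    return (-1) ** x + 2 * x * (-1) ** (x - 1)

def pi_approx(N, max_x):
    # Pi = N + sum from x = 2 to max x of N / f(pix)
    return N + sum(N / f_pix(x) for x in range(2, max_x + 1))

print(pi_approx(4, 200_000))  # close to math.pi
```

The series converges slowly (the error shrinks roughly like 1/max_x), which is characteristic of the Leibniz series.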
b) The relevance of the order
of solution to defining location is fixed by the mechanism of stacking the two
prior universes to get the next universe.
c) The closer two solutions
occur together the closer the proximity; diverging and converging spirals
ensures separate points may come together. The sharing of ct1 states in
particular ensures higher CT states can remain together for very long
periods. The longest carriers are multiples of the life of any
intermediary period between big bangs because of exponential stacking in
F-series solutions to make carriers.
d) This separation is important
because the solution for the intersection of any two spirals
of pi involves both the separation
(based on the order in which the solutions are solved) and the amount of
information possible, based on the state of the proximate information in
question. The basic equation for gravity, m1*m2/r^2, averaged over the
entire universe is relevant to the inquiry, but it is solved with a Lorentz
equation so that distant solutions can have a disproportionate effect.
Because of the density (16^32) of information in black holes
two black holes separated by galactic distances might have a greater effect on
one another than the matter in between just as two bodies of matter have a
greater effect on one another notwithstanding the wave, photonic and space
effects between them.
e) The equations
defining pi for different ct spiral states are suggested by the size
and direction of the phase shift related to coordinate change. This could
result in a reversal of the calculation process, which is not considered likely
but is worth setting out:
Ct1 -1+1/3-1/5+1/7…
Ct2 2-2/3+2/5-2/7…
Ct3 -3+3/3-3/5+3/7…
Ct4 4-4/3+4/5-4/7…
Ct5 -5+5/3-5/5+5/7…
While pi builds in this fashion, Pi = N + sum(x = 2 to max x) of N/f(pix),
when measuring it a backwards look might make more sense
given the strength of the more highly compressed ct5 state.
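Each of the ct series listed above is n times the Leibniz-type series with a leading sign of (-1)^n, so the partial sums converge to (-1)^n * n * pi/4 (my observation, verified numerically; names are mine):

```python
import math

def ct_series(n, terms):
    # partial sum of the ct-n series: leading sign (-1)^n, odd denominators 1, 3, 5, ...
    # e.g. ct2: 2 - 2/3 + 2/5 - 2/7 + ...
    sign = (-1) ** n
    total = 0.0
    for k in range(terms):
        total += sign * n / (2 * k + 1)
        sign = -sign
    return total

for n in range(1, 6):
    # compare the partial sum against the limit (-1)^n * n * pi/4
    print(n, ct_series(n, 500_000), (-1) ** n * n * math.pi / 4)
```

So the ct1 series heads toward -pi/4, ct2 toward pi/2, ct3 toward -3pi/4, ct4 toward pi, and ct5 toward -5pi/4.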
Perhaps the most important part of this, covered in more detail in
book 2, is that the electromagnetic force seems to result from ct2
being exposed to pi from ct4; the weak nuclear force (slowed emf)
is caused by ct2 being exposed to pi from ct5; and the strong
nuclear force results from ct3 being exposed to pi from ct5. The
exact mechanism for this is the variation in the geo
function for solutions where a certain concentration achieves a critical
concentration inflection point.
End 5/19/17
probably in book 2
Pi changes from one
ct state to the next, at least in theory.
The
relevance of the order of solution to defining location is fixed by the
mechanism of stacking the two prior universes to get the next universe. While as a general rule the closer two
solutions occur together the closer the proximity, diverging and converging
spirals ensure the ultimate separation of points, while long common carriers, and
the sharing of ct1 states in particular, ensure they can remain together for
very long periods of time; the longest carriers are multiples of the life of
any intermediary period between big bangs because of exponential stacking in
F-series solutions.
This
separation is important because the solution to pi involves both the separation
and amount of information as well as the state of the information in
question. The basic equation for gravity
m1*m2/r^2 averaged over the entire universe is relevant to the inquiry but it
is solved with a Lorentz equation so that distant solutions can have a
disproportionate effect. Because of the
density (16^32) of information in black holes two black holes separated by
galactic distances might have a greater effect on one another than the matter in
between just as two bodies of matter have a greater effect on one another
notwithstanding the wave, photonic and space effects between them.
The equations are
expected to look much like this, although the size and direction of the phase
shift may be different and may represent the source of the offset spiral:
Ct1 -1-1/3+1/5-1/7…
Ct2 2+2/3-2/5+2/7…
Ct3 -3-3/3+3/5-3/7…
Ct4 4-4/3+4/5-4/7…
Ct5 -5-5/3+5/5-5/7…
As can be seen,
these definitions form their own versions of converging series for odd and even
results.
The resulting
curvature at any value of x will depend on the proximity of ct states, which is
determined by when they are solved relative to one another at any particular
point, as well as the type of ct state. As was mentioned, the concentrations are
such that space, photons and waves, even in very high concentrations, have less
effect than matter, and matter would be similarly less relevant to curvature
near black holes.
Space
has no relative separation so the curvature of space may be a constant and
directly related to the total amount of information in the universe.
Location
is a function of both the timing of the solution and where the result is after
curvature is applied sequentially for each quantum point based on where the
solutions were and where they will be over time. Hence the velocity (ct1 substitution) will
also affect the final result.
Pi
is affected no matter how great the separation of points based on this
location, but the result is very small at great distances.
Quantum
movement is from one fixed point to another only in one direction for all
points. This ends up being from the singularity but varies for each point in
terms of how far the solution sets the point.
Changes that result when two masses combine
are in a limited range. In such a system, the total amount of change does
not vary. It may express itself in different formats, primarily, if not
exclusively mass to energy or mass to black hole depending on the level of
concentration which is to say the level of coordinated change.
Rate of change is one. In relativity,
the interpretation of this data is that the mass has "increased" to
the point where there is not enough energy to provide additional acceleration
at the speed of light.
Two
coordinates at the same solution point have a more difficult time than one
interacting with sequential space or ct1 states at the same time.
Obviously this gets exponentially harder the more states change
together at a time; there is no room for the common interaction with adjoining
ct1 states unless the higher state begins to break up. If there is
greater concentration as a result of gravity, more ct1 states are compressed
into the same solution area, so the relative change increases. It is a
proximity solution; for this reason, all other factors being the same, a ct1
state will change faster than a ct4 state.
All
change is in one direction. We work within a middle range where we can,
as a result of the physics which controls our actions, slow down or speed up,
relative to secondary spirals/coordinates, the number of CT1 states contacted
between quantum states within this narrow range.
Relativity
envisions a fixed space-time; a fabric. AuT proves that time and space
are illusory. They are manifestations of a fixed system which does not
change except as a solution to a single variable in an algorithm that has no
time and no dimension. The singularity is the only "fabric" and
it exists in a non-linear format. Our universe is nothing more than the
illusory expression of this singularity, and saying that our universe is a
fabric is like saying the movie playing on the screen is the same as the
background where the actors are filmed to create the movie. The latter has
actual fabric; the former does not.
Under
NLT, every moment, in its entirety exists forever.
In AuT every quantum moment includes the entirety of “prior” time
and space just as for every point the algorithm contains its past in its
entirety depending on which x is being solved for in any quantum moment.
The order of the solution is not even relevant.
Gravity has nothing to do with the change in space time “fabric”
since no fabric exists. Instead Gravity is the force “generated” (the
force perceived) when time is viewed as linear in NLC for each coordinate in a
universe made up of a very large, but not infinite, number of coordinates.
Certain solutions to the algorithm generate anti-gravity for the quantum
bit of information. In AuT, the total amount of information increases
exponentially. Other solutions apparently generate neither gravity nor
anti-gravity and these are the most common solutions.
The model of AuT for the display of the
universe is decompressive spiral solutions followed by compressive spirals,
each successive quantum state being formed from three universes at a time,
F(n) built from F(n-1) and F(n-2), yielding the 3-dimensional effect.
Time may result from CT5 even though matter
is CT4, because with CT5 a point of reference is created. In relativity,
space-time is changed (bent) by gravity. In NLC the amount of gravity
(positive or negative) represents the amount of linearity in a system.
In NLC, linearity
is illusory; gravity can be seen much like the effect of lighting a slide.
The slide itself hasn't changed. One can see this quality of intersecting
spirals where spirals overlap. This
occurs both in aligned and misaligned F-series spirals, and the drawings show
both.
In AuT the slides change in successive sequentially larger
compressed combinations of systems where each system changes internally as x
increases.
This raises the question of why gravity from one body affects
forces and other gravitational bodies. The further apart the solutions, the fewer of
the spirals overlap; the closer in proximity, the more they overlap. If
gravity is a function of linearity alone, linearity between bodies is greater
with proximity. Speed suggests that the amount of CT1 contacted affects its
speed. In exponentially higher time states, black holes, the amount of
overlap is greater, and since all other time states around them are also
compressed, the increase extends over a wider area and the interactions are
greater.
The solution to the algorithm temporarily, and with
greater stability, compresses clock time states. The big bang represents a maximum
average compression and an inflection point for a given period. This
results in pockets of compression amid a general decompression of the universe.
While compression and decompression occur throughout the model,
concentrations of matter represent places within the model where more
compression is present than otherwise, and the display is consistent in this
regard.
The
quantum movement of the universe looks like a very complicated spider web of
concentrating states connected by concentrating spirals amid a general
expansion otherwise post big bang, with the spider web of spirals concentrating
in the temporarily compressed states, with varying degrees of stability based on
the exponential compression that results.
There is more "anti-gravity", the state being net decompressed, but
the decompression algorithms are not properly mapped because of a scientific
prejudice in favor of compression and because, being within a block of stable
compression states (net compressive spiral movement), we are not directly
affected by the forces outside of our compression "area." We exist where
gravity spiral solutions predominate, but in most of the universe anti-gravity
spiral solutions, away from the point of intersection, predominate.
When the universe
"creates" linearity, a quantum of gravity exists for each set of
quantum information. The information itself can be broken into different
coordinates which have a fixed sum rate of change.
As has been said, all change is equal and in one direction.
We work within a middle range of spirals where we can, as a result of the
physics which controls our actions, slow down or speed up relative to a group
of primary CT1 spirals within this narrow range, giving the
impression of Standard Clock Time (SCT) and making changes in direction by
slowing down change in one direction relative to another, with the relative
position being a higher spiral to CT1.
Equation solving
involves elimination of time in equations and the consequent revelation that
dimension is a function of the linearity of time. This is the not quite correct
presumption, by Einstein, that everything can happen at once which necessarily
means that everything happens without the separation of time and dimension.
Solving to eliminate time, then, is also solving to eliminate dimension.