Wednesday, April 27, 2016

The AuT capacitor-part V: The hard part, part 3-The Universe in a nutshell

Part III

The Universe in a Nutshell (to date).  This “brief summary” (really only the part through, say, item 15) has to be taken in context with the overall work.  The idea behind an Algorithm-Based Universe (AuT, Algorithm Universe Theory) is unsupportable without a great deal of background as to why that theory is attractive as a theory of physics.

1        Linear vs. curves, the compromise: A curve works better for a big-bang type result.  The linear F-series works better to define the period of overlap and the resulting amount of compression/capacitance and decompression/discharge.  The gradual change between multiple universes may yield a curve, so the combination of the two provides a mechanism giving linear solutions a curved appearance, but only to the point where pi is no longer solved.  In this way the total information solution to pi is preserved, based on averaging the results of linear spirals together to even out the answers.
2        At inflection points compression changes to discharge (decompression).  Compression follows a scale formula of the type: compression number = F(n-2,n-1,n)^2^n, i.e. the sum of three consecutive F-series terms raised to the 2^n power.
3        The universes build up to the point where compression of higher states can occur, but each spiral remains in place, turning, compressing and decompressing at different levels and times throughout the process.  At certain points (intersections) overlap of the spirals is sufficient that the next higher state of concentration occurs according to the state of compression.  It remains likely that there are multiple big bangs for each compression state (ct state) in order to arrive at sufficient compression to reach the next higher state.  Hence the equation for any one universe need not depend on what is happening at that state, but is merely a function of the number of spirals building and the amount of overlap and the resulting compression.
4        Our place in the universe appears to be 3-12 billion years after the end of the overlap and therefore in the decompression stage, but it remains possible, if not required by the formation algorithm, that there are overlapping compressing and decompressing states throughout the universe, whether during a main compression or main decompression stage of the primary spiral or spirals leading to a big-bang type event.
5        U=n!, where U is the number of quantum points of information in our (each) universe at any point, made up of n! spirals from a building model where each step adds a spiral according to the F-series.  The size of our universe at any point in time, in terms of quantum units, is equal to the number of spirals that make it up.  This suggests that many spiral universes almost as big, with almost as much information as ours, exist, but that each is defined by only two dimensions.
6        The universes are defined by the linear F-series spiral sin function (LFSSSF), which can be solved as x goes from zero to infinity, but for smaller spirals presumably stops at a fixed number of spirals, from 1 to the highest number for the universe under regard, based on the maximum value of x being solved for in the algorithm.  A distinct coordinate is solved for each value of x for each LFSSSF, and these sequentially (as x changes sequentially) define a quantum point in our universe at any quantum moment.  The LFSSSF being: Integral(from 0 to T) of [g*sin((pi/2)x)*(FseriesFunction)^2^x - g*sin((pi/2)x)*(FseriesFunction)] dx, where the FseriesFunction is equal to sum(n-2,n-1,n), at least for ct3-4 and ct4-5.
7        The solution changes at overlaps, so that any solution must solve for both the positive and negative spiral, and at the point where the two intersect you have to define compression and negative entropy.  Afterwards you have to define decompression and entropy.  This means that to the equation in the preceding item you have to add the “information capacitance” and “information discharge” equations into the LFSSSF algorithm, depending on whether there is intersection or not at any quantum point.
8        Begin with x=0 and run to the total information in our universe (as compared to the next levels of universes).
9        Compression occurs at a point where the prior universes come together in a way that allows it; this compression may include all subset states, but compression to the next higher state is only achieved when there is sufficient concentration of the next lower state.
10   The equation for compression yields a maximum of 3%, based on the amount of overlap of other states and on an equation related to the total amount of the next lower state.  This 3% will steadily dissipate to a lower percentage after compression.
11   Beginning at x=0 and ending at the total amount of information in any universe, compression occurs at a point where the prior universes align states to allow compression of the next higher state, with succeeding states requiring exponentially more information; other lower states of compression also occur during the points of overlap.
12   The likely equation for overlap, deriving from the voltage capacitance equation, is of the type below (a code sketch follows this list):
13   V=I(tot(ctx))(1-e^(-t/RC)), where RC represents a constant (as opposed to actual resistance and capacitance; it would be related to the total amount of information available for compression); V represents the amount of compression within the parameters set, and I(tot) is the total amount of information.  There is such an equation for each type of information in the prior state (so in a ct5 universe x would range from 1 to 5), the total amount of compression is defined by RC, and t is a length along the intersecting spirals being compressed.
14   Discharge and decompression would similarly be defined by V=I(tot(ctx))*(V(x)/RC)*e^(-t/RC), where V(x) is based on the amount of compression before the overlap ends and t is the length of the spiral after the overlap.
15   In order to get to a universe with the complexity of ours, built on solutions to twin intersecting linear spirals which are out of sync, the building process (F-series) for spirals should be the same as the spirals themselves.  Adjacent spirals would give adjacent solutions, so it seems probable that the building of the spirals occurs at the same rate as the propagation of the spirals.
16   There are several ways to approach this, but the one that seems best suited to observation is an exponential building as opposed to a pure F-series construction, and this building would occur with each spiral quantum change, that is to say, with each change in x.
17   Using F as the designation of a spiral, it would look something like this:
x-1   F1
x-2   F1', F2, where the ' stands for the first change in solution.  F2 is being solved for at x-2, but it is still being solved there for its beginning equation, i.e. F1 is rotated one unit relative to F2.  This can be accomplished by having the equation for F2 use x-n, where n is 1 less than x.  This same type of change would follow for other spirals, as shown by the next change:
x-3  F1'', F3, F2', F4:  Here the exponential change is the same as the exponential building from the intersection of spirals shown in the drawings below.  In this case F3 springs from F1 and F4 from F2.  The solution for x in F3 and F4 would be x-n' where n' would be 2 less than x.
x-4 would result in F1''', F5, F3', F6, F2'', F7, F4', F8, with 5, 6, 7 and 8 having an n'' predictably 3 less than x.
18   This fairly simple model would allow for incredible complexity even at this level with the spirals interacting and offset.  As you get to the higher concentration states of each spiral the changes would become more radical.
19   Compression to photonic and wave energy might occur relatively early in this model, but given the need for proximate solutions this result might not be allowed to occur as a stable state.
20   One reason for the lack of compression would come from early state changes adjacent to later state changes.
21   The point where the spirals began to turn would be almost immediate, but something should be present to generate the incredibly long spiral arms, or lengths before turns, that we experience, i.e. spirals that go through billion-year compressions and billion-year re-expansion/entropy periods.
22   Since there is no need to have a solution with a fixed amount of data for the algorithm, but there is a need for a fixed amount of information for any universe, it appears that the largest spiral has a solution that is defined by the lesser spirals which make it up, but without the effects of the larger spirals of which it would be a part.
23   The size of the spirals in this model would not be limited, and perhaps the easiest way to grow them would be to extend them by a quantum length with each rotation.
24   In this way you would have the following result in terms of length:
0,1,1,2,3,5
1,2,2,3(4),4(6),6(10) etc with the parenthetical number being an alternative result.
25   In this way the length of the spirals grows exponentially for each spiral.
26   The F2 spiral would appear as follows relative to the F1 spiral in terms of length after this first period:
F1 1,2,2,4,6,10, etc
F2 0,1,1,2,3,5, etc
27   This growth pattern in terms of length would continue until there was sufficient concentration for a state change, at which point the first 90 degree turn (based on sin(pi/2)) would occur.
28   The exact method of building is uncertain, but this could conceivably result in extremely long spirals before even the earliest turn began, notwithstanding the fact that turns would necessarily have to occur for exponential compression using capacitance-type interactions.
29   While one thought might be that this would take a long time, that is not relevant, since the solutions to any spiral equation are instantaneous.  The size of the spiral arms should be defined by the number of spirals down to full concentration, or the amount of information in any spiral arm.  For example, if we knew that we were 10% of the way toward decompression, we'd know the length of the post-compression arm, could figure out the total length, divide that by 10^-47, and have some idea of the total amount of information in the universe.
30   Likewise, using gravity to figure out the total amount of information in the universe via a quantum value of gravity, we could figure out how far along we were on a given arm, given the amount of information in each state.
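To make items 12-14 concrete (this is the code sketch referenced in item 12), here is a minimal Python toy.  Every number in it is an assumed placeholder, not a derived value: the overlap length stands in for the capacitor's charging time, compression builds as V=I(tot)(1-e^(-t/RC)), and after the overlap ends the unstable portion discharges, holding back the 3% stable fraction from item 10.

import math

I_TOT  = 1000.0  # total information available for compression (arbitrary units)
RC     = 20.0    # the constant from item 13 (assumed; stands in for R times C)
T_OVER = 53.0    # length of the overlap (the 53% figure used in these posts)
STABLE = 0.03    # item 10: at most roughly 3% becomes a stable higher ct state

def compression(t):
    """Charging phase: compression built up at length t along the overlap."""
    return I_TOT * (1.0 - math.exp(-t / RC))

def discharge(t_after):
    """Discharging phase: decompression at length t_after past the overlap,
    starting from whatever was compressed, minus the stable fraction."""
    unstable = compression(T_OVER) * (1.0 - STABLE)
    return unstable * math.exp(-t_after / RC)

for t in (0, 10, 25, T_OVER):
    print(f"overlap length {t:5.1f}: compressed {compression(t):8.2f}")
for t in (0, 10, 25, 50):
    print(f"post-overlap  {t:5.1f}: still discharging {discharge(t):8.2f}")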

These rough equations form the basis for a universe of the type we experience.  Given the long-term effect of even a small change in any spiral in this setup, the complexity rises to the equivalent of what we experience, and it can be assumed that as I(tot) increases for all clock time states, that is, as the next universe after ours is solved for by the algorithm, the universe could change substantially from ours in everything other than the underlying method by which it is built/solved.



Monday, April 25, 2016

modeling AuT for sufficient information

In order to get to a universe with the complexity of ours, built on solutions to twin intersecting linear spirals which are out of sync, the building process (F-series) for spirals should be the same as the spirals themselves.  Adjacent spirals would give adjacent solutions, so it seems probable that the building of the spirals occurs at the same rate as the propagation of the spirals.
There are several ways to approach this, but the one that seems best suited to observation is an exponential building as opposed to a pure F-series construction, and this building would occur with each spiral quantum change, that is to say, with each change in x.
Using F as the designation of a spiral, it would look something like this:
x-1   F1
x-2   F1', F2, where the ' stands for the first change in solution.  F2 is being solved for at x-2, but it is still being solved there for its beginning equation, i.e. F1 is rotated one unit relative to F2.  This can be accomplished by having the equation for F2 use x-n, where n is 1 less than x.  This same type of change would follow for other spirals, as shown by the next change:
x-3  F1'', F3, F2', F4:  Here the exponential change is the same as the exponential building from the intersection of spirals shown in the drawings below.  In this case F3 springs from F1 and F4 from F2.  The solution for x in F3 and F4 would be x-n' where n' would be 2 less than x.
x-4 would result in F1''', F5, F3', F6, F2'', F7, F4', F8, with 5, 6, 7 and 8 having an n'' predictably 3 less than x.
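This doubling rule is simple enough to state as code.  A short Python sketch (the prime-counting bookkeeping is my own, not part of the theory) reproduces the x-1 through x-4 rows above exactly:

def step(spirals, counter):
    """Advance one change in x: each spiral gains a prime (') and spawns a new one."""
    out = []
    for name, primes in spirals:
        out.append((name, primes + 1))   # F1 -> F1', F1' -> F1'', and so on
        counter += 1
        out.append((f"F{counter}", 0))   # newly spawned spiral, e.g. F3 from F1
    return out, counter

spirals, counter = [("F1", 0)], 1
for x in range(1, 5):
    print(f"x-{x}: " + ", ".join(name + "'" * p for name, p in spirals))
    spirals, counter = step(spirals, counter)
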
This fairly simple model would allow for incredible complexity even at this level with the spirals interacting and offset.  As you get to the higher concentration states of each spiral the changes would become more radical.
Compression to photonic and wave energy might occur relatively early in this model, but given the need for proximate solutions this result might not be allowed to occur as a stable state.
One reason for the lack of compression would come from early state changes adjacent to later state changes.
The point where the spirals began to turn would be almost immediate, but something should be present to generate the incredibly long spiral arms, or lengths before turns, that we experience, i.e. spirals that go through billion-year compressions and billion-year re-expansion/entropy periods.
Since there is no need to have a solution with a fixed amount of data for the algorithm, but there is a need for a fixed amount of information for any universe, it appears that the largest spiral has a solution that is defined by the lesser spirals which make it up, but without the effects of the larger spirals of which it would be a part.
The size of the spirals in this model would not be limited, and perhaps the easiest way to grow them would be to extend them by a quantum length with each rotation.
In this way you would have the following result in terms of length:
0,1,1,2,3,5
1,2,2,3(4),4(6),6(10) etc with the parenthetical number being an alternative result.
In this way the length of the spirals grows exponentially for each spiral.
The F2 spiral would appear as follows relative to the F1 spiral in terms of length after this first period:
F1 1,2,2,4,6,10, etc
F2 0,1,1,2,3,5, etc
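One way to read the two alternatives (my interpretation; the post leaves it open): the plain numbers extend each prior F-series length by a single quantum unit per rotation, while the parenthetical numbers double it.  A short Python sketch of both rules follows; note the first entry of the doubled row comes out 0 where the post shows 1, so take it only as one reading:

def fib(n):
    """First n F-series lengths: 0, 1, 1, 2, 3, 5, ..."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

seed = fib(6)                          # period 1: 0,1,1,2,3,5
extended = [v + 1 for v in seed]       # one quantum unit added per rotation
doubled  = [v * 2 for v in seed]       # the parenthetical alternative results

print("period 1:           ", seed)
print("period 2, extension:", extended)   # 1,2,2,3,4,6
print("period 2, doubling: ", doubled)    # 0,2,2,4,6,10
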
This growth pattern in terms of length would continue until there was sufficient concentration for a state change, at which point the first 90 degree turn (based on sin(pi/2)) would occur.
The exact method of building is uncertain, but this could conceivably result in extremely long spirals before even the earliest turn began, notwithstanding the fact that turns would necessarily have to occur for exponential compression using capacitance-type interactions.
While one thought might be that this would take a long time, that is not relevant, since the solutions to any spiral equation are instantaneous.  The size of the spiral arms should be defined by the number of spirals down to full concentration, or the amount of information in any spiral arm.  For example, if we knew that we were 10% of the way toward decompression, we'd know the length of the post-compression arm, could figure out the total length, divide that by 10^-47, and have some idea of the total amount of information in the universe.
Likewise, using gravity to figure out the total amount of information in the universe via a quantum value of gravity, we could figure out how far along we were on a given arm, given the amount of information in each state.

Sunday, April 24, 2016

The AuT capacitor

I didn't do it on purpose.
You're going to think I did
but I didn't
Yes I looked for a model
and I found a model
and the model
involved spirals
and a capacitor
but it's not a flux capacitor
It was just a coincidence
like everyone deciding
to walk backwards
at the same time at once
like two people
finding each other
out of everyone else
in the whole world
at the right time
and the right place
there is no flux capacitor
but that doesn't mean
that there is no reason
not to have faith
there are too many
coincidences
in heaven and hell
the one that we share
for there not to be
something
which is much greater
something
which for all the logic
and all the science
heartache and sadness
and despite all the futility
something calls for
a leap of faith.

The AuT capacitor-part V: The hard part, part 2

Part II
          Before getting to the section that I call the Universe in a Nutshell, it is important to note that the concepts behind an algorithm-driven universe are the only ones that matter.  Getting specific with the model at this point in time is not pointless, but the modelling will require comparing theory to observation.
          There are key elements to the models:
1)     The models are based on the concept that an algorithm must self-generate to produce enough information and complexity to define a universe as diverse as ours.  This implies a surprising simplicity, as well as a combination of simple states to arrive at something supremely complex, indeed infinitely complex in the infinite universe spirals envisioned.  This does not rule out intelligent design; it merely states that from our perspective the most likely way for a universe to arise is from some spontaneous event.  The very existence of some state that precedes linearity (dimension and apparent time) and the ability of that state (g-space) to serve as a chalkboard for the algorithm is the mathematical equivalent of a god saying “let there be light”, because the ability to support the algorithm is creationistic and god-like, even if it is a god without intelligence as we envision it.  Indeed, any being that occupied g-space would have to think in a non-linear way which would defy our intellect.
2)     The models must reflect what we observe and must drive physics as we observe them.  The algorithms must define all the states of clock time that we observe (space, photonic and wave energy, matter, black hole material) and their interactions as we observe them and as they are predicted to have functioned in the past.
3)     I take shortcuts and make assumptions that do not necessarily hold, even on a superficial level, to make a point.  The idea of defining what happens at intersecting spirals using a capacitance or Bernoulli/wave type analysis, only because a similar model can be found somewhere in our observation of the universe, is a good example.  We predict there is build-up and discharge because we observe it, and this is a model of compression and discharge.  Whether or not the math works well, it is one of many attempts to smooth out a very rough conceptual model, as opposed to a smooth mathematical model.
4)     The gradual change between multiple linear universes envisioned in AuT as opposed to the single change of a single linear spiral, might yield, as they are totaled, a curved solution which works better for some observations than others.  It is another way of “smoothing out” what would otherwise be a rough model.  For example, at the "big bang" and at transitions between inflection points, our universe may not have to change as abruptly (and tragically) if the transitions overall define curves due to the stacked universes that are combined in order to obtain a solution.  In the linear to “smoothed out” curved model, we’d have a gradual transition but there would still be a single inflection point where the universe would go from expanding to contracting with little expansions and contractions within the sum total of space becoming compressed again.
5)     Choices are made, often at random.  For example, I have decided to build from smaller to larger spirals instead of spirals which all change in the same way.  In this way you’d have a spiral with one overlap, one with two, one with three, and then you’d have these with different amounts of information: two bits, three bits, six bits a dollar.  The idea that we have our universe spiral with over 10^100 bits and other universes that might have fewer than 10 bits in the entire spiral is only a way of designing a changing model.  There is no reason why at any solution to x each of the universes is not identically the same length.  In such a case, the only thing that would change is where the solution is occurring along the length of each.  Deciding that a single x solves for all the universes, giving rise to solutions “at a point”, is also a choice.  Solving these algorithms in different ways, or even having evolutions in the algorithm(s), remains as likely as not.  Volume II will have to deal with changes in the algorithms.
Capacitance and compression
When we look at capacitance, we are only looking for a mathematical model, not electrical capacitance.   So we're going to go a little further into this capacitor theory of spiral intersection.  What we're doing here is adding to the algorithm that defines the intersection, to include what happens at the intersection in certain circumstances.  We're also going to discuss why this happens the way it does, to some extent, and the models in three-dimensional space that define 'inflection points' where intersection yields to non-intersection, which are not necessarily the equivalent of the very different places where the sin function changes the direction of the algorithm.

We have defined the lengths of these overlaps, and these defined lengths will give us a definition of "time" for the capacitor equations between t=0 and t=end of the overlap, but we can solve for any quantum moment, thereby allowing us to predict the future or the past at any point.

To understand this, we have the equivalent of time, which is the length along the point of intersection.  This generates a stacking of spirals which we're going to compare to the current across a capacitor.  Don't mistake this for actual current, because we're not going to have the same effect.  What differences?
Well for one thing, we're going to achieve stable higher states, not necessarily in every spiral intersection, but in the big bangs we're going to have the next higher time state as well as some significant discharge.

So let's take this analogy down the road a little way, and bear with me, because we're going to be doing lots and lots of substitutions to get from the real-world power source to the algorithm power source of AuT.

Charging circuit: V vs. t is a time (here, length) equation; the capacitor starts charging quickly, then slows down gradually, never quite reaching the charge of the power source.  This type of equation is a good hint that it's driven by the quantum-type features that otherwise define the universe.

V(t)=E(1-e^(-t/RC)), where E is the maximum voltage.  If you replace voltage with the amount of the prior information state, this type of formula could reflect how the intersection of two spirals looks coming together.
When the intersection stops you get a discharge circuit which looks like:
I vs. t: the current starts out at E/R and decays as I(t)=(E/R)e^(-t/RC).

To understand the obvious differences between the models, it is helpful to look at the capacitor equations as capacitors.

E is the power source.  t is replaced with length and since we're solving for points we can pick one anywhere along this equation and solve it for a quantum moment.
RC is the time constant; at t=RC the charging circuit has reached approximately 63% of full charge and the discharging circuit has fallen to approximately 37%.
When t=RC, e^(-t/RC) is 1/e.  e=(1+1/n)^n as n approaches infinity (in our case, the maximum information in the universe).
Where R is the resistance and C is the capacitance.  We're not worried about actual electricity; we're only interested in the "building" of this type of capacity.  So while we use voltage, we're actually collecting spirals, and they will, at some point (when you get to the non-overlap point, where overlap ends), hit an inflection point in the equation where the charging of spirals leads to the discharge of spirals, EXCEPT where they have become stable due to the equation for stability, which is essentially F(n-2,n-1,n)^2^n (summing those terms) for those accumulations of spirals where stability is attained.
So you derive the charging as V(t)=E(1-e^(-t/RC)) and I(t)=(E/R)e^(-t/RC), and the discharging as
V(t)=(Q/C)e^(-t/RC) and I(t)=(Q/RC)e^(-t/RC), but you don't fully discharge on one end and you discharge more than you would on the other end.  At the ct3-4 interface this 'discharge', to the extent there is a discharge, is our friend e=mc^2; but, of course, we have the other constant previously derived for ct4-5, where the factor (of course being F(n-2,n-1,n)) is M(matter)=BH(black hole stuff) x q13^2^5, where q is a constant that makes up for the difference between the constant for the speed of light and this new constant.
Q=EC(1-e^-t/RC)
The capacitor begins to push back against the power source when it begins to charge.

We're going to need some information measure in place of V or Q, according to the exponential building of exponential amounts of information.

I = sum(n=0 to infinity, or to the total information in the universe) of n!, i.e. sum n! = 1 + 1 + 2 + 6 + 24 + ... out to n.
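A quick sketch of how fast this information sum grows (plain Python; the cutoff N is an arbitrary stand-in for the total information in the universe):

from math import factorial

def information(N):
    """I = sum of n! for n = 0..N (0! counts as 1, matching 1+1+2+6+...)."""
    return sum(factorial(n) for n in range(N + 1))

for N in (1, 2, 3, 5, 10, 20):
    print(f"N={N:2d}: I = {information(N)}")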

The next post will include: The Universe in a Nutshell.


Saturday, April 23, 2016

The AuT capacitor-part IV: The hard part, part 1

AuT-Developing a model for the intersection of spiral using concepts of compression and capacitance
In investigating the concepts of capacitance and compression to examine the intersection of spirals, two models from o-space were examined.  The one that I hoped might provide more guidance was the magnetism model; the capacitance model seems to work better, but both will be discussed in some detail.
It is important in looking at these models to understand that electricity has no place in quantum mechanics.  Just as the spiral of gravity led by a very indirect route to intersecting linear spirals as a quantum model for discussion purposes, so also does an examination of what happens during spiral intersection lead to looking for a real-time model for an algorithm that will work for discussion purposes.
When I say “for discussion purposes” what I mean is that none of these is an answer.  We’re merely looking for some foundation for the derivation of a single algorithm with a single variable that can explain a very complex and diverse universe which we experience.
A big problem with all of these models is that most of them solve for the wrong variable.  The F-series linear spiral model doesn't really solve for anything; it is a pure algorithm converting a variable, x, into a location along the spiral.  Intersecting F-series linear spirals were necessary in order to have compression, and they introduced a problem because at the point of intersection there was no clear explanation as to what happened.
There were several ways to approach this intersection and a few of them were examined.  The wave formation/Bernoulli model is nice because it allows for accumulations of spirals of the type that could lead to stable quantum states.  The problem with that model is that it doesn’t work well with observed phenomena.
Magnetism has an intriguing model, but it is only covered superficially because it doesn't seem to work as well with observed phenomena.  This is not to say, however, that those models of "graphing information" have no place.  Wave formation shows stacking spirals, and magnetism shows the conversion of one type of energy (electricity) into another (magnetism) and back again, which is a model that works well with stable and non-stable CT states.
Friction is a pretty complicated phenomenon that involves the accumulation of a lot of tiny forces at the microscopic level, which result from effects at the numerous contact points between two objects and it also has some relevance in looking for an acceptable model.
What we also need to look for in these models are inflection points, points where the answer to the algorithm shifts.  There are built-in inflection points (where the intersections of the spirals begin and end), but what happens within the spiral should have a different set of inflection points where, for very small amounts of what we'll call time capacitance (TC), it becomes stable.  TC becomes stable where the exponential equation is satisfied, and for each intersection this is related to the last highest and next highest CT state: e=mc^2 for the ct3-ct4 boundary, an equation felt to be quantitatively 16^2^16 for the ct4-ct5 boundary.  This does not have to rule out lower stable compression formation at each state; for example, at the ct4-ct5 boundary you can also have the lower e=mc^2 type of compression occur, and observation almost requires this to be the case if you use the capacitance/compression model.
The model of capacitance is a function of a single curve generated from an analysis of the change over time relative to the resistor and capacitor.  The solution to this analysis is usually graphed as either voltage (over time) or current (discharge, over time):
V(t)=E(1-e^(-t/RC)) and I(t)=(E/R)e^(-t/RC)
The beauty of these two parts of the model is that the former provides for a steady build-up (actually it begins low, climbs quickly, and then slows as it builds toward, but never reaches, a maximum), which would be expected while two spirals remain in overlap, and the latter (I(t)) allows for a steady discharge, which we see as the current state of entropy in our universe (again, very high initially and slowly decreasing but never reaching zero).  One possible problem with the model is that the intersecting spiral model provides for a longer period of "charging" the capacitor than the period of discharging.  This is a problem which is actually solved by observation.
Observation solves the problem by providing that a certain amount of the "charging" is made "stable", so that the discharge period is necessarily shorter.  By way of example, in our current observed universe:
We have an intersection roughly modeled on V(t)=E(1-e^-x), where x is t/RC.  However, within this model, a certain amount (approximately ½ of 47% from a linear perspective: 53% overlap vs. ½ of the remaining post-overlap) is "trapped" in a higher stable CT state.  Note that if you follow the spiral in the opposite direction you get an opposite but equal result.
Hence, once we add in compression, we get a model consistent with observation.   This is not to say that the intersecting spiral model and the capacitance model fit together to perfectly define the equations for overlap, or that the post-intersection spiral and discharge state perfectly define the state in which we find ourselves.  That would be nice, but we're not there yet.
A perfect design of the spiral would require a more perfect understanding, but these models give us tools with which to study how this occurs.
Similarly, trying to figure out the point where stable compression occurs is fairly complex, especially if you use the model of stacked solutions to quantum points of spirals shown before, where there are spirals of different lengths for every quantum point in our universe at a quantum instant.  This is the easiest model to envision coming into existence "spontaneously", and we'll call it "the near infinite spiral" model (NIS).
The point of intersection where V(t)=I(t) is a likely place to start in the analysis; it usually appears near the halfway point, but given the variables involved this location is more likely a coincidence than a point where the solution occurs.  The reason that it has any value is that it seems to bear some "resemblance" to the relationship of 53% to 23.5% in some of the simpler models, allowing you to have a conversion of 26.5-23.5, or 3%, of the lower states to the new higher ct state.  While this does match some of the simple models, the actual concentrations observed (e.g. 1/1,000 of matter is black hole material; the seemingly near-infinite ratio of energy to matter; the seemingly even larger ratio of space to energy) seem to suggest a different result.
Still, given the fact that the 3% which would have existed 11 billion years ago, when discharge began, would be largely dissipated by the time we reach our current place on the spiral, it is not to be entirely ruled out.  That is, the 3% at the point of what we've called "the" big bang could have, due to discharge/decompression/entropy, changed to the 1/1000th that we observe today.
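For what it's worth, the crossing of the two normalized curves can be computed directly: with the charge curve 1-e^(-x) and the discharge curve e^(-x) (x being t/RC), they cross at x = ln 2, where both sit at exactly 50%, the "halfway point" being compared to the 53% figure above.  A minimal check in Python:

import math

charge    = lambda x: 1 - math.exp(-x)   # normalized charging curve
discharge = lambda x: math.exp(-x)       # normalized discharging curve

x_cross = math.log(2)                    # solve 1 - e^-x = e^-x  ->  x = ln 2
print(f"curves cross at x = t/RC = {x_cross:.4f}")
print(f"value at the crossing: {charge(x_cross):.4f}  (i.e. 50%)")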

  It is not an ending place, because accepting the near infinite spiral model (NIS) you are liable to have many different points where V(t)=I(t) as solutions to many intersecting spirals during the period in question, and you will have intersections at all points during the universe; i.e. you will have mini big bangs (capacitance followed by discharge) going on constantly.  What makes a notable big bang different can only be the solution where a lower CT state becomes a higher stable CT state and the discharge afterwards.  Fortunately, or not, this appears to be observed.

Friday, April 22, 2016

Putting off the hard stuff

I am apparently down to one or two of the more challenging posts.  I'm still shooting for an early May publication date for Volume 1 of the second edition.  That will mean leaving a lot of the math for the second edition, but it will give the newest version of the entire project in one fell swoop and will put me in a position where a fully formed theory going far, far beyond the big bang is a reality, not that it doesn't appear in a more disjointed form in this blog.
It will have a lot more consistency.  The current first edition is not even legible to me now.  I have moved on.
It is very green today. The pool is green and contains a pair of my glasses I dropped into its murk while I wasn't paying attention, hadn't prepared enough.  Preparing and paying attention would have simply meant removing the glasses from my pocket before I bent over.
  I've started the process of making the pool clear again, but before I do the next step I have to go into that now highly chlorinated bog and try to find my glasses or write them off, something I'm not willing to do without trying.  It will be the second time I've gone in there this year.  The first time it was still quite cold and it took my breath away as I dove deep to retrieve one of the filter covers out of its depths.  That was enough to keep me out even as it grew warmer every day.
Perhaps I'll even do a workout although the chlorine is pretty high to stay in there.  If so, it will be the first time this year and it will have to be a short one unless I want to bleach myself.  The bells chime 8, I am late for work.
I really require a very little pool, but I think the grass and the trees are important.  If I end up moving I will almost certainly not have a pool or grass unless something unexpected shows up.
I have quite an extensive list for work.  I have to finish a grant application (not physics, although I need to work on that also once I have the second edition volume 1 published: days away...?  Certainly no more than weeks).
I can see the end of my legal work and the high tech way my partners are moving may make it less nerve wracking for me than it would be otherwise, although the problems out there certainly gave me a lot to think about on my walk yesterday evening and kept me up last night.  I like the way that other people in my past would get up and creep around at night, but I think I am like a bad drunk when I am up at night, heavy feet plodding along as I feel sorry for myself.
I have been listening to the autobiography of Mark Twain, the long one by the Mark Twain Project or whatever they call themselves.  At some point in the distant future I'll get to the part where the auto part starts.  For now it is just setting the stage, a hundred pages of thank-yous and an explanation of his problem with self-analysis, the self-loathing he felt, the failures he could not abide, which allowed him to be so funny and which I understand all too well.  He could not easily accept these things and pass them on to the reader.  He could not make his writer's eye see the events and thoughts clearly and without the need to preserve the self, as opposed to providing the raw facts.
Having just read about Grant, the references to his common work with Grant surrounding Grant's autobiography seem familiar to me.
It is time to go, even late has its limits.

Thursday, April 21, 2016

AuT-compressed quantum change and time dilation

We have seen that speed, and the sharing of more ct1 states by a higher state, give rise to time dilation.
This raises a curious question as to what happens when you have higher compressed states (231).  It is clear that a ct0 state may only change in one way at a time, presumably from non-linearity to linearity.  A ct1 state presumably can change in at least one dimension at a time (2^1 is 2, however).  Ignoring for the moment the two faces of ct1, ct2 has sufficient compression that it might be possible to imagine different compressed components changing in more than one coordinate at a time.  This would be more pronounced for the higher states.  Given the lack of separation between quantum phenomena, like photons, we can ignore, to some extent, ct1 and ct2 as having insufficient separation for multiple ct1 states to become engaged with any point source.  While ct3, wave energy, clearly interacts with more than one ct1 state at once, and while we can be certain that this interaction is with multiple ct1 states, the fluid relationship between ct2 and ct3 (possibly tied to the relationship indicated by the compression equation F(series)(x-2,x-1,x)^2^x) might eliminate this as a significant factor.  Once, however, you get to the more "stable" ct4 state ((10^2^4) or (2+3+5)^8 compression), the interaction with multiple ct1 states becomes more complicated, and only with acceleration do you obtain time dilation; spin, vibration or straight acceleration would presumably have differing effects, although they may not be measurable.
This will be explored in more detail in Volume II.
The change in ct2 and ct3 is at such a rapid pace that the effects of time dilation would likely be limited because of how much ct1 is contacted due to the movement.  Perhaps the study of "trapped light" would give unique insights, especially if compared to the study of accelerated particles approaching light speed or those superheated to get a similar effect.

Wednesday, April 20, 2016

still 3-5 years

Here is one short "fun" post.

http://www.aol.com/article/2016/04/20/stephen-hawking-black-holes-could-be-portals-to-other-universes/21348196/

Stephen Hawking, I believe, only wanted to make a metaphor.   I do think, like everyone else in quantum physics, that he is 3-5 years behind me; after all, in 2012 his book was one of the two items that ultimately led to NLT-nlc-AuT.    However, despite my thought, led by primitive physics, that black holes might all go back to non-linear space (g-space), possibly allowing movement through many universes, it is now pretty damned obvious that black holes are just more condensed information.
For those of you who haven't read the last blog, you'll note that we very well might all be living in multiple universes at once.  It is most likely.
Unfortunately, it's lonely at the top.  I am lonely, I am destitute, I am half blind, I overuse the phrase I am, I am in love, but loveless, in shape but steadily moving towards shapelessness, and I am so damned old that I don't know what I will do next.  I told someone today about what I was thinking about doing with my life and he suggested I should reconsider and retire, but that is not an option.
I'm reaching out to find something challenging to do, but who knows.  I will even go to China if that is what it takes to make things work.
I still hope to spend some of my time lecturing on physics, bringing the world up to me, but that is something that will likely have to wait till next month now, the current month coming to a crashing end, not because of what I did but because of what others didn't do leaving me to pick up the pieces.  S--t.

AuT-friction and capacitance

So here at last is one of the two dissatisfying posts on capacitance.  It is not very well edited, but if you come back to it next week, you may find it more clearly making some point or other.
Time dilation is most easily handled by having one spiral change at a rate separate from another, which allows for the observed gradual transition.
Unfortunately, those types of pre-NLC cheats, while very effective, don't work well with common point changes.
Likewise, having changes occur in one universe separately from another might give a similar effect.  However, the points must intersect at a common location; the solution has to originate from a singularity.
1 bit
2x2bits
4x3bits
8x4bits
16x5bits
Each change of the 1 bit is equal to 8 common changes of the 4-bit particles.  As the 4-bit particles accelerate they begin to change not just with the 1 bit but with multiple other 1-bit points.  Interacting with two separate 1-bit changes would cut the relative change of each one in half, as in the sketch below.
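A toy Python sketch of that halving (the level sizes follow the listing above; the rate bookkeeping is my own illustration, not a derived quantity):

# Hierarchy from the listing above: level k holds 2^(k-1) particles of k bits.
levels = {k: 2 ** (k - 1) for k in range(1, 6)}
print("particles per level:", levels)

def relative_change(shared_ct1_points):
    """If a compressed state shares its changes across several ct1 points,
    the change seen by each point is divided among them (time dilation)."""
    return 1.0 / shared_ct1_points

for n in (1, 2, 4, 8):
    print(f"sharing {n} ct1 point(s): relative change per point = {relative_change(n):.3f}")
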
The "observed" capacitance (compression followed by discharge: sudden in the case of big bangs, gradual in the form of entropy, or a combination of both) happens at intersections of spirals, and hence there are several ways for this interaction to be effected.  One obvious one is gravitational interaction.  This is complicated by the fact that mass and distance are illusory and by the fact that distance should approach zero at the point of intersection.  Another model is the frictional model.
 https://www.youtube.com/watch?v=LH1--T1g3wY
Friction is complicated because it involves all of the interactions between two intersecting bodies, unlike gravity, which is a single force (in this case linearity).  Force (measured) = mu times (mass of the object times acceleration due to gravity).  This formula works poorly in the spiral model.  If you have a spiral and an anti-spiral intersecting, presumably the force would be something like gravitational acceleration^2, but gravitation is a function of mass, and the effect of gravity incorporates mass and distance, which in quantum terms don't exist.  For this reason we need to replace gravity with the change in information, and in this case we have information changing at the same rate in opposite directions.  Presumably this interaction peels off spirals and leaves them stacked up, just like the friction of wind or water against sand making ripples or, as they are more fondly known, sand dunes.
Fortunately we exist in a time where everyone has done everything at least once, so we have some derivation models for sand dunes.  http://eprints.maths.ox.ac.uk/764/1/cocks.pdf and others for fluids http://www.maths.adelaide.edu.au/yvonne.stokes/pdfFiles/AFMC.pdf or http://link.springer.com/chapter/10.1007%2F978-3-540-89465-0_371#page-1 and my personal favorite ftp://ftp.math.ucla.edu/pub/camreport/cam13-51.pdf
The creepy nature of things provides that the formation of dunes occurs in a spiral fashion, but that is more a function of gravity than something otherwise indicative of the process.
The equation that first appears is c (rate of movement, or velocity as I understand it) = A(v^2-B) m/s, where A and B are constants and hence not particularly relevant to our inquiry.  (Note the notation of ms^-1 used in the paper, a quaint English thing, instead of m/s.)
Other equations, using two-dimensional flow (two dimensions are all we need for intersecting spiral theory), use eddy viscosity and vorticity.
An interesting equation uses s, the height of the bed relative to some reference height; n, the porosity of the substrate (in this case the spiral being intersected); q, the bedload transport rate; and fs, the source function corresponding to the exchange of sediment between the fluid and the bed.  Del is a two-dimensional (x and y) differential operator, Del=(d/dx, d/dy), and the overall equation is (1-n)ds/dt + Del·q = fs.
This type of equation is of interest to us because instead of a fluid and a bed we are looking at grains of higher information concentration within a thin concentration of space.  We are not going to find a matching model, but we are looking for the "type of equation" that works in an intersecting spiral capacitance/compression scenario; a toy discretization follows below.  Fortunately, the compression is a relatively small part of the equation.
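Here is a minimal one-dimensional discretization of that bed-evolution equation in Python, just to show the "type of equation" in motion; the grid size, porosity and transport profile are all made-up toy values:

# 1-D Exner-type update: (1 - n) ds/dt + dq/dx = fs
N, dx, dt = 50, 1.0, 0.1
porosity = 0.4                         # n, porosity of the substrate
s  = [0.0] * N                         # bed height relative to the reference
q  = [0.1 * i / N for i in range(N)]   # toy bedload transport rate
fs = [0.0] * N                         # no fluid/bed exchange in this toy run

for step in range(100):
    dqdx = [(q[min(i + 1, N - 1)] - q[max(i - 1, 0)]) / (2 * dx) for i in range(N)]
    s = [s[i] + dt * (fs[i] - dqdx[i]) / (1 - porosity) for i in range(N)]

print("bed height at a few points:", [round(s[i], 4) for i in (0, 10, 25, 40)])
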
The major part is the compression of time followed by expansion, which can only be explained where the algorithms moving together (53% of the way) due to the collision of algorithms mean that for that 53%, necessarily, the intersected information states must be occupying the same place and must, therefore, be compressed.  How, in a point environment (and this one in particular), you get any compression at all is important to determine, but it is most likely a function, in the multiple universe model, of a sufficient number of the common time states of the different universes/spirals lining up to hit the magic number for stable compressive states.  In this regard "stable" has several meanings, because entropy (which is nothing more than the discharge of the capacitance) and radiation both show that even what we call "stable" compression is not very stable.
Likely the decompression of space, then, is the slow separation of the intersection and the slow change of the common solutions to the algorithms which occurred as a result of the collision of oppositely moving spirals.   There is also the possibility it results from spiral and anti-spiral ct1 states cancelling each other out and then slowly separating so that they each can be effective, which would simply be a spiral equation where ct1(tot) at any quantum point would be ct1(+)+ct1(-), where plus and minus refer to which spiral the ct1 time states were in.  The combining of different spiral universes in conformity during the intersection of spirals is a cleaner solution, but either can be incorporated in an algorithm that would work at least some of the time.
So, in answer to why this type of solution: because we are interested in the accumulation of spirals, and these perhaps correspond better to the movement of grains of sand than to the forces that are generated by spirals, which would have to be converted to spirals.
The math doesn't hold any immediate solutions for me.  One can almost imagine the more compressed states (ct2, 3, 4 and 5 in our case) interacting with the spiral to create a disruption of the ct1 states, causing them to spiral around the larger compressed state just as a wing passing through the air causes a disruption above the wing, giving it lift; the difference here is that the "friction" of the colliding states loads spirals onto the higher states.  CT1, being dimensionless, is affected differently from the other states, but the whole idea has to be driven by common algorithm solutions, since quantum time states are not moving and dimension remains illusory regardless of how many dimensions are incorporated in the algorithm.
A side note about reactions:
The speed of reactions is very slow; even light-speed reactions only occur at speeds that are significant.  And yet any talk of reactions is a discussion of time, and time takes on unusual properties in AuT.
Even a discussion of light speed is irrelevant, since that is a speed infinitely slow compared to the speed at which the algorithm works by definition; this is the logical proof, if not the mathematical one, that light speed is only the distance between quantum changes.  It is the difference between x=1 and x=2, nothing more or less.  The difference between x=1 and x=3 is double light speed.  Since all solutions exist simultaneously, you have the distance between infinitely distant solutions simultaneously proven.
For example, looking at the direction of travel of the spirals and talking about our perception of time, you see our time as going in one direction.   However, the algorithm solution of quantum moments means the direction of travel is irrelevant because there is no true "movement" in either direction.   There is only a solution when x is larger or smaller.
What this means is that algorithm theory, if Einstein was close to right when he said the only reason for time was so everything didn't happen at once, solves for any time at any x.  This means that Einstein was not exactly right.  Time doesn't exist for that purpose, because time is only discrete solutions for x, and the order in which those appear does not affect the algorithm at all.  You can solve for x = 1 or 5 zillion and get a relevant solution regardless of any other solution.
In this way, AuT solves the problem of how you can have an infinite number of spirals and yet solve for any specific moment in any of the spirals which is the same as solving for them all at once since they exist outside of time.
The model includes several features that give us experience, but what we see as the direction of travel of time is not one of them.
The solution including capacitance does appear to be a part.  Likewise, the energy of the big bang appears to be an instantaneous solution point where, for example, the direction of a curve changes or where a solution goes from positive to negative.  These will be discussed in connection with the math on capacitance.  We do not have a true capacitor, because while the energy is built during the frictional type of overlay of oppositely traveling arms of the primary spiral for our universe (understanding the next larger universe will have a different primary spiral but will incorporate ours in its solution) meeting at the 55% overlap, we know that during this process spiral arms are loaded onto each other as part of the solution, that certain sub-universes are aligned to create higher ct states (at least 5, probably 6 at the last 'big overlap bang'), and that at the end of this process all of the capacitance that had not achieved stability was released in the form recognized as cosmic background radiation.   The solution to this release of energy may be gradual or it may be instantaneous (unlikely, but consistent with big bang concepts and with critical point differences), but if the latter, it provides a different solution.  Either way, despite the huge amounts of energy inherent in such a release, clock time states would remain intact to the extent they had reached stability, but the energy might provide a mechanism within its solution to split otherwise stable clock time states temporarily, meaning that the forces of fusion that we see were necessary to bring them back together.  That is, the solution of the algorithm at high energies might have different outcomes than the solution at later points along the spiral.
The big bang that we "think we see" (remember that a constant discharge could provide a similar background echo) does not mean that we exist outside of an overlap period, although expansion suggests this to be the case.  During the overlap period, the build-up of capacitance would likely be reflected by large energies built around the overlap of spirals that is occurring, as well as matter build-ups, but there is so much more energy (and space) in this theory that the concentrations of matter are a very small fraction of the total amount of information in ct1, 2 and 3.
In such an event we might either be looking at a past big bang before the current discharge big bang occurs, or the big bang may not be a solution at all.  What we see as the big bang could be nothing more than background radiation reflecting this overlap of intersecting spirals.   This would be a good solution for two reasons.  First, it would allow that the background radiation we observe could still have a solid explanation that gives it greater consistency during the period of overlap than even a big bang, while also allowing that at the end of the period of overlap we would have a more expected and gradual discharge.
The capacitance equations usually show a larger initial discharge that would be "big bang like".  Expansion indicates a post-discharge state, but it remains possible that the background radiation reflects the energy of the capacitance/frictional overlap state and that we are currently experiencing a gradual, not explosive, release of the capacitance, which takes two forms.  One is expansion through the degradation of partially compressed time; the other is a steady return to a neutral state before the period of inactivity reflected in the spiral model, before the universe begins to contract under the force of the inward moving spiral.  The solution for the contraction period, before overlap begins the build-up of energy and further compression of space and other ct states to higher stable and unstable ct states, would have unusual consequences; for example, it could be reflected in features of space-time that we would consider "backwards" and would include a universe which was contracting due to forces that might not be easily recognizable to us, perhaps an opposite gravity.
In a prior post to, well, you know who, I described it as a universe where tiny teacups broken in anguish might reassemble themselves.  And perhaps hearts too would mend.  Not bloody likely.

AuT-solutions to the dark matter-space equation

As I pointed out in my prior post, all the pre-AuT 'brainiacs' have cushy jobs with NASA or some university while I, perhaps the most gifted quantum physicist of our time (well, most gifted is probably a slight exaggeration; let's say the most clever amateur physics thief, since I'm stealing my stuff from Parmenides, and some Greek turned me on to him), am essentially starving in the streets looking for loose bandwidth so I can lift up another post, because for some damn reason no one is offering me a job or at least the opportunity to present my theory for a fee.  What's wrong with the world? (Answer: defective algorithm.)
Anyway, you will miss me when I'm gone and wonder why you didn't reach out.
Let's take a quick look at why 25% dark matter doesn't compute to 25% space to go along with the rest of the space that NASA actually saw (amazing they saw any of it, right?).
First evidence is the amount of overlap.  It's 55% for whatever earthly creation for god's sake.  With that kind of tension, you should get a lot of compression.
But we need an equation, where can we find an equation....?  Oh yes, that's right there already is one gosh darn it.
So the 25% dark matter breaketh down this way:
x%x3^2 (for photonic to space)
y%x6^4*3^2 (for energy breaking down to space)
z%x10^8*6^4*3^2 (for matter breaking down to space)
and a whopping
a%x(8,5,3) 16^16*(etc) (for black hole material breaking down to space).  That means that if most of this space were trapped in black holes, you'd have that huge number I referred to in the earlier post for how much more space there is than we give the universe credit for, because we figured out a long, long time ago that space is pretty compressed at these higher states, exponential on exponential.
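A sketch of the cumulative factors implied by that list (plain Python; the percentages x, y, z and a are unknowns and are left out, and the bases 3, 6, 10, 16 are the F-series triplet sums used throughout these posts):

# Cumulative expansion factor if each ct state broke back down to space.
factors = {"ct2 (photonic)":   3**2,
           "ct3 (energy)":     6**4 * 3**2,
           "ct4 (matter)":     10**8 * 6**4 * 3**2,
           "ct5 (black hole)": 16**16 * 10**8 * 6**4 * 3**2}

for state, f in factors.items():
    print(f"{state}: one unit would decompress to ~{f:.3e} units of space")
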
It's important to point out that the existence of fusion and fission in our universe is entirely consistent with the model of multiple smaller universes of data generated by the algorithm, which are still, no matter how small, mostly space (except maybe the first one).  This means that there is a constant recycling of information from higher states to lower states, but this part of the equation is perhaps the most poorly defined and is based more on the assumption that the universe we live in cannot be the largest, because of the way that intersecting spirals continue to grow forever, and that we must be a function of smaller universes (perhaps the 10^100 such universes suggested by the exponential compression function above) interacting with ours.  Whether the infinite number of larger universes have any effect remains to be seen.
The drawing below shows the intersection of three spiral universes.  The number was picked at random.
The circle marked "c" represents a point where the solution is common for a common x.   In essence, the circle represents the algorithm being solved for as many spirals as are coming into that circle, based on a certain value of x.
The drawing on the left shows a larger "universe" A intersecting with a smaller universe B.
The six pictures on the right show how the common solution at the point represented by C changes states as the solution shifts around the still smaller universe D.  In this way, while A and B are still space, D may shift states several times; indeed, while A is still space, B will shift several times.

The primary point we are looking at here is that compression, stable compression, is made up of a very, very small number of outliers.  Most space has stayed as space, at least in terms of stability.  The compression during capacitance related to the intersection of spirals is perhaps 99% unstable (perhaps very close to 100%), so the time for the universe to compress according to the algorithm would be very great indeed for our universe, although for the first universe it would be shorter.

Tuesday, April 19, 2016

AuT-Compression vs capacitance and another name for dark matter

There will, shortly, be an unsatisfying, largely mathematical post dealing with dune formation, Bernoulli flow issues and the like.  There will be a related post on the derivation of the capacitance equation, equally unsatisfying.  While these models hover around the correct answer, the exact model has to go with the spiral-formation equation (the sin(pi*x/2) form) given earlier and with a compression function, which we will deal with in this post.
Ignoring dark matter (97% of the universe will be treated as dark matter-space), of the 3% remaining, 1/1000 may be stellar-mass black holes (about 0.003% of the total).
We could use a number between 3 and 5% but I have a reason for the smaller number.
This indicates that at the point of compression where we find ourselves, the compression of space into black holes has proceeded through 5-6 states and 4-5 intersections (assuming the first state arose without an intersection).
If a stellar black hole is 10 (10 to 20) times our sun's mass and an "average" supermassive black hole is 4.1 million times our sun's mass, then the difference is only a factor of roughly 4×10^5, which may or may not work out well.  Fortunately there are black holes that are potentially billions of times our sun's mass, which adds a few zeros (a factor of around 10^8).  Even these, presumably, may be only accumulations of ct5 states, which would mean there are no true ct6 states in the universe at this time.
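A quick Python check of those ratios (the masses are the ones quoted above; the "billions" figure is an assumption of about 10^9 solar masses):

# Ratio check on the black hole masses quoted in this post.
stellar = 10.0        # solar masses (10 to 20 per the text)
supermassive = 4.1e6  # solar masses, an "average" supermassive black hole
largest = 1.0e9       # solar masses, taking "billions" as roughly 10^9

print("supermassive/stellar: %.1e" % (supermassive / stellar))  # ~4.1e+05
print("largest/stellar: %.1e" % (largest / stellar))            # ~1.0e+08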
Given their relatively small part of the 3%, we are assuming for this discussion that the supermassive black holes are NOT ct6; they can easily be just large accumulations of stellar black holes.  Hence we have only 5 states with 4 intersections.
Now, pre-AuT science doesn't understand that space, energy and matter are all information (see the articles from NASA below), hence they come up with a number of wrong assumptions, and they confuse the make-up of space with the results of information theory in a capacitance/compression framework.  But I'm not here to dis NASA; if they'd figured it out, there would be nothing for me to do.
Likewise, they don't understand the effect of an algorithm-driven universe, so they're stuck with the resulting forces rather than the cause of the forces, which leads to even more confusion.  It's like they're looking at shadows on a cave wall and trying to figure out what's happening; that's the allegory of the cave, I suppose.
None of this is overly relevant to the discussion.
But it's important to understand that the entire concept of dark energy is just the failure to recognize that after the period of capacitance/intersection/compression there is a period of decompression, which is what is driving the expansion of the universe.  This decompression is a fairly simple concept: space that was compressed, but not to the point of stable compression, is 'leaking' out of its compressed state and forcing the universe apart.  This is NOT a physical event; it is a mathematical event, the algorithm running according to some version of the capacitance equations we will cover shortly.
So much for science.  Why is this relevant to our discussion, other than to point out how poorly pre-AuT science saw things?  It's because the 71% dark energy envisioned is really the amount of space that failed to compress to a higher state during the last intersection of the linear F-series spirals.  Pretty simple stuff!
So we have, potentially, this analysis, though it is uncertain to some extent because the 71% doesn't line up perfectly with the 24% dark matter.  The reason is that dark energy vs. dark matter is a poor framework for weighing these two, and until someone does the analysis knowing what they are looking at, we're potentially comparing apples to oranges.
However, for purposes of this post we'll make yet another (very risky) assumption. We're going to say space is 95% of the universe after 5 compression states.
So we have this type of transition:
Original universe: 100% ct1 (space)
Post collision 1: x'ct1, yct2
Post collision 2: x''ct1, y'ct2, zct3
Post collision 3: x'''ct1, y''ct2, z'ct3, act4
Post collision 4: x''''ct1, y'''ct2, z''ct3, a'ct4, bct5
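As a toy piece of Python bookkeeping (my own construction; the per-collision transfer rate is a made-up placeholder, not the capacitance equation), each collision adds one state and moves a sliver of every existing state up a level, which is all the primes are tracking:

# Toy model: each collision moves a small fraction of every ct state
# up to the next state and adds one new, higher state to the list.
def collide(fractions, rate=0.01):
    new = fractions + [0.0]
    for i in range(len(fractions)):
        moved = fractions[i] * rate
        new[i] -= moved
        new[i + 1] += moved
    return new

universe = [1.0]  # original universe: 100% ct1 (space)
for collision in range(1, 5):
    universe = collide(universe)
    shares = ", ".join("ct%d=%.6f" % (i + 1, f) for i, f in enumerate(universe))
    print("post collision %d: %s" % (collision, shares))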
At post collision 4 (which we may still be in) we are going to say that 95% of space is still space.  This means the other states total 5%.  We are taught that bct5 is about 1/1000th of the universe's matter, but it appears more likely that ct5 will one day be that high and is actually much lower at this quantum moment, or this one, or this one.  While ct4 is a very big number indeed (1000 times or more the matter in ct5), we have to assume that energy is much higher still.  It would be nice to be able to say each state was 1/1000th of the next, but that type of progression seems inconsistent with compression.  Indeed, by the design of the underlying algorithm, it would appear on the surface that the universe might not have the "energy" to fully compress.  The algorithm model for a single spiral is relatively simple compared to the actual compression of every quantum bit of information in the universe, where each bit at each quantum moment must be solved for x as x is (not goes from) 0 to infinity.
Looking at the post collision numbers, what we're left with for the moment is:
Post collision 4: x''''ct1 is >95% of everything else (as we will see, this is a "much greater"); y'''ct2 is >1000 times everything else but ct1; z''ct3 has the same general concentration as ct2 (this is something of a grey area due to the interchange between these states; again, energy is probably "much greater" in terms of information content than ct4 and ct5); and a'ct4 is >1000 times ct5.
Each (') indicates the amount of the ct state in question is growing at the expense of space (ct1 getting constantly smaller, but remaining the lion's share of the universe for now) or of other states.  Ultimately we'll have two ctx states with essentially no space between them, but that is a prior post.
What we're seeing here is that during capacitance, approximately 71% of space is compressed, but at the end of the day, even after 4 collisions, only 5% of it has stayed compressed.  I believe the number is closer to 3%, but because we're (idiotically) calling decompressing space by the totally wrong name, dark matter, the amount of it could be far greater.  Indeed, the 3% number may be impossibly far off, and until I figure it out we don't know...or do we?
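Here is a toy budget in Python tying those numbers together (my own construction, assuming roughly 71% of the remaining space compresses at each intersection and only about 1% of that compression is stable, per the 99%-unstable estimate above):

# Toy budget: 71% of remaining space compresses per intersection,
# ~99% of that leaks back out, ~1% stays stably compressed.
COMPRESSES = 0.71
STAYS = 0.01

space, stable = 1.0, 0.0
for collision in range(1, 5):
    kept = space * COMPRESSES * STAYS
    space -= kept    # the unstable 99% decompresses back to space
    stable += kept
    print("after collision %d: stable %.3f%%, space %.3f%%"
          % (collision, stable * 100, space * 100))

On those assumptions four collisions leave roughly 2.8% stably compressed, which at least lands near the 3% guess.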
It just so happens that I have figured this out previously, in principle anyway.  If we look at the compression charts I put together based on the number of intersections and the amount of material changing at each intersection, and we take the total amount of information at any quantum moment, then we should be able to figure out how many spirals it takes to turn the universe into a compressed mass.  At each compression state a number of these are "aligned" in terms of ct state, if the universe is made up of smaller universes.  Some of these cannot fully compress, meaning there will always be lower states.  This "indicates" that at full compression "our universe" (as opposed to the next one out, with an additional spiral past ours) would have at least one of each lower state (one space, one photon, one wave energy, etc.), because those other universes cannot collapse all of the way.  If the total compression state of our universe were, for example, ct170, presumably you'd have at least one each of the 169 other states at full compression.  The actual indication is that the number would be higher, because many of these sub-universes would have the same level of compression, based on the idea of one universe growing from each smaller universe at a constant rate.  This is the same exponential growth that you see in the growth of spirals reflected in this drawing, but with each line of common length being an additional universe with the same number of ct states as the others of common length.

This will probably be a solution saved for volume 2, because the 300 or so intersections I have looked at previously may be woefully inadequate.  Indeed, the number could be as high as 10^100 spirals making up the primary spiral of our universe!  Probably not that high, by the way, but there is a model that suggests it, which I will get to in due time.
For more on the wrong answers, see this list of articles.
http://map.gsfc.nasa.gov/universe/uni_matter.html
http://www.universetoday.com/112500/how-much-of-the-universe-is-black-holes/
http://science.nasa.gov/astrophysics/focus-areas/black-holes/
http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/
https://profmattstrassler.com/articles-and-posts/particle-physics-basics/mass-energy-matter-etc/matter-and-energy-a-false-dichotomy/

Sunday, April 17, 2016

AuT-a view of g-space

G-space is capable of evolution, but not over time.
G-space can only partially be defined by what it is not.  It has no rules in the sense that we experience them; it does not obey any of the laws of thermodynamics.  Physics, to g-space, is only a word, and words, having length, mean it is less even than that.  There is no past and no future; everything is now; the past and the future are changeable and interchangeable.
It is a place where a pre-assembled universe like ours can be assembled without reference to time.  It is a place where the algorithm, which defines at one time all the sub-universes and subsequent universes on either side of our universal solution that provides such a seemingly solid existence, can exist all at once, everything happening at once, because time is only the feature of the solution corresponding to some x that, once inserted, solves the equation of the algorithm for the universe at that value.  Yes, x can be said to represent time or position, but it defines a single quantum.  It is not a space-time universe between one value of x and another; it is a series of discrete answers.  Otherwise it would not be information; it would be a minimum point, but one that would, according to the rules of spacetime, have to be divisible.  The logic fails.
G-space is the land of dreams and science fiction authors, but only where the books have no beginning or ending and where the middle is created before the author sits down.
It has no physics or time or dimension as we experience them, nor even a singularity as we experience it.
Even data points have no set number of answers.  It is not yes or no, it is not yes, no or maybe.  Instead it has any answer depending on the algorithm being supported.  In g-space infinity exists by virtue of the lack of the finite.
The transition is likely from a "maybe" universe, to a yes-no universe which doesn't grow, to an F-series universe which fluctuates, with turns, to an F-series universe which spirals.  This raises the question of whether there are other F-series universes (or other types), such as an F-series with turns at different rates or offsets from one another.  This is the accepted method for building the universe which seems solid.  It is solid because of the number of information states affecting any quantum point at any quantum moment, and this number is the only number that can vary.  So if 100 ct1 states share a common point with one ct4 point, and another ct4 point shares only 1 ct1 state, then the first will appear to change at a rate 100 times that of the other.  In truth the number of corresponding points must be much higher for both, because stillness doesn't occur with ct4 states, which necessarily, because of their rate of change, coincide with many ct1 states.
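A trivial Python illustration of that rate claim (the two points and the base rate are hypothetical):

# Hypothetical: the apparent rate of change of a ct4 point scales with
# the number of ct1 (space) states sharing its solution at a moment.
shared_ct1 = {"ct4 point A": 100, "ct4 point B": 1}
base_rate = 1  # assumed changes per quantum moment per shared ct1 state

for point, count in shared_ct1.items():
    print("%s appears to change at %dx" % (point, count * base_rate))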
As has been said before, for the universe to exist at one time, nay, to comply with Einstein's observation that without time everything happens at once, g-space must be something that can "hold an algorithm."  It is really nothing more than a galactic chalkboard.
If space were closer together at the big bang, it can only mean that more ct1 states were in common with each other, more spiraled together, reducing the overall amount of space.
The ongoing collapse of stars means we are within one of perhaps several spiral collisions defined by the algorithm.  Even if there is one main algorithm to our universe, there is more than one colliding spiral in the algorithm, as shown by the higher-state collisions we are capable of observing, no matter that we are controlled by the physics the algorithm generates.
At quantum instants physics changes: entropy reverses, gravity reverses.  Because of spirals off of spirals, exponentially more instants can fit on an instant, the number defining a rate of change for the group, while all individually change with each other, since all are a single solution based on the single variable, x.
The algorithm may define a nlc horror story.  Nothing we can do.  Maybe we are on a long drop; maybe the short distance after a collision; maybe within a collision.  But when the spiral equation solution changes, everything must change with it, including the underlying physics.
Spirals coming off spirals coming off still other spirals at quantum points, using the same math, create stacks of exponential change.
Speed is not dissociating from other points; it is associating with more of a lower state at one quantum instant.  One quantum point at a time creates only the illusion of being linearly disposed, because that is the solution; but g-space doesn't require that a single x be used.  All x's must exist together, and the solutions to the algorithm in g-space all exist together: a finished, infinite set of universes where the solutions, all the x's, happen at once, just as Einstein would have predicted.  G-space is the space where all the possible values of x exist for an algorithm solved for x.
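A minimal Python sketch of the "all x at once" idea (the spiral term is a stand-in of the sin(pi*x/2) flavor, not the exact algorithm): our experience steps through x one value at a time, which we read as time, while g-space simply holds every solution with no ordering at all:

import math

def spiral_solution(x):
    # Stand-in for the algorithm solved at a single value of x.
    return math.sin(math.pi * x / 2) * x

# "Our" universe: one x after another, experienced as time.
for x in range(8):
    print("x =", x, "->", spiral_solution(x))

# g-space: the same solutions simply existing together, unordered.
g_space = {x: spiral_solution(x) for x in range(8)}
print(g_space)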

Saturday, April 16, 2016

The soul is in the algorithm

Occasionally I revisit the history of AuT.  This is just my way of putting off the real work.  I have about 5 posts written on time dilation and capacitance.  I have to edit them to finish volume 1.  Well, I don't need to, but I do need to understand this as well as possible; to understand it well enough to publish it.
One of those is the math of capacitors.  It's part of the solution to the algorithm.  Why does it matter?  It matters partially because this is about the answers to the universe.  But it is also about me.  There is a difference between me and you.  You are capable of being normal, I am not.
I also occasionally think I should print some new business cards that say "mad scientist".  But in truth, I'm not.  Nor am I normal.  I'm something else.  We are all, in theory, creations of the universe.  But that is not the issue related to why I have to have a better solution.
The reason is that I have to distinguish my universe, the one driven by an algorithm, from the superficially similar Hologram theory.
If Hologram theory is anything, it is close, but it relies on dimension and projection.  It therefore requires the ultimate crutch, time, to exist.  It is as far removed from AuT as you are from Zeno.  And, indeed, in terms of distance, how many miles, rotational and spiral, have we moved since then?  Perhaps I will do the math; if you read the book, you'll see the calculation of how fast everything is moving, how fast it has to move for the algorithm theory of one-directional movement for all dimensions to be accurate, and you know it is.  This incredible determination, available only from AuT, is one of the reasons I know that I'm right, and one of the reasons I know that everyone else is wrong.  It's interesting, but this is what Parmenides knew and what was thereafter forgotten.
If I'm no different from Hologram theory, then a little piece of my soul would wither and die.
I don't think that has to happen.  Perhaps, there is no soul, but something animates us.   An algorithm model is better at animating than anything else that I have seen.  It is better than anything else I have created.
Just chemistry doesn't really do it.  Something spins the universe; something spins us.  The rules of physics apply, and something spins the rules of physics.  The spark that moves us, and therefore the spark that is physics, is a solution to an algorithm, whether it is traditional pre-AuT math or Algorithm Universe Theory math.  I believe that there is something to the concept of the soul, just as a careful analysis says there is something to AuT.  So it stands to reason that, in a dimensionless, timeless universe, all our souls must be incorporated in the algorithm.
Anyway, AuT started 2,500 years ago when Zeno's teacher, Parmenides, determined there was something wrong with the idea of infinitely divisible length.  He didn't have information theory, but he figured it out.
It was then largely forgotten, and the fiction of what would later be called space-time was maintained right up until AuT.
The gradual study of forces without reference to the underlying reason for them was a large part of the development process.  
The Einstein-Bohr era physicists came along and began the process of determining the underlying forces.  They remained trapped in the dimensional world; they provided an essentially accurate model of a universe but continued to look for a unifying force, something that would bring consistency to a universe that was dimensional and time-based.  They had only the very beginnings of information theory and did not understand its importance.
This concept of space-time was eventually so surrounded by information theory that the idea of the universe as a projection, hologram theory, was put forward; in fact, the combination of Einstein's separation of time from events and hologram theory was the basis for the predecessor theory to AuT.
The problem with projection theory and space-time lay in the idea that there had to be some feature of dimension for it to operate, and that time had to have independent relevance.  Since time and space were tied together, it failed to have the properties that were required by Parmenides; or perhaps more specifically, it had properties that Parmenides figured could not be part of a solution.
It is, perhaps, the next logical step that eliminating space and time entirely in favor of a solution-based system would finally deal with all of the issues but one: the question of how a durable equation could exist.  But AuT even answers that, in a discomforting way, because durability is not relevant in an algorithm-driven universe; durability requires time.
It makes AuT theory discomforting.  And life, especially right now and especially to me, is discomforting.  I have only a small amount of life capital, and I find myself spending it at an alarming rate.