
Tuesday, September 15, 2015

NLC-time orbits after the application of F-series, part 2: Inward-moving spirals in an expanding universe

Discrepancies, what discrepancies?
There are alleged discrepancies in any theory of the universe, necessarily so if it doesn't largely follow mine, which has to be right in certain respects even if not in all of them.  It is too accurate in prediction and too novel in scope to be otherwise.  Open questions remain: how the spiral equations do or do not fit the static universe defined here, and the nature of the spiral itself, whether it is the quantum lines, the true spiral, or perhaps both, the true spirals being limited by the mathematics of the intersecting lines.  This theory has more than its share of problems, and we will address them one at a time until I get bored.
NLC provides for a static universe without the need for an Einsteinian cosmological constant, but such a constant may yet be part of the overall equation.  The cosmological constant (CC) is the power of dark energy that, in traditional physics, causes the expansion of the universe.  Not surprisingly (to those who read this), the accelerating expansion observed in the universe is reflected in logarithmic spirals: as you move outward from the center the spirals get larger, and if the information content of each spiral remains constant then you are going to see some expansion.  That is the nature of logarithmic spirals, as opposed to perfect Archimedean spirals.
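As a minimal sketch of that last point (with arbitrary illustrative constants a and b, not values from this theory), a few lines of Python show that the gap between successive turns is fixed for an Archimedean spiral but grows for a logarithmic one:

import math

# Minimal sketch: compare the gap between successive turns of an
# Archimedean spiral (r = a + b*theta) and a logarithmic spiral
# (r = a * e^(b*theta)).  The constants a and b are arbitrary
# illustrative choices, not values taken from this post.
a, b = 1.0, 0.2

for n in range(1, 6):
    theta = 2 * math.pi * n                         # angle after n full turns
    arch_gap = (a + b * theta) - (a + b * (theta - 2 * math.pi))
    log_gap = a * math.exp(b * theta) - a * math.exp(b * (theta - 2 * math.pi))
    print(f"turn {n}: Archimedean gap = {arch_gap:.3f}, logarithmic gap = {log_gap:.3f}")

# The Archimedean gap is constant (2*pi*b); the logarithmic gap grows
# geometrically, which is the "expansion" described above.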
The other aspect of this is going to allow us to better estimate the number of spirals, something I promised to do earlier.  The cosmological constant (CC) needed to offset gravity and give a relatively stable universe at "our stage of compression" is around 10^-52 m^-2, or about 3x10^-122 in Planck units.  What the intersecting-spiral NLC analysis attempts to do is replace (to some extent) this "false dark energy" in "false spacetime" with an actual dimensional change based on a set or fixed algorithm.  The algorithm can be anything under NLC, but under intersecting-spiral NLC the algorithm is the one used to generate lines which intersect using F-series calculations.  It should be noted that this is primarily a difference in approach, as the concept remains fixed: a universe with rules and a fixed, unchanging universe are the same for purposes of finding explanations.  The philosophical difference can boil down to the presence or absence of randomness, the existence or absence of an "uncertainty principle" in quantum theory (quantum mechanics in pre-NLC thinking), which will also be addressed.
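A quick sanity check of that 3x10^-122 figure (assuming the standard Planck length of about 1.616x10^-35 m): the CC has dimensions of 1/length^2, so expressing it in Planck units just means multiplying by the Planck length squared.

# Minimal sketch of the unit conversion behind the "3x10^-122" figure.
LAMBDA = 1.1e-52           # cosmological constant, m^-2 (observed order of magnitude)
PLANCK_LENGTH = 1.616e-35  # Planck length, m

lambda_planck = LAMBDA * PLANCK_LENGTH ** 2
print(f"Lambda in Planck units: {lambda_planck:.2e}")  # ~2.9e-122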
The measurement of distance is potentially useful, but fortunately not necessary to this analysis.  Instead, what we can look at is the ratio between the observed energy density and the density required for a fixed universe that doesn't expand forever or collapse on itself.  This ratio works much better for our purposes; it is a function of Planck length and comes out to around 0.69, or 69%.  It would, of course, be more satisfying if it were 61%, but the numbers are close enough to make a point, especially given questions about how the calculations are done and the uncertainty surrounding Planck length.  What's the magic of 61%?  61% is the approximate ratio (zeroing in on 61.8034% but never quite reaching it) between one F-series number and the next higher number.
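To see that 61.8034% limit concretely, here is a minimal sketch that prints consecutive F-series ratios, which close in on (sqrt(5) - 1)/2 = 0.618034... from alternating sides without ever reaching it:

# Minimal sketch: consecutive Fibonacci (F-series) ratios approach
# 1/phi = 0.618034... but never land on it exactly.
def fib_ratios(n):
    a, b = 1, 1
    for _ in range(n):
        yield a / b
        a, b = b, a + b

for i, r in enumerate(fib_ratios(12), start=1):
    print(f"F_{i}/F_{i+1} = {r:.6f}")

# Limit: (sqrt(5) - 1) / 2 = 0.6180339..., i.e. the 61.8034% above.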
In other words, in a very rough sense, the ratio between the dark energy necessary to "re-expand" the universe against gravity and the space density collapsing the universe is around the ratio between successive logarithmic (F-series) numbers.  You get similar ratios analyzing the area of logarithmic "true" spirals: you can find, for example, roughly 72% of the area within a spiral versus the area outside, and much higher numbers comparing the area within succeeding spirals where overlap occurs against where it doesn't, but all of these ratios are functions of the basic ratio.  Does this allow us to more closely determine Planck length based on logarithmic spirals?  Maybe.  The key to the theory bearing fruit is the ability to provide a specific formulation.  The use of linear (versus curving) intersecting spirals yields a variety of ratios, not just one.  But if we have a universe which is collapsed, yet at our stage of compression is not collapsed, and the ratio of "expansion" force to "collapsing" force is 69%, and if we have to have an algorithm at our stage of compression that allows for a collapsing universe but gives the appearance of expansion if we look back (towards the outer spirals), then this number should be less than 69% and more than zero, and 61% works just fine.
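Which of those area ratios you get depends on which regions you compare and which growth rate you pick (the 72% figure above is one such choice).  As a sketch of the underlying machinery only, assuming a golden-ratio growth rate for illustration, here is the standard polar-area formula for a logarithmic spiral; any ratio built this way collapses to a function of the single growth parameter b, which is the sense in which all the ratios are functions of the basic ratio:

import math

# Minimal sketch: area swept by a logarithmic spiral r = a*e^(b*theta)
# between two angles, from the polar area formula (1/2) * integral(r^2 dtheta).
def swept_area(a, b, theta1, theta2):
    return (a ** 2 / (4 * b)) * (math.exp(2 * b * theta2) - math.exp(2 * b * theta1))

a = 1.0
b = math.log((1 + math.sqrt(5)) / 2) / (math.pi / 2)  # golden growth: r gains a factor phi per quarter turn

turn1 = swept_area(a, b, 0, 2 * math.pi)              # area swept in the first full turn
turn2 = swept_area(a, b, 2 * math.pi, 4 * math.pi)    # area swept in the second
print(f"turn1/turn2 = {turn1 / turn2:.6f}")           # = e^(-4*pi*b) = phi^-8; set entirely by b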
Of course, someone is approaching this from another direction (still stuck in spacetime) using something called scalar fields, which is basically the CC but changing over time (logarithmic spirals change over "time" too).  They don't give a good mechanism, but having a fixed algorithm in place that controls the display of information allows for this to be "explained," at least in terms of non-g-space; remembering that without a g-space explanation we're still scratching the surface like a bunch of chickens pecking for corn on a dirt path.  But the point is that "time orbits" defined by intersecting logarithmic spirals provide a way for mass ratios (electrons and protons) to remain constant while other changes occur, in this case resulting from the number of coordinates changing at once.
The next step in the analysis of spirals is to see whether it supports the approach, i.e., yields information theory (the "surprise" answer is that it does, if you're willing to mess with the spirals a little within the framework we have defined), but you will have to wait for that; there is work to do.
