Saturday, February 13, 2016

NLC-The end of scientific inquiry in a universe governed by logic

Is it possible to prove NLC?  If the answer were yes, then NLC would be the end of scientific inquiry.  This is not at all because NLC would provide any specific answer to the many questions that might be asked; there are too many inquiries rooted in the imperfections inherent in any system holding as much data as ours does.  Likewise, the scientific method does not allow for fixed answers.  That is the illusion of advanced science: people think they hear the answer to something, but the scientific method only has room for theories which must be continually tested.
The lesson of relativity is that reality is not what we thought it was.  NLC takes this to a new level, albeit one that hologram theory and even traditional physics suggested.
The problem with other theories is that they stopped one step short of committing to a one-directional universe.  They refused to abandon dimension, time and, well, spacetime.
NLC, being pure philosophy, had no problem abandoning spacetime.  It would be science fiction but for the fact that it arises from observed phenomena and is bent toward observation.  If you follow the evolution of NLC (EHT-NLT-NLC), you can easily see the evolution of the theory.  There are "quantum leaps" within the theory that take you from one place to the next.
While the idea of a fixed, and therefore largely futile, universe is not new, it is an important step to come up with a model that limits our abilities the way NLC does, in a framework which logically fits within observed phenomena and which provides a foundation for quantum phenomena and relativity.
NLC is not really "provable" in its current state.  There are some huge gaps in the theory, not the least of which is how to get from the simple formula that would produce a smooth universe to a more complicated formula explaining how a near-infinite number of points interact, not only to form spirals but to form spirals off of spirals with enough aberrations to yield a universe of the observed complexity.  I note that "near infinite" is inaccurate, since even at this early stage it is fairly easy to estimate the number of fixed quantum data points in the universe simply by dividing the total gravity in the universe by the quantum gravity of each point.  This is done in the book, and while a very large number is theorized, it falls far short of infinity.  The fact that it can easily be written in standard scientific notation is one indicator, but there is another, more practical indicator.

Despite the fact that the universe looks very big and very old, if we accept that we are working with a discrete number of data points, and that age merely reflects quantum changes in these data points along fixed lines governed by simple algorithms, we can see that even the history of our single solar system holds as much data as a much larger system.  This concept takes us far down in the theory to a singularity where everything happens at once; that is, a single point, a single quantum, which when expressed as an algorithm expresses the entire universe: past, present and future.  How is this possible?  The same way that the amount of data in the universe can be reflected in a much smaller system over a long period of time.
Let me give a very simple example.  If we say that our solar system at a fixed moment has a set amount of quantum data, then over just 100 years (assuming the minimum time length to be 10^-49 seconds) you would have 100×10^49 solar-systems'-worth of quantum-moment data just in our solar system.  The same idea can be taken further by saying the life of the sun over 100 years is equal to 100×10^49 suns (of similar size), and that a single quantum data point over 100 years is equal to 100×10^49 quantum moments.
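This counting can be made explicit in a few lines.  A minimal sketch, assuming the post's minimum time length of 10^-49 seconds; note that the round figure of 100×10^49 multiplies years directly by 10^49, while converting years to seconds first (a Julian year of 31,557,600 seconds is my assumption here) gives a larger count:

```python
# Count quantum moments in a span of years, assuming the post's
# minimum time length of 10^-49 seconds per quantum moment.
SECONDS_PER_YEAR = 31_557_600   # Julian year: 365.25 days
QUANTUM_LENGTH = 1e-49          # assumed minimum length, in seconds

def quantum_moments(years: float) -> float:
    """Number of quantum moments elapsed over the given span of years."""
    return years * SECONDS_PER_YEAR / QUANTUM_LENGTH

print(f"{quantum_moments(100):.3e}")  # 3.156e+58
```

Either way, the point survives: the history of even a short span contains an enormous, but finite and easily written, number of quantum moments.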
Since the singularity is free of time, it contains all of the time in the universe.  Hence you can replace the 100 (in the equations above) with a number equal to all the quantum moments in the universe, and the 49 with zero.  This means that the singularity is given features by the algorithms acting on it that fix its effective display within a set amount of data or, alternatively, that the stuff of g-space allows for infinite generation of data in o-space.
NLC assumes a fixed amount of data and, using dual spirals, a fixed life span/length of spirals defined fairly simply as the number of quantum moments (lengths) along the spiral needed for half the data to intersect with the other half.  However, the algorithms (see the drawings in the book) suggest that any "terminus", any maximum outward expansion, is limited only by the amount of data, and that if the amount of data can change then the universe may grow forever, possibly having a separate complete universe for each additional quantum of information added by the singularity.  Despite all these slightly larger, slightly different universes, however, our particular universe with our particular amount of data would be fixed, unchangeable.
Getting back to the original thrust of this post, the negative sought under the scientific method would be our ability to change any (ANY) quantum data inconsistently with the algorithms.  I read an article recently about possible acceleration past light speed and (under NLC) the implied altering of algorithms.  This would radically change NLC, and hence I'm assuming that it's an error or an observational fluke of some sort.  Remember that in a time-independent universe like NLC's, light speed is nothing more than the amount of information necessary to move backwards a quantum moment.  It's the same type of speed limit, but it is a directional speed limit and not a time-based speed limit.  It very much shares the same conceptual space as the old stories where someone accelerates past light speed and moves into the past or future.  It cannot happen in a fixed NLC universe, but it may be possible to jump universes.
So let's go back to our model and look at these universes one at a time.
The first one would be 1, -1; the second 1, -1, 2, -2; the third 1, -1, 2, -2, 3, -3; etc.  In these models the 1 reflects movement along the primary spiral and the -1 reflects movement along the secondary spiral.  The 2 reflects the second F-series term (1+1=2) and the -2 the opposite spiral along the second F-series term (-1-1=-2).  The 3 is 1+2=3, the -3 is -1-2=-3, etc.  The total amount of data in each of these mini-universes changes fairly rapidly from a beginning to an end (see the book for more on intersecting spirals beginning and ending).  Once you get to a universe like ours, where the total amount of quantum data is along the lines of 10^100 quantum data points, you have a pretty significant universe even at one point.  These points have begun to stack so that the algorithms expressing the data look (over time) like spirals off of spirals off of spirals.
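The enumeration above can be sketched as code.  A hedged illustration, following the post's additive rule (1+1=2, 1+2=3, ...); the helper names `f_series` and `universe_terms` are mine, not the post's:

```python
def f_series(n: int) -> list[int]:
    """First n terms of the F-series used in the post: 1, 2, 3, 5, 8, ..."""
    terms = []
    a, b = 1, 1
    for _ in range(n):
        terms.append(b)       # each term is the sum of the prior two
        a, b = b, a + b
    return terms

def universe_terms(n: int) -> list[int]:
    """Signed spiral terms for the n-th universe: each F-series term
    paired with its negative (primary and secondary spiral)."""
    out = []
    for t in f_series(n):
        out.extend([t, -t])
    return out

print(universe_terms(3))  # [1, -1, 2, -2, 3, -3]
```

The third universe reproduces the post's third model (1, -1, 2, -2, 3, -3), and each successive universe appends the next signed F-series pair.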
This presents two alternatives.  The less likely is that each of these separate spiral universes is one data point greater than the other, which would make them essentially identical.  The suggested result is much different: the next universe is almost twice as large as ours (the math is actually set out in the book, but it's essentially just a reflection of what the next number in each F-series is: 0, 1, 1, 2, 3, 5, 8, etc.).
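The "almost twice as large" claim can be checked against the growth rate of the F-series itself.  A quick sketch (my illustration, not the book's math): the ratio of successive terms settles near the golden ratio, about 1.618, so on this count each universe is roughly 1.6 times, a bit under twice, the size of the prior one:

```python
def successive_ratios(count: int) -> list[float]:
    """Ratios of consecutive F-series terms (2/1, 3/2, 5/3, 8/5, ...),
    which converge to the golden ratio (1 + sqrt(5)) / 2."""
    a, b = 1, 1
    out = []
    for _ in range(count):
        a, b = b, a + b
        out.append(b / a)
    return out

print(successive_ratios(10)[-1])  # ~1.618, the golden ratio
```

The early ratios (2.0, 1.5, 1.667, 1.6, ...) oscillate around the limit, which is why small neighboring universes differ in size more erratically than large ones.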
If one of these universes could connect to the next at even one spiral, or if every common point overlapped in some fashion (in the case of the 5-8 spiral, 5 of the 8 points would overlap; in the 3-5 spiral, 3 of the points would overlap; etc.), this would provide a mechanism for each successive universe being a part of the prior one, and a good foundation for the richness in dimension that we observe in our universe at the ct5-ct6 state that we happen to be in.  It also implies that each universe is both shrinking back to the singularity and almost doubling in size at the same time (or, looking in the other direction, expanding out to the maximum amount of dispersion and shrinking by half at the same time).
The beauty of the model of spontaneous generation is that while mind-numbingly complex, it is also pretty simple: a self-generating system.
To prove the model, you'd have to find where the jumps occur between our universe, the prior F-series universe, and the next F-series universe.
Even so, however, the model is fixed.  The method of generation allows for specific properties of physics (albeit properties that would change at the various turns of the linear spirals, if those hold), driven by underlying algorithms.
If the model were "proved," further scientific inquiry would be irrelevant.  It's not that it would provide no utility, entertainment, etc.; it would merely not be relevant in the grand scheme of things.  A race sufficiently advanced to prove to its complete satisfaction the existence of NLC would have no ability to change its universe and would lose the incentive to advance further.  Moreover, the members of such a race would lean towards lives that were as full as possible without reference to the future, because all that would matter would be to make the fixed points as pleasant as possible: pleasantness into eternity, as opposed to the alternatives.
Fortunately for us, our very scientific method mirrors (as do all other things, necessarily, in a fixed intersecting linear spiral universe) the uncertainty of infinite series (which we now know can be solved for any one universe in this model) in that theories are only good until disproved.
https://www.youtube.com/watch?v=fXjzOpz4Cyw