CHAPTER 3-05 A Model of Reality and Time - Incrementing Time in Simulations

Author: Tom Campbell. Link to original: http://www.scribd.com/doc/60996035/Thomas-Campbell-My-Big-Toe-Book-1-Awakening-pdf (English).

To understand reality one must understand time. During the next six chapters, we will use the notations “delta-t” and “DELTA-t” extensively. Because this notation is borrowed from the field of mathematics, it may seem strange to some, but do not be put off by that. This chapter provides an explanation of incrementing time in simulations to the mathematically challenged and introduces a unique perspective on the nature of time to the Big Picture reality challenged. The words “delta” and “DELTA” represent the lower and upper case Greek letter of the same name. They are spelled out to avoid using abstract symbols that might inadvertently trigger mathephobia or other related mental techno-blocks. This is easy – you’ll see.

Traditional mathematical notation places the Greek letter delta next to a variable (some quantity that changes) to represent an increment (small change) in that variable. I use it here because many people are familiar with this notation. If you are not, don’t worry, the concept is simple and explained in detail below. “DELTA-t” and “delta-t” are simply names for two different increments (small chunks) of time.

In an iterative dynamic simulation, such as the calculation of the position of a fired artillery round (or a thrown ball) as a function of time, one starts with the equations of motion (equations giving position as a function of time) and the initial conditions at time t=0. The first time through the computational process loop, one lets t = (delta-t) and then calculates position – next time through t = 2•(delta-t), next time through t = 3•(delta-t), next time through t = 4•(delta-t), and so on. You calculate a new position of the object (artillery round or ball) for each time t, which is one delta-t larger than the previous value of t. Consequently, time, in your calculation of position, progresses forward by increments (small discontinuous jumps) of delta-t.
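
To make the loop concrete, here is a minimal sketch in Python (the muzzle velocity, launch angle, and increment size below are arbitrary placeholder values, not figures from the text) of what such an iterative trajectory calculation might look like:

    import math

    # Minimal sketch of an iterative trajectory simulation (hypothetical values).
    delta_t = 0.001                            # time increment, seconds
    g = 9.8                                    # gravitational acceleration, m/s^2
    speed, angle = 300.0, math.radians(45)     # assumed initial conditions at t = 0
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)

    t = 0.0
    while y >= 0.0:            # iterate until the round returns to the ground
        t += delta_t           # t = delta-t, 2*(delta-t), 3*(delta-t), ...
        x += vx * delta_t      # advance the position by one small jump
        vy -= g * delta_t      # gravity changes the vertical velocity
        y += vy * delta_t
    print(f"impact after about {t:.2f} s of simulation time, {x:.0f} m downrange")

Each new position is calculated from the previous one, and simulation time moves forward only in these discrete delta-t jumps.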

Your simulation can approximate continuous time, and thus continuous motion, by making the size of delta-t very small. The cumulative sum of the delta-t increments is called "simulation time" because it drives the dynamic simulation (as opposed to "real-time," which is what is measured by the clock on the computer room wall).

If the equations were complex enough, delta-t small enough, and the computer slow enough, it could take several hours of real-time to progress the artillery round’s trajectory through only a few seconds of simulation time (perhaps only a few hundred feet of a much longer trajectory) within the computer simulation. On the other hand, relatively simple equations in a very fast computer using a larger delta-t might simulate a one-minute-long trajectory (such as the trajectory of a large artillery round) in only a few tenths of a second of real-time.

The simulation can be paused for any amount of real-time between consecutive time increments and subsequently started up again without disturbing the results of the simulation. These concepts involving dynamic simulation and time will be important later on. It is good to understand them now while the context is simple.
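
As a hypothetical illustration of the pause-and-resume point: because the entire state of the simulation lives in its variables, the loop can sit idle between increments for any amount of real-time without changing what it computes.

    import time

    # Sketch: a pause between increments does not disturb the results (illustrative only).
    state, t, delta_t = 0.0, 0.0, 0.1
    for step in range(5):
        t += delta_t
        state += t * delta_t   # some arbitrary update rule
        if step == 2:
            time.sleep(2)      # two seconds of real-time pass; simulation time does not
    print(t, state)            # identical whether or not the pause occurred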

Consider a large simulation that contains smaller simulations within it. For example, let’s imagine a simulated war containing several simultaneous interactive battles. Such a system simulation may contain a subset of code (subroutine) that simulates a specific battle, which has within it another iterative subset of code (lower level subroutine) that simulates a specific armor battalion’s activity within that battle, which has within it yet another iterative subset of code (even lower level subroutine) that simulates an artillery shell fired by that armor battalion. We now have four nested (one inside the other) levels of interdependent subroutines (1-overall war, 2-specific battles, 3-specific battalions, 4-individual artillery trajectories) iterating their way sequentially to produce the simulated war.

The collection of algorithms that describes the interdependencies between levels (specifies the assumptions, initial conditions, tactics, rules of engagement, and describes the performance of each subsystem and component) is said to model or simulate the war. A properly integrated collection of such models, along with the structure of the code that orders and propagates interactive logical events, passes data, collects results, and increments time, defines the overall war game or simulation. This is how computers are used to simulate a set of interactive activities or a dynamic process.

All the interdependent loops are iterated sequentially. Loops that have no interdependency may be incremented in parallel (simultaneously). Each level and each process within each level progresses its own dynamic activity by one delta-t at a time. As activities are completed (decisions made, trajectories flown, damage assessed, troops and equipment moved, and ammunition depleted) information is passed back and forth among the four levels. The results and implications of this information are used to make choices, continue or modify processes, and keep score. Thus, as the simulation time progresses delta-t by delta-t, the simulated war grinds on.
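
As a purely illustrative sketch (the function names and update rules below are invented for the example, not taken from any actual war-game code), the four nested levels might be organized along these lines:

    # Sketch of four nested, interdependent simulation levels (all names hypothetical).
    def artillery_round(dt):
        """Level 4: advance one shell's trajectory by one time increment."""
        return {"position_advanced_by": dt}

    def battalion(dt):
        """Level 3: advance one armor battalion, which has rounds in flight."""
        return {"rounds": [artillery_round(dt) for _ in range(3)]}

    def battle(dt):
        """Level 2: advance one battle, which contains several battalions."""
        return {"battalions": [battalion(dt) for _ in range(2)]}

    def war(total_time, dt):
        """Level 1: the overall war; iterate the battles and collect their results."""
        t, history = 0.0, []
        while t < total_time:
            t += dt                                        # one delta-t of simulation time
            history.append([battle(dt) for _ in range(2)]) # two simultaneous battles
        return history

    results = war(total_time=1.0, dt=0.1)   # ten increments of the whole hierarchy

At every increment, results are passed back up the hierarchy so that higher levels can make choices and keep score.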

The practical size of delta-t depends on the required accuracy and the speed of the fastest elements in our model. A typical delta-t might be 0.00001 seconds if the precise locations (to within a centimeter) of things such as artillery rounds and missiles were important. In this simulation, delta-t defines the fundamental quantum of simulation-time. If the fastest thing in our war simulation (an ancient war perhaps, or a modern one where all the gadgets don’t work) was a man running through the bushes with a spear, a delta-t of 0.1 second would probably be small enough.

Would simulating an ancient (spear throwing) or slow moving war (requiring a time base of no more than delta-t = 0.1 second) with our modern war simulator (that ticks off time in 0.00001 second increments) be a problem? No: there is no problem as long as we are simulating something that can be adequately modeled with time increments that are equal to or larger than 0.00001 seconds. It would make no sense to simulate the position of a running man every 0.00001 seconds. It might allow for a more detailed and complex model of the man (including blinks, burps, hair motion, progressive cell damage, and twitches); however, a man’s movement within so small a time increment (0.00001 sec) is completely inconsequential to the war.

What we would most likely do is use our previous coarse or low fidelity man-movement-model and increment it by a single 0.1 second increment every 10,000 (10^4) ticks of our simulation time clock. Our war game’s fundamental quantum of simulation-time is still equal to 0.00001 seconds, or 10^-5 sec. We are now incrementing our man-model in 0.1 second increments as a subroutine within a larger model (outer loop) that iterates every 0.00001 seconds. This will turn out to be a crucial concept later.

This more efficiently incremented, low-fidelity man-model (it remains detailed enough to define troop movement as a function of terrain) is purposely designed to be only as accurate, and incremented only as often, as it needs to be to serve the purpose that is intended by the overall simulation. Likewise, the loops and subroutines that represent various players in the simulation are provided the dynamic granularity (increment size) that their function requires. The one with the smallest required increment serves as the metronome (the fundamental quantum of time within our simulation) for all the others.

Let’s put this all together into one big war simulation. We will have man-model subroutines that are incremented by one tenth of a second every 10^4 increments of our fundamental simulation time. We will also have tank-model subroutines that are incremented by a hundredth of a second every 10^3 ticks of the fundamental clock. We might have aircraft-model subroutines that are incremented by a thousandth of a second every 10^2 ticks of the fundamental clock and missile subroutines that are incremented every third tick of the fundamental clock, and perhaps a nuclear damage propagation subroutine that will be incremented (evaluated) every tick of the fundamental clock.
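
A hedged sketch of such a scheduling scheme (the step function below is only a placeholder standing in for the real subroutines) shows how every model can be driven off the same fundamental 10^-5 second clock while being stepped only as often as its granularity requires:

    # Sketch of multi-rate updating driven by one fundamental clock (placeholder models).
    FUNDAMENTAL_DT = 1e-5                  # the smallest time quantum: 0.00001 s

    def step(name, dt):                    # stand-in for each model's update subroutine
        pass

    def run(ticks):
        for tick in range(1, ticks + 1):
            step("nuclear damage", FUNDAMENTAL_DT)     # evaluated every tick
            if tick % 3 == 0:
                step("missile", 3 * FUNDAMENTAL_DT)    # every third tick
            if tick % 10**2 == 0:
                step("aircraft", 1e-3)                 # 0.001 s every 10^2 ticks
            if tick % 10**3 == 0:
                step("tank", 1e-2)                     # 0.01 s every 10^3 ticks
            if tick % 10**4 == 0:
                step("man", 1e-1)                      # 0.1 s every 10^4 ticks

    run(ticks=10**5)                       # one full second of simulation time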

In this simulation, the master loop or simulation driver (the guy in charge, the outermost loop) is the one requiring the smallest time increment. The local time inside the man-model-loop jumps ahead one tenth-second at a time. The low fidelity simulated men who live in that local reality measure a quantum of their time as one-tenth second. From their perspective, real-time accumulates in increments of one-tenth second. One-tenth second is their local time quantum. If they could become sentient and learn to program, they could decide to simulate the growth of their hair and toenails, because these functions change slowly compared to their time quantum.

Let’s summarize the most useful ideas that have been generated by this discussion of dynamic simulations. Some of the time loops, subroutines, or dimensions of our big picture (entire war) simulation are iterated at faster rates than others. Apparent real-time is relative to each loop’s perspective or local reality.

Within a given local reality, we can only perceive events which produce effects that are significant over one local time quantum or more, thus rendering the activities of faster loops (smaller time quanta) invisible and incomprehensible. Within the hierarchy of causality (simulations within simulations or dimensions within dimensions), the master or outermost loop that drives everything else is defined as the one with the highest fundamental frequency (highest sampling rate or rate of iteration), which is the same as saying it has the smallest quantum of time.

Bells should be going off in your head relating this discussion of computer simulation to the discussion of AUM’s fundamental frequency and time quantum, and the differing sized quanta of time in NPMR and PMR (Chapter 31, Book 1). Each higher level of simulation, with its smaller time quantum, represents a larger perspective, a base of authority and control, and collects, processes and synthesizes the activities and results created by its lower level (inner) subordinate simulations. Dynamic timing loops within loops within loops, all interconnected and building one upon the other at various levels of scale. Do you notice the fractal-like characteristics of time loops within your dynamic reality? Do you get a glimmer of how everything is interrelated and interconnected within a generalized dynamic consciousness-evolution fractal ecosystem?

You will see later that what I define as delta-t represents the outer (controlling) loop of our immediate reality within OS. delta-t is used to calculate probable realities and is referred to as simulation time; it is related to time in NPMR. On the other hand, DELTA-t, a larger time increment, is defined as the increment that accumulates our PMR time, our apparent real-time. From a larger perspective, DELTA-t drives a lower level simulation (with its larger time increment) that is incremented only once every so many ticks of the overall simulation time clock within The Big Computer. Did I just imply that you, I, and our entire universe are simulated beings and objects? Yes: you will see that it is illuminating, consistent, and useful to model our reality that way. Throughout My Big TOE, I have dangled this idea of a simulated reality in front of you; as strange as it may seem, it will make more sense later on.
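
Read only as a loose analogy to the scheduling sketches above (the ratio between the two increments below is an arbitrary placeholder, not a number given in the text), the relationship between the two increments can be pictured like this:

    # Analogy only: a small delta-t drives the outer (simulation-time) loop, while
    # apparent PMR time accumulates in larger DELTA-t chunks once every N ticks.
    delta_t = 1.0                     # one quantum of the outer loop's simulation time
    N = 1000                          # hypothetical ratio between the two increments
    DELTA_t = N * delta_t             # one quantum of accumulated PMR time

    pmr_time = 0.0
    for tick in range(1, 5 * N + 1):  # outer loop: probable realities recalculated each tick
        if tick % N == 0:
            pmr_time += DELTA_t       # our apparent real-time advances by one DELTA-t
    print(pmr_time)                   # 5 * DELTA_t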

Recall that in Chapter 31, Book 1, I explained that time is quantized, meaning that time progresses by discrete increments rather than continuously, and that our time is an artificial construct created by AUM to define the space-time part of itself. In this section, you will see how quantized time loops allow us to create our reality, maintain a living history, and make choices that enable us to learn and to grow the quality of our consciousness (evolve our spiritual quality toward satisfying the goals of our existence). A digital reality offers up many interesting attributes.

You will soon see that time is not a fundamental property of the thread that is woven to produce the fabric of reality, but instead is merely a measurement construct, a tool for implementing organization and defining patterns, more related to the action of the machine that does the weaving than to the thread itself. Each cycle of the loom represents another time increment as thread after thread is added to produce the seemingly continuous fabric of our PMR 3D space-time experience.

The action of the machine, the process of weaving, should not be confused with the three-dimensional experience of the space-time fabric it creates. Nor should the fabric be seen to weave itself through some spontaneous mystical or magical process wherein time is created in the present moment without the need of factory or loom or the energy that makes them run, much less the design and purpose of the process. That space-time is spontaneously self-created from nothing – a self-woven, 4D fabric containing all past, present, and future events – is basically what most scientists believe these days because they cannot perceive the higher dimensions where the loom, the mill, and the energy that runs them reside. In fact, our present science is based on the fundamental implicit assumption that the loom and mill cannot possibly exist, or be relevant, because they lie beyond our direct objective 3D perception of a 4D space-time.

Thus, today’s scientists have painted themselves into a corner dependent upon reality mysteriously creating itself – a mystical belief-based concept they are greatly dissatisfied with and do not like to talk about – a fundamental failure of scientific understanding long swept under the rug of objective respectability.

On the other hand, the theists are content, as they have been for 10,000 years, to offer up their one pat answer for all situations and occasions: God does it. Meanwhile, the fabric of space-time continues to apparently weave itself out of nothing as we stumble in circles in pursuit of a Big Picture that we can somehow miraculously extract from our little picture. And that is where we are today, ladies and gentlemen, and where we have been for the last forty years since general relativity and quantum mechanics dropped the first shoe (told us in no uncertain terms that our physical reality was delusional). Einstein tried to lead us out of that wilderness with Unified Field Theory but could not find his way through the impenetrable cultural belief fog that obscured every avenue of escape. Belief blindness is as absolute as any blindness.

That a higher dimensional structure may not leave physical footprints is not that difficult to understand. Einstein was correctly looking for nonphysical footprints in his Unified Field Theory where mass was nothing more than higher intensity field strength. His error was looking for continuous fields within the space-time construct. He did not grasp the digital nature of reality – that space and time are quantized – and that space-time itself was only a local phenomenon, a virtual little picture reality dependent upon a more fundamental digital energy field called consciousness. He did not understand the primacy of consciousness as the fundamental substance, energy, or organization underlying existence. Instead of seeing space-time as primordial physics, a set of relationships and definitions, a rule-set, a construct of consciousness, he thought that space-time itself was the fundamental field. Consciousness is fundamental. Space-time is derived from a conscious intent to constrain individuated subsets of consciousness to a specific experience-base which we call PMR.

Even if Einstein had figured that out in the 1950s, his peers would have written him off as having lost it. He would have most likely ended his career in ridicule. Today we are much more familiar with the potential of the digital sciences. Perhaps now, in the twenty-first century, scientists will have the requisite vision to see and understand the paradigm shifts that are required to support a unified theory of reality. Perhaps non-scientists will recognize the Big Picture first and eventually bring the scientists along.

Tracking the nonphysical footprints of a more fundamental reality through the consciousness wilderness is what My Big TOE is all about – and you do not need to be a physicist or mathematician to get it because the details and explanations are not little picture logic puzzles that exist only within PMR. In the Big Picture, a deep understanding of reality is not the exclusive preserve of the scientist and mathematician: That state of affairs, where scientists are the high priests of reality, is a little picture phenomenon only. When it comes to understanding the Big Picture, there are no academic prerequisites. You do not need to wait for the science guys and mathematicians to lead this parade – march on to the beat of your own experience and inner knowing and they will eventually follow your lead.

It is as though the objects in the simulation (local AI Guys within each subroutine) have become sentient but can’t perceive outside their time loop level and consequently, are oblivious to (or vehemently deny) the bigger picture. They live in their limited universe paying homage to the Loop Gods who occasionally provide fresh input data and to whom they offer up the results of their efforts.

© Tom Campbell. License: All rights reserved, 09-09-2002