
The GCP Event Experiment: Design, Analytical Methods, Results

Peter Bancel* and Roger Nelson**
*Global Consciousness Project, Paris, France
**Global Consciousness Project, Princeton, NJ, USA
Address correspondence to Roger Nelson, rdnelson@princeton.edu

Abstract

Studies of anomalous correlations between mind and matter usually focus on participating subjects and isolated target systems. We report on a decade-long experiment which finds that anomalous mind-matter correlations may be a pervasive aspect of reality. The Global Consciousness Project (GCP) measures the output deviation of a global network of physical random number generators (RNG) at the time of major world events. The project hypothesizes that the coherent attention or emotional response of large populations induced by the events will correspond to characteristic deviations of the network output. We describe the motivation and scope of the experiment and the analytical procedures employed to test the hypothesis, and present the results of 236 events accumulated over the first nine years of operation. The cumulative significance across all events favors the hypothesis by more than 4.5 standard deviations. Beyond a test of the basic hypothesis, secondary analyses show that the result is driven by correlations in the RNG network across global distances.



1. Introduction

In this paper we present an analysis of an ongoing experiment conducted by the Global Consciousness Project (GCP). The long-term experiment studies the proposition that subtle deviations in random systems will correlate with periods of intense collective behavior in global populations. The project is motivated by extensive experimental evidence documenting anomalous effects in random number generators (RNGs) under conditions of directed mental intention1,2,3. Such effects have also been described in local field-based studies during group events of a psychologically cohesive nature4,5,6,7. The GCP extends these studies to a global scale by hypothesizing that world events which provoke an emotive or attentional response from large numbers of people will correspond to periods of anomalous deviations in a geographically distributed RNG network.

The GCP network consists of over 65 RNGs deployed at fixed sites around the world. Data from the RNGs are sampled once per second and archived via the Internet into a continuously updated database. Tests of the hypothesis are performed by identifying data periods of pre-specified events and applying pre-specified analysis algorithms. After registration of these analysis parameters, the data archive is opened and a z-score (i.e., the standard normal deviation from expectation) is generated from the predetermined algorithm. Over 250 replications of this protocol have been implemented since the project's inception in 1998, and 236 of these meet strict criteria for network stability and correct hypothesis definition. To date, the cumulative score of data deviations during the designated events stands at 4.5 standard deviations (p-value ~ 3 x 10^-6), confirming the general hypothesis to high significance.

The purpose of this paper is to present a full description of these results, including an indication of the structure underlying the statistical deviations we find, and to draw some preliminary conclusions about the experiment's implications for parapsychological or psi research. We also correct some misconceptions and incorrect interpretations of the project that have appeared in the popular press and elsewhere8,9. Finally, we intend this paper as a foundation for a series of detailed investigations to extend and illuminate the primary results.

Parapsychology developed in the nineteenth century to assess the validity of extraordinary anecdotal claims of anomalous perception. Research focused on the examination of case studies, following standard methods practiced in psychology at the time. The extensive literature, although puzzling and controversial, indicated to many researchers that the case studies could not be wholly explained as delusional episodes or misconstrued psychological projections. With the advent of experimental psychology in the twentieth century, research moved to the laboratory, where it gradually attracted the interest of scientists from other fields who brought a range of methods and techniques to the problem. One consequence, which is implicit in the locution "anomalous phenomena" as an alternative designation for psi, has been a growing body of evidence that the variety of phenomena investigated cannot be understood only in psychological terms.
Psi phenomena reported in the literature can be framed in two ways: as anomalous perception, by which an individual accesses information inaccessible to the ordinary senses, or as anomalous physical behavior, for which measured deviations from expectation in physical systems remain unexplained by physical laws. Laboratory studies of anomalous perception include telepathy, Ganzfeld and remote-viewing experiments10. These often involve a pair of subjects, and a simple conception of positive outcomes is that the anomaly involves an access or sharing of mental contents between the subjects. Similar experiments using a single subject and an external target of some kind suggest that the reach of anomalous perception extends to the environment in a general way. Related studies which monitor physiological responses of subjects indicate that anomalous perception may occur subliminally and need not be accompanied by mentation or conscious reports11,12.

Experiments which employ physiological measurements raise the question of how psi phenomena fit with our knowledge of the material world. Considering these studies, we can ask whether psi should be construed as a mental phenomenon, a mind-matter interaction, or a subtle correlation between the physical and the mental. A careful parsing of interpretations leads straight into the controversies current in contemporary debates on consciousness and the mind/body problem. This is an inevitable difficulty for research which deals with phenomena anchored in the non-material or mental domain. Indeed, terms such as "mental", "mind", "information" and even "physical" are laden with often implicit or imprecise assumptions. A related difficulty involves how to formulate the relationship between the dual domains indicated by our experiential distinction of the mental and the physical. Different approaches may describe the relationship in terms of causation, interaction, correlation, or epiphenomena. Each of these carries deep assumptions which are often difficult to explicate fully.

Here, we avoid presuppositions about the relation between material and mental domains, and about how our results might be accommodated theoretically. We work from a commonsense view of the distinction between material and mental phenomena, and remain open as to how these concepts may be understood in the light of experimental results. While ontological and epistemological precision is important for the interpretation of data, the adoption of a narrow stance is not required for experimentation, nor is it necessarily desirable at the outset when dealing with such a poorly understood and complex topic as psi. Experiments provide input for models and serve to guide interpretations and shape theory. Accordingly, the GCP experiment aims to test a conjecture which would extend the range of anomalous phenomena currently encompassed by psi research to study an operationally defined "global consciousness". It is, however, premature to regard this as testing a theory of global consciousness.

The experiment is motivated by a large body of laboratory evidence documenting the occurrence of anomalous deviations in physical RNGs2,3. These experiments address the second pole of psi research, which investigates how psi manifests in the physical domain. The basic experimental design, developed in the 1970s and refined in the ensuing decades, posits that the statistical output of stable, truly random (typically quantum) systems can be altered by the directed intention of human agents. In a typical experiment, a subject will spend some minutes in the presence of an RNG, often while receiving sensory feedback of the device output, and mentally "intend" to alter or bias the output in some predetermined way.
The experiments find small but significant deviations which accord with the predetermined human intentions. These results are important because they suggest that anomalous signatures of mental activity may be detectable in the material domain by physical devices. Attractive features of the RNG studies are that they deal with calibrated systems whose physical principles are well understood and that the data analysis employs straightforward statistical methods. A disadvantage is that the effect sizes are small, even by the scale of parapsychological research. A further complication is that the measured effect sizes vary considerably, both within and across studies. This has led to a lively debate regarding the role that moderator variables such as subject performance, publication bias, or RNG characteristics may play in assessing the measured effect13.

The issue of subject performance is germane to psi research in general and is closely related to how experiments incorporate and interpret subject intention and the use of "targets". An experimental approach that circumvents the issue altogether proposes that deviations in RNG output may correspond to focused mental activity in groups of people blind to the experiment. Field studies in which a portable RNG is sampled during psychologically engaging group events such as ceremonies, prayer meetings and sporting events have measured significant deviations in RNG variance at the time of the events4-7. Too few field RNG studies have been reported to assert a local anomalous field RNG effect with complete confidence. Yet, these experiments suggest a link between anomalous phenomena and mental states which extends beyond intentionality or cognition and removes the primacy of the individual subject.

In the simplest interpretation, these experiments suggest that collective mental activity is connected in a deep and general way with the physical environment. This contrasts with traditional thinking in parapsychology, for which individual cognition, perception and intention are key elements of psi effects. It suggests that anomalous correspondence between the mental and physical domains is not specific to individuals and locales. Numerous experiments have found positive outcomes while partially relaxing the constraints imposed by intention, subject individuality, and target specificity (including the locality this implies). However, a different picture may emerge if all three constraints are removed simultaneously, and this has not been studied in a systematic way. For example, it is less evident how one can attribute a preferred status to either the mental or the physical domain, as is done in some dualist or reductionist frameworks, when both intention and subject individuality are absent. Removing all three constraints also challenges a unified understanding of psi by significantly broadening the variety of anomalous phenomena observed.

The GCP extends the RNG design by reducing these constraints as much as possible. The project expands the canonical experiment, in which the focused intention of an individual subject is directed at one RNG, to its most general realization. The individual subject is replaced by large human populations, the single RNG by a synchronous global network, and focused intention is translated into designated periods of collective attention in the population. The experiment then tests for deviations in the network output during the designated times of collective attention. Replicable significant evidence of a correlation between network deviation and collective mental activity would then strengthen the basis for an operational definition of global consciousness.

In the next section we discuss the experimental hypothesis and design, and indicate why we prefer a composite hypothesis to a simple one: the hypothesis is framed broadly and testing is implemented in a series of predictions about specific events.
The protocol is described, including the analysis algorithms used to determine event statistics. In section three we review the technical aspects of the RNG network and data acquisition, as well as the data archive structure, normalization and data vetting procedures. The results of the formal event experiment are presented in section four. We show that the highly significant result is due to a broadly distributed effect and is not driven by outliers. An important conclusion is that the effect size per event is small, yielding an average z-score of 0.3. This result, coupled with the clear absence of outliers, means we cannot meaningfully interpret the statistics of individual events, but must rely on composite scores of many events taken together. We argue that the cumulative z-score of the formal experiment is driven by inter-RNG correlations in the network. This is an entirely new result in psi research. The final section discusses these results in light of several interpretational viewpoints and indicates future research directions.

2. Experimental Hypothesis and Design

The GCP hypothesis can be stated as follows: Periods of collective emotional or attentional behavior in widely distributed populations will correlate with deviations from expectation in a global network of RNGs. The hypothesis formalizes the conjecture that a physical system which is a well-defined part of the material world will at times exhibit anomalous behavior correlating with human mental activity. A viable test of the hypothesis requires designation of two elements: 1) an experimental period of mental activity, the event, and 2) a specific measure of deviation for the corresponding data from the RNG network.

At the outset, we do not know what the determining factors of the experiment will be. We therefore adopt an approach which avoids over-determining the experimental variables at the start, as is suitable for the beginning stages of an exploratory research endeavor. Emotional or attentional engagement on a global scale is taken as the guiding criterion for event designation. It is obvious that mental activity, both collective and individual, is ubiquitous and ongoing in the world. Nevertheless, a qualitative distinction can be made for events which simultaneously focus the attention of many people separated by regional or global distances. It is reasonable to assume that occasions such as New Year's Eve celebrations or the news report of a major terrorist attack will define global events in this sense, representing identifiably singular instances of synchronous, communal mental activity. The criterion is inclusive in the sense that events with various types of population engagement may be studied in an effort to learn which factors contribute to the hypothesized effect. The experiment is designed to study other potentially influential factors as well. For example, in correspondence with the distributed character of the global events, the RNGs are deployed widely around the globe, which facilitates analyses of the role spatial parameters may play in the effect.

The deviation statistics we employ are measures of the RNG network's variance. RNG experiments with intending subjects typically measure deviations of the mean, predicting that a bias of the nominally symmetric (about the mean) RNG output will correlate with the subject's intention. This is a sensible protocol because the stated intention distinguishes between outcomes by attributing a preferred "direction" to the experimental system of subject + RNG.
When intention (and hence, direction) is not a component of the experimental design, the natural indicator for RNG deviation is a second-order statistic, and canonical variance statistics offer attractive alternatives. It is worth noting that variance is closely related to entropy, and that the symmetry of random entropic deviation could be broken if an anomalous coupling to mental phenomena were present. A scenario like this could lend a preferred direction to variance deviations during specified events. The strategy we adopt is to designate a variance measure as the event statistic and to predict a consistent direction of deviation from expectation should an anomalous effect obtain. Specifically, a positive deviation, representing increased variance, is predicted for most of the events. As with the event designation, the criterion for the variance statistic is inclusive. This means that freedom in the choice of the statistic is allowed, based on prior experience or an assessment of the event type. The statistic is chosen, of course, prior to examination of the data.

The details of the experimental program can be summarized as follows. Data from a global network of RNGs are acquired continuously into a closed archive. An event of global significance is identified by the project. A time period for the event is determined and a variance statistic defined. The event is designated a formal event by entering this information into a hypothesis and prediction registry*. Appendix 1 shows some examples. After the event is registered, the data are unpacked from the archive and the test statistic is calculated. The deviation of the test statistic from expectation is converted to an equivalent normal z-score and the score is added to a table of all formal event results. The experiment seeks to determine whether the composite of all event z-scores differs significantly from the null expectation.

* Each registry entry is a prediction of a specific response by the network to an event. This should not be confused with predicting that an event will occur. The entries are made before data are examined, prior to the event if it is known, but afterward for events that are unpredictable, such as terrorist attacks, aviation accidents, or earthquakes. See Appendix 1 for examples showing the range of formal events.

Because the experiment explores new questions, the general hypothesis allows broad latitude in the selection of event variables and as such has limited explanatory power. This is deemed preferable to arbitrarily adopting narrow selection criteria in the absence of experimental or theoretical guidance. It is implicit in the experimental design that a positive cumulative result will provide not only a degree of confirmation for the hypothesis, but will also identify data sets suitable for further analysis. It should be clear that we do not apply inferential hypothesis testing to the individual events; there is no probability acceptance criterion applied to the event z-scores.

3. Network and Data Structure

The RNG network has been described previously14,15. Briefly, the network employs research-grade, commercial random bit generators [Orion, Mindsong, PEAR]16. The devices process quantum-level electronic noise (post-barrier voltage from electron tunneling in diodes or field effect transistors, or Johnson noise) to generate a bit stream with binomial probability of 1/2, at rates of several thousand bits per second. The RNG circuits are third- or fourth-generation designs, refined in laboratory research, and quality components are specified to ensure stable long-term operation. They are electromagnetically shielded, and to eliminate residual biases that might nevertheless arise from temperature changes, component aging, or other environmental factors, the bit sequence is processed with a logical exclusive-or operation (XOR) against a known p = 0.5 sequence. The Mindsong and PEAR hardware designs include the XOR; it is applied in software to the output of the Orion device. A serial interface is used to connect to the host computer and provide power.

The network consists of approximately 65 RNGs deployed at host computers around the world. The hosts run custom software to acquire data from the local RNG and periodically transfer the checksummed data over the Internet to the project archive in Princeton, NJ. The network hosts are synchronized via an Internet network time protocol and acquire one timestamped data trial per second from each RNG. The data trials, which are the sum of 200 consecutive XOR'd bits, are collected at the start of each second. Subsequent bits generated during the second are discarded. This is a legacy procedure developed principally at the Princeton Engineering Anomalies Research laboratory17, which provides additional assurance that consecutive trials are independent and reduces the data to a nominal binomial (200, 1/2) distribution.

The raw data archive is freely available for download at the project website18. It contains a continuous record of trials from all commissioned RNGs since the project's initiation on August 5, 1998. As of January 1, 2008 the database contains over 13 billion trials, representing the accumulation of 2.6 terabits of RNG output. The current network deployment is shown in Figure 1.

Figure 1. The global distribution of the GCP network.
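The trial construction just described (200 XOR'd bits summed once per second, giving a nominal binomial(200, 1/2) count that is later standardized for analysis) can be illustrated with a short simulation. This is only a sketch: the function names are ours and a pseudorandom generator stands in for the hardware noise source; it is not the GCP acquisition code.

```python
import numpy as np

rng = np.random.default_rng(42)

def gcp_trial(n_bits: int = 200) -> int:
    """Simulate one RNG-second: XOR a raw bit stream against a known
    p = 0.5 template and sum the resulting 200 bits (binomial(200, 1/2))."""
    raw_bits = rng.integers(0, 2, n_bits)   # stand-in for hardware noise bits
    mask = np.arange(n_bits) % 2            # deterministic template standing in for the known p = 0.5 XOR sequence
    return int(np.sum(raw_bits ^ mask))     # trial value, expectation 100

def standardize(trial_sum, n_bits=200, p=0.5):
    """Convert trial sums to approximately standard normal z values:
    mean n*p = 100, variance n*p*(1-p) = 50 (kurtosis 2.99 remains)."""
    return (trial_sum - n_bits * p) / np.sqrt(n_bits * p * (1 - p))

# One simulated second of network data: ~65 RNGs, one trial each.
trials = np.array([gcp_trial() for _ in range(65)])
z = standardize(trials)
print(trials[:5], z[:5])
```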

The data are examined for errors and stability before analysis. Occasional data errors (typically due to electrical supply or serial port malfunction) are easily recognized as sequences of wildly improbable trial values. RNG stability, which is crucial to the experimental design, is verified by a three-pass procedure. First, all trial values with binomial probability < 10^-10 are considered errors and are removed. This pass finds most of the serious data errors while removing few possibly valid trials (~ three out of more than 13 billion). Second, the trials are normalized to zero mean and a variance of one. (The standardization is done separately for each RNG's data since the devices typically have small, characteristic variance biases. The biases are most pronounced for the Mindsong RNGs and appear to be due to a slight negative autocorrelation in the bit output. These biases are on the order of one part in 10,000 and are detectable after some months of continuous data output.) Next, fluctuations in both the mean and variance of the standardized data are examined for each RNG. This is done for fixed timescales by a selection of data blockings. Blocks with mean or variance exceeding a cutoff value are tagged for more careful examination. The procedure is run for block lengths of 1 minute, 5 minutes, 1 hour, 4 hours, 1 day, 5 days, 1 month and 3 months. A Poisson testing procedure then decides whether tagged blocks should be masked from analysis. If the excluded data for any RNG exceed 15 minutes on a given day, the data for that day are removed. The third pass re-calculates the normalizations for the vetted data. The blocking procedure is repeated to assure stability for all RNGs at the various timescales.

In practice, we find the devices are stable over years of operation and the infrequent instances of excluded data are usually traced to hardware failure or software problems. The final normalizations produce approximately standard normal trial values (a binomial kurtosis of 2.99 remains) which can be safely input to analyses.

The deployment of RNGs in the GCP network depends on the availability of host sites as well as local conditions, especially Internet infrastructure. The network has grown over time, but the decommissioning of hosts also causes occasional alterations in the geographical deployment. Local interruptions, which are more frequent in locations where Internet or electrical grid stability is poor, result in intermittent null periods for some RNG sites. Figure 2 shows the evolution of commissioned RNGs over time.

Figure 2. The evolution of online RNGs in the network with time.

The data trials are standardized binomial variables. The binomial distribution differs from the normal distribution in its fourth moment: the binomial[200, 1/2] has a theoretical kurtosis of 2.99, as opposed to 3 for the normal distribution. The kurtosis is not modified by the standardization procedure.
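The three-pass vetting described above can likewise be sketched in code. The sketch below mirrors the stated 10^-10 trial cutoff and the per-RNG standardization, but the flagging threshold, the subset of block lengths, and the omission of the Poisson decision step are simplifications of ours, not the project's actual procedure.

```python
import numpy as np
from scipy import stats

P_CUT = 1e-10                                            # pass 1 cutoff, from the text
BLOCK_SIZES = [60, 300, 3600, 4 * 3600, 86400]           # seconds; a subset of the listed blockings

def pass1_remove_errors(trials):
    """Mask trial sums whose two-tailed binomial(200, 0.5) probability is below P_CUT."""
    lo = stats.binom.cdf(trials, 200, 0.5)
    hi = stats.binom.sf(trials - 1, 200, 0.5)
    p_two = 2.0 * np.minimum(lo, hi)
    return np.where(p_two < P_CUT, np.nan, trials.astype(float))

def pass2_standardize(trials):
    """Standardize one RNG's surviving trials to zero mean and unit variance."""
    return (trials - np.nanmean(trials)) / np.nanstd(trials)

def pass3_flag_blocks(z, block, z_cut=4.0):
    """Flag blocks whose mean or variance deviates strongly from expectation
    (the cutoff z_cut is an illustrative placeholder)."""
    flags = []
    for k in range(len(z) // block):
        seg = z[k * block:(k + 1) * block]
        seg = seg[~np.isnan(seg)]
        if seg.size < 2:
            continue
        mean_z = seg.mean() * np.sqrt(seg.size)                      # z-score of the block mean
        chi2 = (seg.size - 1) * seg.var(ddof=1)                      # ~chi-squared, df = size - 1
        var_z = (chi2 - (seg.size - 1)) / np.sqrt(2 * (seg.size - 1))
        if abs(mean_z) > z_cut or abs(var_z) > z_cut:
            flags.append(k)
    return flags

# Example: screen one simulated RNG-day of trials at several block lengths.
trials = np.random.default_rng(5).binomial(200, 0.5, 86_400)
z = pass2_standardize(pass1_remove_errors(trials))
for b in BLOCK_SIZES:
    print(b, pass3_flag_blocks(z, b))
```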





4. The Formal Experimental Results

The experiment currently comprises 236 formal events spanning a period of over nine years from December 1998 through January 2008§. Event durations vary and are typically a day or less, with the median length being four hours. The event statistics are calculated using one of three different formulations of variance. Two of these comprise 95% of all events and are differentiated by their blocking schemes. Most of the remaining events are New Year's Eve celebrations, and use a more elaborate blocking scheme and variance statistic19.

The two principal variance statistics can be formulated as follows. Normalized trial values are indexed as matrix elements z_{t,r}, where r refers to an RNG in the network and t labels the time in seconds. Sums of the trial values follow a normal distribution to high accuracy, by the central limit theorem and the fact that the individual binomial z_{t,r}'s closely approximate a normal distribution to begin with. We can convert any such sum to an approximate standard normal variable as:
Z_B = \frac{1}{\sqrt{N}} \sum_{t,r}^{T,R} z_{t,r;B}    (4.1)

where R and T are blockings over RNGs and time, respectively, N is the number of terms in the sum, and B is a block index. The variance of the Z_B with respect to the (zero) theoretical mean is then

\chi^2 = \sum_B Z_B^2    (4.2)

which is approximately chi-squared with B degrees of freedom (the standard formulation of variance would divide the chi-squared quantity by B). This formulation provides a compact expression of different variance statistics in terms of the block parameters T and R. The normalization of ZB by \sqrt{N} in Eq. 4.1 removes complications from trial vacancies (nulls) when summing over a block. From re-sampling analysis, we find that for N > 10, these blocked variance statistics are nearly indistinguishable from a theoretical chi-squared distribution on the time scale of the events, and the theoretical distribution can be used for assigning probability levels to the event statistics.
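For concreteness, the blocked statistic of Eqs. 4.1 and 4.2 can be written as a short routine. The array layout, function names, and the conversion of the chi-squared value to a signed z-score via its one-tailed probability are our own illustrative choices; missing trials are handled with NaNs, as suggested by the per-block normalization.

```python
import numpy as np
from scipy import stats

def blocked_chi2(z, t_block=1, per_rng=False):
    """z: standardized trials, shape (T0 seconds, R RNGs), NaN for missing trials.

    per_rng=False -> network variance: each block spans all RNGs over t_block seconds.
    per_rng=True  -> device variance: each block spans one RNG over t_block seconds.
    Returns (chi2, dof) following Eqs. 4.1-4.2."""
    T0, R = z.shape
    chi2, dof = 0.0, 0
    cols = [slice(r, r + 1) for r in range(R)] if per_rng else [slice(0, R)]
    for t in range(0, T0 - T0 % t_block, t_block):
        for c in cols:
            block = z[t:t + t_block, c]
            n = np.isfinite(block).sum()           # N: non-missing trials in the block
            if n == 0:
                continue
            Zb = np.nansum(block) / np.sqrt(n)     # Eq. 4.1
            chi2 += Zb ** 2                        # Eq. 4.2
            dof += 1
    return chi2, dof

def event_z(chi2, dof):
    """Convert the chi-squared statistic to a signed normal z-score via its one-tailed probability."""
    return stats.norm.isf(stats.chi2.sf(chi2, dof))

# Example on simulated null data: a one-hour event, 65 RNGs.
z = np.random.default_rng(0).standard_normal((3600, 65))
print(event_z(*blocked_chi2(z, t_block=1, per_rng=False)))   # network variance
print(event_z(*blocked_chi2(z, t_block=1, per_rng=True)))    # device (trial) variance
```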

§ Of 257 events in the prediction registry, twenty-one are excluded from the formal experiment for technical or methodological reasons. Network instabilities resulted in frequent null trials and potential biases during the first weeks of data acquisition. The network achieved stability in December 1998, and a re-sampling analysis has determined that event z-scores cannot be reliably determined for the earlier data. Ten early events registered prior to stable operation have therefore been excluded. An independent review, commissioned by the Project in 2002, identified eleven events with potential methodological errors or ambiguities. These events were excluded after the review was completed and an improved prediction registration procedure was adopted. An extensive reexamination of data integrity and prediction methodology completed prior to publication of this paper confirmed both the earlier event rejections and the acceptance of all registered events since 2003. Excluding the 21 rejected events from the experiment reduces the mean event z-score slightly. The value decreases from 0.33, the mean over all 257 registered events, to 0.30 for the 236 accepted events of the formal experiment.



Using the blocking formulation, the principal statistics of the formal experiment are the network variance (for which each block includes all RNGs) and the device variance (for which each block contains one RNG only). These statistics are distinguished by the correlations they can detect. The network variance is sensitive to correlations between RNGs (a spatial characteristic), and the device variance is sensitive to autocorrelations in the RNG outputs (a temporal characteristic). Special cases obtain for the minimal time blocking of one second, which eliminates all temporal correlations. The network variance correlations then become purely spatial. The device variance at one-second blocking assigns a single trial to each block and is simply the variance of all trials. This trial variance contributes to both formulations regardless of the time resolution. Table 1 lists the distribution of events among the different statistical recipes employed.

Table 1. Event Analysis Recipes

Recipe                        Time Blocking (secs)    Number of Events
Network Variance (205)        1                       186
                              60                      4
  (epoch)                     1                       15
Device Variance (22)          120                     1
                              600                     1
                              900                     19
                              3600                    1
New Year's Variance (9)       1                       9

Table 1. The distribution of statistical variance recipes for 236 events. The epoch recipes involve a time-aligned concatenation of multiple event periods. Epoch averaging is applied to punctuated events such as New Year's Eve, which comprises successive midnights in different time zones. The New Year's Variance recipe uses epoch blocking evaluated for a complex recipe devised for New Year's Eve celebrations.

The formal experimental result is defined as the aggregate of event z-scores. The event z-scores are derived from the signed, one-tailed probability values of the chi-squared variance statistics and are combined as

Z_{Tot} = \frac{1}{\sqrt{N}} \sum_{E} Z_E    (4.3)
which yields a total z-score with equal weighting for events. We find ZTot = 4.55 (p-value 3 x 10^-6, one-tailed). This is the main result of the experiment. It implies that, at a significance greater than four standard deviations, the experimental hypothesis obtains for the event data. The event effect size (the mean of the event z-scores) is 0.296, with a 95% confidence interval of (0.163, 0.429).

Importantly, the result is not driven by outliers. The 10% trimmed mean (removal of 10% of the event z-scores from each end of the z-score distribution) is 0.300 (confidence interval (0.149, 0.452)). The absence of outliers can also be seen by inspecting plots of the z-score distribution. Figures 3 and 4 show two visualizations of the distribution: the chronological cumulative sum and the sorted cumulative distribution function.

Figure 3. The summed chronological deviation of 236 event z-scores. Each point represents the contribution from the normal z-score of a single event. The null expectation is zero and the parabola gives a 5% probability envelope for positive deviation. The relatively steady trend shows that the cumulative deviation is broadly distributed among many events and is not due to outlier events.

Figure 4. The plot compares the (sorted) cumulative distribution function of event z-scores with the standard normal distribution. The formal result is evidenced as a positive shift of the event z-scores along the horizontal axis. The median z-score is 0.365, as opposed to zero for the null hypothesis.
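Equation 4.3 is an equal-weight (Stouffer-type) combination of the event z-scores. Assuming an array event_z holding the event scores (filled here with simulated stand-in values, so the outputs will only approximate the reported numbers), the headline quantities are computed as follows:

```python
import numpy as np
from scipy import stats

# Stand-in for the 236 formal event z-scores (simulated with the reported mean).
event_z = np.random.default_rng(1).normal(0.3, 1.0, 236)

z_tot = event_z.sum() / np.sqrt(len(event_z))   # Eq. 4.3; ~4.55 for the formal set
p_tot = stats.norm.sf(z_tot)                    # one-tailed p-value; ~3e-6 for the formal set

mean_es = event_z.mean()                        # event effect size; ~0.30 reported
ci = stats.norm.interval(0.95, loc=mean_es,
                         scale=event_z.std(ddof=1) / np.sqrt(len(event_z)))

trimmed = stats.trim_mean(event_z, 0.10)        # 10% trimmed mean; ~0.30 reported
print(z_tot, p_tot, mean_es, ci, trimmed)
```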



The formal experimental result combines diverse event predictions as equally weighted z-scores, ZE. As such, it does not provide information about which secondary parameters drive the measured effect. The relative importance of event types, durations, and the different deviation statistics can only be addressed by a further analysis of the event data. However, a number of conclusions can be drawn from the distribution of the event z-scores.

First, the event effect size of 0.30 is broadly distributed and small. This means that although an individual event is likely to contribute to the aggregate result, there is not sufficient statistical power to meaningfully interpret a single event statistic, and analyses must therefore use large event sets. For example, an idealized model which attributes a uniform population mean of 0.3 to events would require at least 70 events to attain a power of 80% at the 95% confidence level.

Second, given that there are no outliers, the small effect size implies that statistical noise will determine the distribution of event z-scores. An immediate consequence is that the distribution shape will be approximately Gaussian. Figure 4 shows that, aside from a shifted mean value, the event z-scores do have the form of a normal distribution. We find that the z-scores fit reasonably well to a standard normal distribution with mean 0.3, and that tests of the variance, skewness and kurtosis also accord with a normal distribution of variance = 1.

Third, the mean event z-score represents a lower bound of the measured effect. The positive aggregate result, which is in accord with the hypothesis, may nevertheless include some events that produce actual negative deviations. In this case, the magnitude of the true event effect size will be larger than our measured value. It is also plausible, given the exploratory nature of the event selection procedure, that a number of events might correspond to data periods with truly null effects†.

We can develop these conclusions to estimate the fractions of events with null or negative variance deviation via a simple model. As discussed above, the event z-scores distribute approximately normally about the experimental mean. A model for the fractions of positive, negative and null events can be constructed by taking the sum of three standard normal Gaussian distributions with positive, zero and negative means, respectively. The means and optimal weights of the model Gaussians are determined by fitting the model to the distribution of experimental z-scores. Details are described in Appendix 2. The model yields a region of preferred fractional composition which includes potential contributions from both null and negative-going events. We estimate a positive event fraction of 67%, with 16% and 17% for the fractions of null and negative deviation events, respectively. The corresponding means for positive and negative deviation events are 0.56 ± 0.09 and -0.49 ± 0.20 (1-sigma uncertainties). The model thus finds a mean effect size for the positive deviation events that is substantially larger than the average event z-score of 0.30 ± 0.08. A conservative conclusion is that the model provides quantitative evidence for the reasonable supposition that the mean event z-score is a lower bound to the event effect size. It also underscores the possibility that the sign of the variance deviations may be negative for a minority fraction of events.

† While the standard prediction is a positive deviation, the exploratory nature of the experiment provides an opportunity to identify event types or categories that may yield null or negative results. If we somehow knew that all events produced positive deviations, then the mean event z-score would be a valid estimate of the event effect size. However, since some events may be truly null or anomalously negative-going, the mean z-score estimates only a lower bound on the effect size magnitude.
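The statement that roughly 70 events are needed for 80% power follows from a standard one-sided, one-sample z-test power calculation with effect size 0.3 and alpha = 0.05; a minimal check:

```python
import numpy as np
from scipy import stats

alpha, power, effect = 0.05, 0.80, 0.30
z_alpha = stats.norm.isf(alpha)        # ~1.645
z_beta = stats.norm.isf(1 - power)     # ~0.842
n = ((z_alpha + z_beta) / effect) ** 2
print(np.ceil(n))                      # ~69 events, consistent with "at least 70"
```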



The experimental variance statistics measure different kinds of data deviations and we would like to know which of these drive the aggregate result, ZTot. The formal experiment selects a variance recipe depending on the registry specifications for each event. Different variance statistics can be compared by applying them uniformly to the events and calculating the resulting ZTot. The network variance for different time blockings can be written compactly as a function of the network autocorrelation. As expressed in Eq. (4.2), the chi-squared variance statistic for an event is the sum of the squared block z-scores:

\chi^2 = \sum_B Z_B^2    (4.4)

For network blockings of length T, this can be decomposed into a term proportional to the 1-second network variance and a term which is a function of the autocorrelation:

\chi^2_T = \frac{\chi^2_0}{T} + \Phi_T(\rho)    (4.5)

where \Phi_T(\rho) collects the contributions of the autocorrelation at lags up to T - 1. The chi-squared variance is converted to a z-score for the event as:

Z_E = \frac{Z_0}{\sqrt{T}} + \sqrt{\frac{2 T_0}{T^3}} \sum_{l=1}^{T-1} (T - l)\,\rho(l)    (4.6)

Here, Z0 is the event z-score at 1-second blocking and ρ(l) is the autocorrelation of the 1-second z-scores ZB=1 at lag l. The decomposition shows that, for zero autocorrelation (the null behavior), the second term is identically zero and the measured event z-score decreases as the inverse square root of the block length. Figure 5 shows that the aggregate ZTot for the network variance falls within a 1-sigma envelope of the expected theoretical decrease for the case ρ = 0, demonstrating that temporal network autocorrelations are not present in the network-blocked data. This calculation uses 212 events, instead of the full 236. We remove 19 New Year's events and 5 events which have extremely long durations and disproportionate weight, to allow uniform application of the network and device variance recipes.

The autocorrelation formulation yields a more accurate estimate since it averages over all realizations of blocking (e.g., there are N-1 ways to initiate a blocking of block length N).
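The 1/√T behavior of the first term in Eq. 4.6 can be checked by simulation. The sketch below injects a small, purely spatial variance excess into simulated 1-second network z-scores with no autocorrelation, and confirms that the mean event z-score at block length T tracks Z0/√T. The ensemble size and the injected excess are arbitrary illustrative choices, not fitted to the GCP data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_events, T0, excess = 300, 4 * 3600, 0.02     # toy ensemble; "excess" is an illustrative variance excess

def z_at_blocking(Z1, T):
    """Event z-score at block length T (Eqs. 4.1-4.2 applied to T-second time blocks)."""
    n = len(Z1) // T
    Zb = Z1[:n * T].reshape(n, T).sum(axis=1) / np.sqrt(T)
    return stats.norm.isf(stats.chi2.sf(np.sum(Zb ** 2), n))

# Simulated 1-second network z-scores: independent in time (zero autocorrelation).
events = [rng.standard_normal(T0) * np.sqrt(1 + excess) for _ in range(n_events)]
z1 = np.mean([z_at_blocking(e, 1) for e in events])
for T in (1, 4, 16, 64):
    zT = np.mean([z_at_blocking(e, T) for e in events])
    print(T, round(zT, 2), round(z1 / np.sqrt(T), 2))   # mean z at blocking T vs. z1 / sqrt(T)
```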



Figure 5. The z-score, ZTot for 212 events using network variance recipes. The aggregate z-score is calculated for time blockings ranging from 1 second to 1 minute. The value of ZTot is largest for 1-second blocking and remains within a 1-sigma envelope of theoretical expectation assuming zero autocorrelation. The plot indicates that the spatial network correlations do not persist beyond the 1-second time scale.

Results for the device variance for a selection of blockings, T, are shown in Figure 6. The device variance has been calculated explicitly for unique blocking choices because trial vacancies (nulls) complicate a more general treatment. (Trial vacancies do not affect the network blocking, which produces non-null data seconds for all events.) The blockings in this illustration extend from one second to the full duration of the event. In all cases, the magnitude of the mean event z-score is less than 0.1, which is within 1.7 standard deviations of expectation. This is far below the formal result, which has a mean z-score of 0.30. The different device variance estimates are not independent; the longer times include all shorter blocks. The dependence is strongest when the block size ratios are small, as is seen for the similar blocking results in the range of 120 to 900 seconds. We conclude that the device variance statistic (or equivalently, RNG autocorrelation) does not deviate significantly over the event data.

As seen in Figure 6, the trial variance, with blocking (R=1, T=1), is within a 1-sigma deviation from expectation. It is instructive to examine this minimal blocking more closely. Trial blocking assigns a single trial to each block and is insensitive to correlations between trials. Trial-blocked statistics are thus descriptive of the individual RNG behavior (they give the RNG state probabilities) and provide information on the RNG outputs exclusive of any correlation. Table 2 lists standard descriptive statistics of the event trials.




Figure 6. Average device variance z-scores for events, calculated for various block lengths ranging over 1 second to 15 minutes, and for the whole event.

All are consistent with null expectation, which indicates that the outputs of the RNG devices do not deviate from standard normality over the events. In contrast to the intending-subject experiments, we find no anomalous deviations in the individual RNG behavior.

Table 2. Trial Statistics - 212 Events

                 Event weighting                 Trial weighting
Statistic        Value      Expect    Z          Value       Expect    Z
Mean             0.00014    0         0.69       0.00006     0         0.98
Variance         1.00045    1         0.95       0.999998    1         -0.02
Skewness         0.00083    0         0.51       0.000103    0         0.69
Kurtosis         2.9908     2.99      0.11       2.99006     2.99      0.19

Table 2. Descriptive statistics for the event data. The table lists the measured value, theoretical expectation and deviation from expectation (as a normal z-score) for statistics based on the first four distribution moments. The event-weighted values give the statistic's mean over all the 212 events. That is, the statistic is calculated separately for each event and the resultant values are averaged. Trial-weighted values are calculated for all 274 million trials in the event data combined as a single data set.
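The distinction between the two weightings in Table 2 is simply whether the four moments are computed per event and then averaged, or computed once over the pooled trials. A minimal sketch, using stand-in data in place of the archived event trials:

```python
import numpy as np
from scipy import stats

# events: list of 1-D arrays of standardized trial values, one array per event (stand-in data here).
rng = np.random.default_rng(7)
events = [rng.standard_normal(rng.integers(5_000, 50_000)) for _ in range(212)]

def moments(x):
    """Mean, variance, skewness, and (non-excess) kurtosis of a trial array."""
    return np.array([x.mean(), x.var(ddof=0), stats.skew(x), stats.kurtosis(x, fisher=False)])

event_weighted = np.mean([moments(e) for e in events], axis=0)   # statistic per event, then averaged
trial_weighted = moments(np.concatenate(events))                 # all trials pooled as one data set

names = ["mean", "variance", "skewness", "kurtosis"]
print(dict(zip(names, event_weighted)))
print(dict(zip(names, trial_weighted)))
```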

These analyses demonstrate that the formal result is driven only by the 1-second network variance, while the RNG state probabilities and autocorrelation conform to expectation. The network variance can be decomposed to show its relation to synchronized RNG-RNG correlations. The chi-squared variance, in terms of the RNG trial z-scores, z_{r,t}, is the sum of the trial variance, Var[z], and a summation of trial pair-products:
\chi^2 = \sum_t \frac{1}{N} \left( \sum_r z_{r,t} \right)^2    (4.7)

       = \frac{1}{N} \sum_t \sum_{i,j} z_{i,t}\, z_{j,t}    (4.8)

       = \frac{1}{N} \sum_{i \neq j} \sum_t z_{i,t}\, z_{j,t} + T_0\, \mathrm{Var}[z]    (4.9)

Here, the (i,j) label RNGs and T0 is the length of the event, in seconds. Then,

\chi^2 \approx (N - 1)\, T_0\, \langle \overline{z_i z_j} \rangle + T_0\, \mathrm{Var}[z]    (4.10)

where the overstrike is an average over all seconds, T0, and the brackets indicate an average over unique pairs of RNGs, which yields

\chi^2 \approx N T_0\, \langle r_{i,j} \rangle + T_0\, \mathrm{Var}[z]    (4.11)

where we identify the time average of pair-products with the Pearson correlation r_{i,j} for RNGs i and j, and \langle r_{i,j} \rangle with the average of the RNG-RNG correlations over all RNG pairs. The pair-product averages can be approximated by the average of Pearson correlations since we have determined that the trial z's follow normal statistics and the event lengths satisfy T0 >> 1. Furthermore, deviations in the 1-second network variance must be dominated by the fluctuations of the correlation term, since the expected deviations in Var[z] are relatively small, being of order 1/\sqrt{N} compared to those of the correlation term:

\delta\chi^2 \approx N T_0\, \delta\langle r_{i,j} \rangle + T_0\, \delta\mathrm{Var}[z]    (4.12)

             \approx N T_0 \frac{\sqrt{2}}{T_0^{1/2} N} + T_0 \frac{\sqrt{2}}{(T_0 N)^{1/2}}    (4.13)

             \propto T_0^{1/2} \left( 1 + O(N^{-1/2}) \right)    (4.14)
and the measured Var[z] is within a standard deviation of its null expectation.

The value of ZTot for the network variance recipe on the 212-event subset is Z = 4.10. This assumes a theoretical zero mean of blocked z-scores (Eq. 4.2). The network variance deviation ZTot can also be determined empirically from a random re-sampling analysis on the entire database. Using this resampling-derived expectation yields a value of Z = 4.29, which is comparable to the theoretical value. Details of the resampling procedure are given in Appendix 3.

We conclude from the blocking analysis that the aggregate formal result, ZTot, is driven by inter-RNG correlations on a 1-second time scale. This is the second major result of the experiment. It implies that the network deviations are due to anomalous correlations between RNGs separated, on average, by thousands of kilometers. This is, moreover, a new class of result, which could only be observed in a distributed network of RNGs.
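The decomposition in Eqs. 4.7-4.11 can be verified numerically: the 1-second network chi-squared separates into a pooled trial-variance term and a term proportional to the average inter-RNG Pearson correlation. The following check uses simulated trials with a small common component injected to mimic an inter-RNG correlation; the variable names and the injected correlation are ours, and the two sides agree only up to the N versus N-1 and normalization approximations noted above.

```python
import numpy as np

rng = np.random.default_rng(11)
T0, N, c = 4 * 3600, 60, 0.005                 # event length (s), number of RNGs, injected correlation

common = rng.standard_normal((T0, 1))          # shared component producing inter-RNG correlation
z = np.sqrt(1 - c) * rng.standard_normal((T0, N)) + np.sqrt(c) * common

# Left side: Eq. 4.7, the 1-second network variance (each block = all RNGs in one second).
chi2 = np.sum(z.sum(axis=1) ** 2) / N

# Right side: Eq. 4.11, N*T0*<r_ij> + T0*Var[z], with <r_ij> the mean pairwise Pearson correlation.
corr = np.corrcoef(z.T)                        # N x N correlation matrix
r_mean = corr[np.triu_indices(N, k=1)].mean()  # average over unique RNG pairs
approx = N * T0 * r_mean + T0 * z.var()

print(chi2, approx, T0)                        # chi2 and the approximation agree; null expectation is ~T0
```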

5. Conclusions
The Global Consciousness Project seeks evidence for a subtle correlation between deviations in a distributed random system and human mental activity. The event experiment examines the normalized data from a synchronized global network of physical random number generators during periods of widespread collective human attention. We find that, while the data fluctuate near expectation over the 9-year extent of the database, the aggregate deviation of data during 236 registered formal events is significant at 4.5 standard deviations. The highly significant aggregate result rests on a rigorous protocol which determines all event parameters before the data are examined. The result is confirmed empirically by a re-sampling analysis on the full database. We have proposed the correspondence of data deviations with the identified events as an operational definition of global consciousness, and our analysis has shown this to be a productive approach. The present paper will serve as a general background and foundation for a series of detailed assessments of questions stimulated by these results.

The average effect is small and broadly distributed among the events. The small event effect size of 0.3 sigma has several consequences. Most notably, the effect is too weak to meaningfully interpret individual events. Single events are dominated by random noise and analytical tests require sets of 50 events or more to achieve a reasonable statistical power. Second, the small effect size permits a simple Gaussian model of the distribution of event z-scores. Modeling of the event distribution suggests that roughly two-thirds of the event outcomes correspond to true positive deviations. The model finds that the remainder of the distribution contains both negative and null deviation events. One implication is that our standard prediction of positive variance deviations is not sufficiently sophisticated. Preliminary work addressing the effects from distinct categories of events included in the database reveals substantial differences, and may lead to the identification of categories that tend to produce negative vs. positive deviations, or to produce null effects. Third, the existence of true negative or null events implies that the measured effect size estimates a lower bound on the effect size of positive deviation events.

A thorough analysis of the statistics used in the formal experiment shows that the effect is due to deviations in the variance of the network at 1-second resolution. The variance excursions are driven by correlations between or among the geographically separated RNGs in the network and are not due to changes of the output distributions of the individual RNGs. That is, we find no evidence for shifts of the mean or changes in the variance of the devices. This result stands in contrast to studies of RNG deviations in intending-subject RNG experiments, which typically find shifts in the device mean. It also differs from the device variance excursions found in field research using a single RNG to study group consciousness. The GCP result thus suggests an entirely new phenomenon comprising correlations among globally distributed systems in the mental and physical domains.

We remain cautious as to the interpretation of these results. Any possible extension of our operational definition of global consciousness to a theoretical model needs to be assessed against alternative explanations and a careful study of the data's statistical structure. The GCP hypothesis and the physical implementation of the project allow us to ask whether the geographical distributions of engaged populations and the RNG network play an essential role. It will be important to examine the data for dependencies on distance and network density, particularly in light of the finding that the variance deviations arise from RNG-RNG correlations. Indeed, a complete absence of distance structure would obviate any need for a globally deployed network. Other structural parameters to investigate include time (the effect of varying the size and placement of the time window defining the event), the relative contributions of individual network nodes, and higher-order inter-node correlation statistics. Evidence of structure in these or other parameters would indicate that the network is exhibiting a more complex behavior than is indicated by the simple deviations we have measured.

To the extent data structure is found, models need to accommodate the possibility that the network is producing true data anomalies. An alternative explanation is that psi intuition on the part of the experimenters may result in a fortuitous choice of event parameters which favors an anomalously biased selection of the random data. While it is unlikely that experiments can exclude anomalous experimenter effects altogether, for such a selection model the data would not be expected to have underlying structure. This provides a further motivation for examining the structure of the event data. Several preliminary analyses, which are beyond the scope of this paper, indicate that the question of data versus selection anomalies is amenable to analysis. This preliminary work suggests that data anomalies may indeed be present in the event data. We find that the RNG-RNG correlations driving the network variance decrease with inter-node distance, and we find that deviations in the event data show a characteristic time structure. Structural features like these can be expected if the variance deviations depend on experimental parameters such as network geography and the time characteristics of events.

An objection can be made concerning the ubiquitous nature of collective mental activity and the apparently sparse occurrence of what we designate as global events (the events comprise less than 2% of the database). If there is a global consciousness effect, why is it not evident throughout the database? A possibility is that measurable data deviations only correlate with those events which engage the largest numbers of people or evoke the strongest emotional responses. One way to test this idea is by devising a binary classification of event magnitude.
A qualitative division of events into sets of major and minor magnitude is reasonably straightforward, given the wide range of events in the registry (ranging from the September 2001 terrorist attacks to Pierre Trudeau's funeral). We find that the subset of major events, which engage very large numbers of people, has greater network variance deviation, with the difference from the minor event set significant at the 0.05 probability level. An explanation for the lack of anomalous deviation in the full database may then rest on the degree of coherence of worldwide mental activity, which might be appreciable only for truly large events. Simultaneous instances of separate collective activities, which are not mutually connected or coherent, may correspond to data with essentially null correlation structure.

This paper is intended to succinctly document the Global Consciousness Project event experiment. We have shown that the data are indeed random, with parameters indistinguishable from theoretical predictions when examined as a whole. However, the data segments corresponding to the global events specified in the formal experiment do differ from expectation with high statistical significance, as predicted in the statement of the experimental hypothesis. The deviations we measure are associated with RNG correlations which extend over global distances. This result suggests a subtle and far-reaching interdependence between mind and matter. The implications of these findings for both physical and psychological models seem profound, but much work needs to be done to illuminate the issues. We will proceed with focused analyses addressing questions designed to deepen our understanding of the GCP results.

Acknowledgements
The Global Consciousness Project would not exist except for the immense contributions of Greg Nelson and John Walker, who created the network architecture and the data acquisition software. Paul Bethke ported the software to Windows, thus broadening the network. Dean Radin, Dick Bierman, Dick Shoup, and others contributed ideas and experience. Rick Berger helped to create a comprehensive Web site to make the project available to the public. The project also would not exist but for the commitment of time, resources, and good will from all the network hosts. Financial support comes from individuals including Charles Overby, Tony Cohen, Reinhilde Nelson, Michael Heany, Alexander Imich, Richard Adams, Richard Wallace, Anna Capasso, Michael Breland, Joseph Giove, J. Z. Knight, Hans Wendt, Jim Warren, and major donations from an anonymous contributor. We also gratefully acknowledge donations via PayPal from many individuals. The Institute of Noetic Sciences provides logistical support as a non-profit home for the project, and the Lifebridge Foundation has provided generous support for documentation of the GCP. Finally, there are very many friends of the project whose good will, interest, and empathy open a necessary niche in consciousness space.



Appendix 1
The wide variety of events included in the formal experiment is a reflection of the GCP's exploratory character. The criterion for event designation, as stated in the experimental hypothesis, is an instance of collective attentional response among people separated by global distances. In order to fully accommodate this criterion, it is necessary to investigate a range of event types. Accordingly, selected events may engage large or small populations, be tragic or celebratory, or depend on either natural or human circumstances. It is evident that with this approach we may identify some event types that do not correspond to periods of global consciousness. However, this possibility underscores the importance of employing an inclusive event selection procedure, as it permits a thorough exploration of the experiment's hypothesis. The following table shows a selection of formal events drawn from the prediction registry.

Table A1.1. A Selection of Formal Events

Number   Event Description                Begin Date Time        End Date Time
1        US Embassy Bombings Africa       1998-08-07 07:15:00    1998-08-07 10:14:59
21       Earthquake in Columbia           1999-01-25 17:15:00    1999-01-25 21:14:59
43       New Year Variance 2000           1999-12-31 09:30:00    2000-01-01 11:29:59
56       Pierre Trudeau Funeral           2000-10-03 15:00:00    2000-10-03 16:59:59
80       Terrorist Attacks Sept 11 2001   2001-09-11 12:35:00    2001-09-11 16:44:59
90       World-Wide Meditations           2001-11-11 11:00:00    2001-11-11 11:14:59
121      Wellstone Plane Crash            2002-10-25 15:00:00    2002-10-25 16:59:59
122      Chechen Hostage Tragedy          2002-10-26 02:30:00    2002-10-26 04:59:59
131      Global Peace Demonstrations      2003-02-15 00:00:00    2003-02-15 23:59:59
180      Athens Olympic Opening           2004-08-13 18:00:00    2004-08-13 20:59:59
181      Day of Murderous Violence        2004-08-31 00:00:00    2004-08-31 23:59:59
182      Republican Convention Bush       2004-09-03 02:09:59    2004-09-03 03:11:59
183      Russian School Hostages          2004-09-03 05:00:00    2004-09-03 08:59:59
184      Earthdance, 2004                 2004-09-18 22:50:00    2004-09-18 23:14:59
197      Pope John Paul's Funeral         2005-04-08 08:00:00    2005-04-08 12:29:59
223      TM Flyer Aggregation             2006-07-29 12:30:00    2006-09-09 23:29:59
255      Benazir Bhutto Assassination     2007-12-27 11:00:00    2007-12-27 18:59:59
259      Attacks in Gaza                  2008-03-01 00:00:00    2008-03-01 23:59:59

Table A1.1. A sample of the formal events chosen to test the GCP hypothesis.



Appendix 2
The modeling procedure for estimates of positive, negative and null event fractions uses the sum of three standard normal Gaussian distributions with positive, zero and negative means, respectively. Because the effect size is small and there are no outliers, the variance of each Gaussian is dominated by the null distribution and the model variances can be set to one. A more elaborate approach would accommodate a range of mean values for both the positive and negative deviating fractions, but simulations show that this does not improve the model. This is due to the no-outlier/small-effect condition which constrains the range of positive (negative) model means so that a single Gaussian for each deviation direction is sufficient. The model can be expressed as follows:

G_{model}(A, B, C, \mu_+, \mu_-) = A\,g_0 + B\,g_{\mu_+} + C\,g_{\mu_-}    (A1.1)
where g(µ) is a unit-variance Gaussian distribution with mean µ. The model parameters are the fraction coefficients {A, B, C}, constrained by A + B + C = 1, and the distribution means of the positive and negative fractions, {µ+, µ-}. Goodness-of-fit tests provide a map of the "compositions" of positive, null and negative-going event scores compatible with the data, over a range of {µ+, µ-}. We have examined the model over the full span of fraction coefficients and for a wide range of mean values. For all of the 9800 models tested, the experimental z-scores are binned into a single, common set of 14 bins, selected to yield a model expectation greater than 5 in every bin. A model's goodness-of-fit is determined as a chi-squared probability of the mean-squared error on twelve degrees of freedom (the composition constraint and an amplitude factor reduce the df by 2). A low fit probability indicates that random measurement error accounts poorly for the fit error and is grounds for rejecting such a model.

Since we are interested in the composition fractions, we project the five-parameter results from the 9800 models into composition space. The projection can be represented in a ternary composition diagram, as shown in Figure A2.1. The vertices of the triangular diagram are points of pure unitary composition for the positive, negative and null fractions, as labeled in the figure. Parallel grids are lines of constant composition for the fraction type facing the grid lines; for example, the horizontal grids are lines of constant null fraction. The shaded contours are lines of constant fit probability in increasing steps of 5%, from lightest to darkest. The dark contour in the lower left corner, which is the region of best fit, delineates compositions with fit probability > 20%, indicating that the model gives an accurate representation of the data within that contour.
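To make the fitting procedure concrete, the following Python sketch (illustrative only, and not the GCP analysis code) bins a set of event z-scores and computes the chi-squared goodness-of-fit probability of the three-component model over a grid of compositions and means. The placeholder z-scores, bin edges, parameter grids, and all function and variable names are our assumptions for illustration, not the published values.

# Illustrative sketch of the binned goodness-of-fit for the mixture
# G_model = A*g(0) + B*g(mu+) + C*g(mu-), with unit variances and A + B + C = 1.
import numpy as np
from scipy import stats

def model_bin_probs(bin_edges, A, B, C, mu_pos, mu_neg):
    """Expected bin probabilities for the three-Gaussian mixture."""
    def component(mu):
        return np.diff(stats.norm.cdf(bin_edges, loc=mu, scale=1.0))
    return A * component(0.0) + B * component(mu_pos) + C * component(mu_neg)

def fit_probability(z_scores, bin_edges, A, B, C, mu_pos, mu_neg):
    """Chi-squared fit probability for binned event z-scores.  The paper uses
    14 bins chosen so every expected count exceeds 5, and quotes 12 degrees
    of freedom (two fewer than the number of bins)."""
    observed, _ = np.histogram(z_scores, bins=bin_edges)
    expected = len(z_scores) * model_bin_probs(bin_edges, A, B, C, mu_pos, mu_neg)
    chi2 = np.sum((observed - expected) ** 2 / expected)
    return stats.chi2.sf(chi2, df=len(observed) - 2)

# Scan compositions (A = null, B = positive, C = negative fraction) and a small
# grid of means, keeping the fit probability for each candidate model.
z_scores = np.random.normal(0.3, 1.0, 236)     # stand-in for the event z-scores
bin_edges = np.linspace(-3.0, 3.0, 15)         # 14 bins; placeholder edges
results = []
for B in np.linspace(0.0, 1.0, 21):            # positive fraction
    for C in np.linspace(0.0, 1.0 - B, 11):    # negative fraction
        A = 1.0 - B - C                        # null fraction
        for mu_pos in (0.4, 0.6, 0.8):
            for mu_neg in (-0.4, -0.6, -0.8):
                p = fit_probability(z_scores, bin_edges, A, B, C, mu_pos, mu_neg)
                results.append((A, B, C, mu_pos, mu_neg, p))

A real analysis would use the registered event z-scores and a finer grid; the paper reports 9800 models in total.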




Figure A2.1. The composition fractions for best fits of a ternary model of positive, null and negative deviation event z-scores. The model finds that the z-score distribution is best described by a large fraction of positive events and potentially nonzero fractions of both null and negative deviation events. The contours represent fit probabilities in 5% steps, with the maximum contour of 20% indicated by the darkest shading.

The model yields a region of preferred fractional composition which includes potential contributions from both null and negative-going events. While the best-fit contour includes a null fraction of zero along the horizontal axis, the fit probabilities decrease sharply as the line of zero negative-going composition (the left edge of the diagram) is approached. This suggests that the experimental z-score distribution does include a minority of negative-going events. To characterize the preferred region, we average the model parameters over fits with chi-squared probabilities exceeding a 15% cutoff and positive event fractions > 50%; this covers most of the lower left portion of the ternary diagram. For this region the positive event fraction is 67%, and the fractions of null and negative deviation events are 16% and 17%, respectively (1-sigma uncertainties ±10%). The corresponding average parameter values for {µ+, µ-} are 0.56 ± 0.09 and -0.49 ± 0.20. These are substantially larger in magnitude than the average event z-score of 0.30.

As with any analysis based on modeling, the z-score distribution model should be interpreted with caution. The parameter averages are derived from binned fits of a limited number of events, and there is some sensitivity to the occupation of the distribution tails. Nevertheless, the average model parameters are robust against changes in bin selection, the probability cutoff level, and small alterations in the z-score tail distributions. The model supports the reasonable supposition that the mean event z-score is a lower bound on the event effect size, and it emphasizes that the anomalous variance deviations may be negative for some events.
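As a small follow-on to the sketch above, the preferred-region averages quoted here could be extracted along the following lines; the 15% probability cutoff and the >50% positive-fraction condition come from the text, while the array layout and names are carried over from the previous illustrative snippet.

# Average the model parameters over fits in the preferred region
# (fit probability > 0.15 and positive event fraction > 0.50).
import numpy as np

arr = np.array(results)        # columns: A (null), B (pos), C (neg), mu+, mu-, p
region = arr[(arr[:, 5] > 0.15) & (arr[:, 1] > 0.50)]
null_frac, pos_frac, neg_frac = region[:, :3].mean(axis=0)
mu_pos_avg, mu_neg_avg = region[:, 3].mean(), region[:, 4].mean()
print(f"fractions pos/null/neg: {pos_frac:.2f} / {null_frac:.2f} / {neg_frac:.2f}")
print(f"average means: mu+ = {mu_pos_avg:.2f}, mu- = {mu_neg_avg:.2f}")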



Appendix 3
We can verify the significance of the 1-second network variance empirically by a re-sampling analysis. The interpretation of the result for 212 events given in the text, ZTot = 4.10, assumes that ZTot is effectively drawn from a standard normal distribution and that its significance can therefore be represented by the probability value for a normal z-score of 4.10. The careful preparatory vetting and normalization procedures for all RNG trials, combined with the finding that the trial values for the event data conform to normality, support this interpretation. The logic is that ZTot should distribute normally under the null hypothesis because the underlying data trials distribute normally. However, it is worthwhile to estimate the significance of ZTot without relying on these assumptions.

An empirical distribution for ZTot can be constructed by randomly selecting data periods corresponding to the events from the full database and calculating a set of {ZTot} from the sampled event z-scores, ZE. The significance of the value ZTot = 4.10 for the true event set can then be estimated from the empirical distribution {ZTot}. Each re-sampled ZE is derived by solving CDFχ²(χ²) = Φ(ZE) for ZE, where CDFχ² and Φ are the cumulative distribution functions of the chi-squared and standard normal distributions, respectively, and χ² is the re-sampled network variance for the event. As described in the text, the event set uses 212 events instead of the full 236: we remove 19 New Year's events, which are not easily adapted to the network and device variance recipes, as well as 5 events with extremely long durations. The removals are inconsequential since we are merely comparing recipes and testing our approximations.

The re-sampling is done with replacement. We calculate ZE for all 212 events by selecting a random start time for each event from within the entire 9-year database. ZTot is then calculated, and the procedure is repeated 100,000 times. The resulting distribution has just one ZTot exceeding the value of 4.10, which gives an empirical probability for the experimental ZTot of 10^-5, or an effective Z = 4.26. A more accurate determination of the P-value requires populating the far tails of the distribution, for which roughly a million iterations would be needed; this is at the limit of our computational capacity. As an alternative estimate, the distribution of 100,000 values is tested for normality about its mean, which is found to hold, and the experimental value of 4.10 is then corrected by the mean of the empirical distribution. The re-sampling distribution mean is -0.23, consistent with an apparent negative trend in the database. The empirical estimate of ZTot calculated by this method is Z = 4.33, close to the direct empirical value of 4.26. These estimates give a slight increase in the significance of the experimental result relative to the value based on the theoretical chi-squared statistics.

A second approach is to remove the trends of the chi-squared statistics locally about each event and then use the trend-subtracted values to calculate ZTot from the formal event periods. The trends are estimated by smoothing the database of chi-squared statistics with a Gaussian window of ±7 days about each datum in the event. The window size is chosen to be much larger than the maximum event length of 1 day, yet local enough to compensate for trends around the event period.
This procedure yields a value of ZTot = 4.29, in close agreement with the re-sampling analysis.
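A minimal sketch of the re-sampling procedure, under stated assumptions, is given below. It is illustrative rather than the GCP pipeline: network_chisq is a stand-in that simulates null data (the real analysis computes the network-variance statistic from the archived RNG trials), one chi-squared degree of freedom per second of data is assumed, and the Stouffer-style combination of event z-scores is our reading of how ZTot is formed.

# Empirical distribution of ZTot from events re-sampled at random start times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def network_chisq(start_time, duration_s):
    """Placeholder: simulate a null network-variance chi-squared for the period,
    assuming one degree of freedom per second of data."""
    dof = int(duration_s)
    return rng.chisquare(dof), dof

def event_z(start_time, duration_s):
    """Map the period's chi-squared to a z-score by matching cumulative
    probabilities: solve Phi(ZE) = CDF_chi2(chi2; dof) for ZE."""
    chisq, dof = network_chisq(start_time, duration_s)
    return stats.norm.ppf(stats.chi2.cdf(chisq, dof))

def resampled_ztot(durations, db_start, db_end):
    """One re-sampled ZTot: each event keeps its duration but gets a random
    start time (sampling with replacement); event z-scores are combined as a
    Stouffer sum."""
    z = [event_z(rng.uniform(db_start, db_end - d), d) for d in durations]
    return np.sum(z) / np.sqrt(len(z))

durations = np.full(212, 3 * 3600.0)        # stand-in durations (3 h per event)
t0, t1 = 0.0, 9 * 365.25 * 86400.0          # nine-year database span, in seconds
n_iter = 1000                               # the paper uses 100,000 iterations
dist = np.array([resampled_ztot(durations, t0, t1) for _ in range(n_iter)])
p_emp = np.mean(dist >= 4.10)               # empirical p-value for ZTot = 4.10

The second check described above, which subtracts a local trend estimated with a Gaussian window of roughly ±7 days before recomputing ZTot, could be sketched along similar lines, for instance using scipy.ndimage.gaussian_filter1d as the smoother.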



References
1. Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of random binary sequences with pre-stated operator intention: A review of a 12-year program. J. Scientific Exploration, Vol. 11, No. 3, pp. 345-368.
2. Radin, D. I. & Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random physical systems. Foundations of Physics, Vol. 19, No. 12, pp. 1499-1514.
3. Radin, D. I. & Nelson, R. D. (2003). Meta-analysis of mind-matter interaction experiments: 1959-2000. In Jonas, W. & Crawford, C. (Eds.), Healing, Intention and Energy Medicine. London: Harcourt Health Sciences.
4. Bierman, D. J. (1996). Exploring correlations between local emotional and global emotional events and the behavior of a random number generator. J. Scientific Exploration, Vol. 10, No. 3, pp. 363-374.
5. Nelson, R. D., Bradish, G. J., Dobyns, Y. H., Dunne, B. J., & Jahn, R. G. (1996). FieldREG anomalies in group situations. J. Scientific Exploration, Vol. 10, No. 1, pp. 111-141.
6. Nelson, R. D., Bradish, G. J., Dobyns, Y. H., Dunne, B. J., & Jahn, R. G. (1998). FieldREG II: Consciousness field effects: replications and explorations. J. Scientific Exploration, Vol. 12, No. 3, pp. 425-454.
7. Radin, D. I., Rebman, J. M., & Cross, M. P. (1996). Anomalous organization of random events by group consciousness: Two exploratory experiments. J. Scientific Exploration, Vol. 10, No. 1, pp. 143-168.
8. May, E. C. & Spottiswoode, S. J. P. (2001). Global Consciousness Project: An Independent Analysis of the 11 September 2001 Events. http://www.lfr.org/LFR/csl/library/Sep1101.pdf
9. Red Orbit Online News (2005). Can This Black Box See Into the Future? Posted 11 Feb. 2005. Source: Daily Mail, London (UK). http://www.redorbit.com/news/display/?id=126649
10. Broughton, R. S. (1991). Parapsychology: The Controversial Science. New York: Ballantine.
11. Braud, W. & Schlitz, M. (1991). Consciousness interactions with remote biological systems: Anomalous intentionality effects. Subtle Energies, 2(1), pp. 1-46.
12. Radin, D. I. (2004). Electrodermal presentiments of future emotions. J. Scientific Exploration, Vol. 18, pp. 253-274.
13. Utts, J. M. (1991). Replication and meta-analysis in parapsychology. Statistical Science, Vol. 6, No. 4, pp. 363-403.
14. Nelson, R. D. (2001). Correlation of global events with REG data: An Internet-based, nonlocal anomalies experiment. The Journal of Parapsychology, Vol. 65, September 2001, pp. 247-271.
15. Nelson, R. D., Radin, D. I., Shoup, R., & Bancel, P. A. (2002). Correlations of continuous random data with major world events. Foundations of Physics Letters, Vol. 15, No. 6, pp. 537-550.
16. The Orion RNG is available at http://www.randomnumbergenerator.nl/. The distributor for the Mindsong REG, Mindsong, Inc., is no longer in business. See http://noosphere.princeton.edu/reg.html. Three PEAR devices were custom built for the Princeton Engineering Anomalies Research program.
17. Jahn, R. G., Dunne, B. J., & Nelson, R. D. (1987). Engineering Anomalies Research. J. Scientific Exploration, Vol. 1, No. 1, pp. 21-50.
18. Public access to the GCP database is available via the Custom Basket Data Request form at http://noosphere.princeton.edu/data/extract.html
19. Nelson, R. D. (2006). Anomalous Structure in GCP Data: A Focus on New Year's Eve. Proceedings of the Parapsychological Association Convention, August 2006, Stockholm, Sweden.


