Princeton ESP Lab to Close
Nico M writes "The New York Times reports on the imminent closure of one of the most controversial research units at an Ivy League school. The Princeton Engineering Anomalies Research laboratory is due to close, but not because of pressure from the outside. Lab founder Robert G. Jahn has declared, in the article, that they've essentially collected all the data they're going to. The laboratory has conducted studies on extrasensory perception and telekinesis from its cramped quarters in the basement of the university's engineering building since 1979. Its equipment is aging, its finances dwindling. Jahn points the finger at detractors as well: 'If people don't believe us after all the results we've produced, then they never will.'"
Re:Ahem (Score:5, Informative)
The problems with PEAR (Score:5, Informative)
A PEAR experiment involved a participant attempting to influence a random number generator (essentially) in a pre-specified direction over a large number of trials. Because random events are, by nature, random, you can get streaks that run above or below the mean. If you analyze a large enough sample, these streaks can become statistically significant even though they're essentially meaningless and practically insignificant -- it's similar to the fact that any deviation from the mean, no matter how small, is statistically significant if you measure the entire population. Additionally, while the probability of any particular streak is low (0.5^n for n heads in a row, which gets very small as n grows), if you have enough random events, those streaks are pretty much guaranteed to appear.
So, that's the logic of the PEAR data analysis. Collect a huge corpus of random events, look for streaks, then call them statistically significant because of their low base probability of appearance and the fact that they deviated at all from the expected mean. Skeptic magazine has a good discussion of the PEAR lab inanity, and I believe James Randi's commentary addresses it a few times.
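The guaranteed-streaks point is easy to demonstrate empirically. A minimal sketch in Python (the flip count is arbitrary):

```python
import random

random.seed(42)

# One million fair coin flips (the count is arbitrary); find the
# longest run of identical outcomes.
n_flips = 1_000_000
flips = [random.random() < 0.5 for _ in range(n_flips)]

longest = current = 1
for prev, cur in zip(flips, flips[1:]):
    current = current + 1 if cur == prev else 1
    longest = max(longest, current)

# The longest run in n fair flips is typically around log2(n), i.e.
# about 20 here, even though any one specific 20-flip run looks
# wildly improbable in isolation.
print(longest)
```

Run it a few times with different seeds: you'll reliably see runs in the high teens or low twenties, all from a source that is random by construction.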
The claim that PEAR's research wouldn't be reviewed is probably false, by the way. It's most likely that the papers were rejected from mainstream journals for the very reasons I mentioned earlier, or because the PEAR lab had no theoretical explanation for the "results" they observed. Or, of course, it's because their papers seem rather dubious in their lack of data and in their explanations of how they arrived at their stated probability values (I say this having read one in a, how shall we say, less than top-tier journal). Additionally, the lab has been extremely difficult about releasing its raw data. Randi, for example, has never been able to get ahold of it.
no. No. and NO ! (Score:5, Informative)
Re:The problems with PEAR (Score:5, Informative)
The whole point of statistics is that some "streaks" are very improbable if they come from a genuinely random source. In that sense, if a random number generator displays such a tendency, it is rather probable that it isn't really random. So, yes, statistical power (the ability to discriminate between small differences) increases with huge sample sizes, but a truly random source should pass such tests with probability p = 0.95 regardless of sample size -- that is, at the usual 5% significance level, it fails only 5% of the time. That is because the tests ALWAYS compare the sample with one coming from a truly (theoretically) random source. This is the way those things work.
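That pass rate is easy to check empirically with a z-test on head counts. A quick sketch (the sample sizes and trial counts below are arbitrary):

```python
import math
import random

random.seed(0)

def z_test_fair_coin(n_heads, n):
    """Two-sided z-test of H0: the coin is fair. Returns True when we
    fail to reject at alpha = 0.05, i.e. the source "passes"."""
    z = (n_heads - n / 2) / math.sqrt(n / 4)
    return abs(z) < 1.96

# A genuinely random source should pass roughly 95% of the time,
# no matter how large each individual sample is.
pass_rates = {}
for n in (1_000, 50_000):
    trials = 100
    passes = sum(
        z_test_fair_coin(sum(random.random() < 0.5 for _ in range(n)), n)
        for _ in range(trials)
    )
    pass_rates[n] = passes / trials
    print(n, pass_rates[n])
```

The pass rate stays near 0.95 whether each sample has a thousand flips or fifty thousand, which is exactly the point: growing the sample doesn't make a fair source start failing.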
I would also like to remind readers (not you, personally) of the difference between statistically significant and meaningful. Even if an absurdly small difference can be inferred with near certainty, it remains to be seen whether it matters in actual practice. This is a common cause of confusion, especially when medical epidemiological studies demonstrate a .001% reduction in risk of heart attack in those who eat cucumber every day. The .001% may be true, but it doesn't really matter.
P.
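The significant-but-meaningless distinction can be made concrete by holding a tiny effect fixed and growing the sample. A sketch with made-up numbers (a hypothetical coin that lands heads 50.05% of the time):

```python
import math

# Hypothetical numbers: a coin biased to land heads 50.05% of the time.
# The bias (0.05 percentage points) never changes, yet the expected
# z-score against H0: p = 0.5 grows without bound as n grows, since
# E[heads] = n*(0.5 + eps) gives z = 2 * eps * sqrt(n).
eps = 0.0005  # p_true - 0.5

zs = {}
for n in (10_000, 1_000_000, 100_000_000):
    zs[n] = 2 * eps * math.sqrt(n)  # expected z of the observed head count
    print(n, round(zs[n], 2))
```

At ten thousand flips the bias is statistically invisible; at a hundred million it is overwhelmingly "significant" -- yet the coin is exactly as useless for any practical purpose at every sample size.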
Extraordinary evidence is needed (Score:5, Informative)
Well, if you check one of their papers [princeton.edu], you'll find the following sentence, on page 7: "While no statistically significant departures of the variance, skew, kurtosis, or higher moments from the appropriate chance values appear in the overall data, regular patterns of certain finer scale features can be discerned." That's an outright confession of fraud. They are saying they cannot find any evidence if they analyze a statistically significant amount of data, so they pick whatever small sample will suit them. It's as if I threw a coin a million times and said: "Oh look! Here I threw ten heads in sequence!"
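The ten-heads analogy is worth quantifying: in a million flips, runs of ten heads are not rare at all. A sketch (the flip count is arbitrary):

```python
import random

random.seed(1)

# One million fair flips; count how many times a run of at least ten
# consecutive heads is underway. Each specific ten-flip window has
# probability 0.5**10 (about 0.001), but roughly a million windows
# exist, so hundreds of such runs are expected by chance alone.
n = 1_000_000
runs_of_ten = 0
streak = 0
for _ in range(n):
    if random.random() < 0.5:  # heads
        streak += 1
        if streak >= 10:
            runs_of_ten += 1
    else:
        streak = 0
print(runs_of_ten)
```

So anyone combing a large enough corpus for "fine scale features" is guaranteed to find impressive-looking patterns, whether or not anything anomalous is going on.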
Further on, on the next page, they state: "Given the correlation of operator intentions with the anomalous mean shifts, it is reasonable to search the data for operator-specific features that might establish some pattern of individual operator contributions to the overall results. Unfortunately, quantitative statistical assessment of these is complicated by the unavoidably wide disparity among the operator database sizes, and by the small signal-to-noise ratio of the raw data [...]"
Of course, if they *did* communicate their results by telepathy, that would be extraordinary evidence. But what they have published is rather underwhelming; can we assume that if they had any better results, they would have published them?
Re:Ahem (Score:2, Informative)
From what I've seen of their work (and it's not much) they aren't saying "omg were so sykik!", they're saying "here's data that's anomalous and not adequately explained by existing theories". Whether you buy their argument or not, these folks aren't trying to sell snake oil to cure the gout, they're following up on something they find interesting. That's the great thing about science - we let folks go off the reservation. In the end though, it's good to be skeptical of their results, just like we are when we hear about cold fusion.
I will say I'm not betting my laptop on their results. An inability to find peer-reviewed funding streams says that, whether your hypothesis is right or wrong, you've been unable to articulate your research convincingly. I won't join in the chorus of mockers, though: their intent doesn't seem to be deception, so they're doing science some (small?) service.
Re:Also (Score:2, Informative)
Re:Global Consciousness Project (Score:1, Informative)
Re:Extraordinary evidence is needed (Score:4, Informative)
Seriously? Science does make a number of untestable assumptions, without which it would be impossible to conduct. This is true of every kind of inference. The main difference between science and religion is that science claims to be objective.
We know that's hogwash: for example, in the simplest probability model for discrete parameter estimation (science does things like this all the time, though generally without a strong statistical foundation), it's not possible to learn anything useful about the parameter without making an assumption that can't be founded on logic alone. (That is, if you use a uniform prior and a uniform likelihood distribution - the most objective ("maximum entropy") model you can make - your posterior distribution must be uniform.) For continuous parameter estimation, which science concerns itself with more often, you often can't even formulate an objective model...
Results along these lines are called the "No Free Lunch" theorems, which ought to be studied by everybody doing inference instead of just by machine-learning and AI researchers. These are very low-level proofs: there is no philosophy involved, only math.
The claim that the scientific process leads to objective truth is nothing more than axiomatic. Under certain conditions that, as far as we know, are impossible to verify, it may be true.
Not that I'm saying science should be classified as religion, but thinking rigorously about its claims ought to reduce errors in judgment.
Re:Also (Score:3, Informative)
The descriptions I've read of what he considers proper test protocols are quite reasonable. Do you have any actual evidence of them making unreasonable requirements to sink things? Or are you just engaging in FUD?
Looking at the forum on applicants [randi.org], for example, things seem pretty above-board. In addition to the specifics, which seem fine, you can see that Randi often delegates the negotiations to skeptic groups. Are you suggesting that they are all in secret collusion with Randi to drive these people off?
Now if the criteria were set and judged by a neutral third party, then I might have a little more faith in the challenge. But I doubt that would ever happen, because JREF would then face the chance (however minute) of actually losing the money and the bragging rights.
Then start your own prize. Don't have a million dollars? That doesn't matter. Randi didn't either. Back in the day, I and a lot of other people signed notes backing the prize. Now it sounds like he has cash in hand. If you put together a prize with criteria that are better than Randi's, you'll do even better. But make sure you include some experts in flimflammery as part of it. A good mix of scientists and magicians is what I'd like to see.
Be careful of cherry picking (Score:2, Informative)
Upon reading the abstract you can quickly see that there were small mean shifts that were nonetheless statistically significant (about 7 sigma), while their pseudorandom control source yielded no mean shift.
Essentially it appears as if there is something very small going on here, which should be tested and either confirmed or denied by future research.
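To put "7 sigma but very small" in perspective: for coin-like binary trials, a z-score pins down the implied per-trial bias via z = 2 * eps * sqrt(n), so eps = z / (2 * sqrt(n)). A back-of-envelope sketch (the trial counts are hypothetical, not PEAR's actual numbers):

```python
import math

# For n fair-coin-like trials, a mean shift with z-score z implies a
# per-trial bias of eps = z / (2 * sqrt(n)). Trial counts below are
# hypothetical, not PEAR's actual numbers.
z = 7.0
biases = {}
for n in (1_000_000, 100_000_000, 1_000_000_000):
    biases[n] = z / (2 * math.sqrt(n))
    print(n, f"{biases[n]:.2e}")
```

With a billion trials, a 7-sigma shift corresponds to a bias on the order of one part in ten thousand per trial: large enough to register statistically, small enough that only replication on comparable scales could confirm or deny it.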