Education Sci-Fi Science

Princeton ESP Lab to Close 363

Nico M writes "The New York Times reports on the imminent closure of one of the most controversial research units at an Ivy League school. The Princeton Engineering Anomalies Research laboratory is due to close, but not because of pressure from the outside. Lab founder Robert G. Jahn has declared, in the article, that they've essentially collected all the data they're going to. The laboratory has conducted studies on extrasensory perception and telekinesis from its cramped quarters in the basement of the university's engineering building since 1979. Its equipment is aging, its finances dwindling. Jahn points the finger at detractors as well: 'If people don't believe us after all the results we've produced, then they never will.'"
This discussion has been archived. No new comments can be posted.

Princeton ESP Lab to Close

Comments Filter:
  • Re:Ahem (Score:5, Informative)

    by BiggerIsBetter ( 682164 ) on Saturday February 10, 2007 @05:51AM (#17961164)
  • by FreelanceWizard ( 889712 ) on Saturday February 10, 2007 @06:03AM (#17961218) Homepage
    The methodology wasn't flawed, so much as the analysis and the conclusions drawn from it.

    A PEAR experiment involved a participant attempting to influence a random number generator (essentially) in a pre-specified direction over a large number of trials. Because random events are, by nature, random, you can get streaks above or below the mean. If you analyze a large enough sample, these streaks can become statistically significant even though they're essentially meaningless and practically insignificant -- it's similar to the fact that any deviation from the mean, no matter how small, becomes statistically significant given a large enough sample. Additionally, while the probability of any particular streak is low (the probability of n heads in a row is 0.5^n, which gets very small as n grows), if you have enough random events, those streaks are pretty much guaranteed to appear somewhere (the simulation sketched after this comment makes that concrete).

    So, that's the logic of the PEAR data analysis. Collect a huge corpus of random events, look for streaks, then call them statistically significant because of their low base probability of appearance and the fact that they deviated at all from the expected mean. Skeptic magazine has a good discussion of the PEAR lab inanity, and I believe James Randi's commentary addresses it a few times.

    The claim that PEAR's research wouldn't be reviewed is probably false, by the way. It's most likely that the papers were rejected from mainstream journals for the very reasons I mentioned earlier, or because the PEAR lab had no theoretical explanation for the "results" they observed. Or, of course, it's because their papers seem rather dubious in their lack of data and explanations of how they've arrived at their stated probability values (which I say from having the experience of reading one in a, how shall we say, less than top tier journal). Additionally, the lab's been extremely difficult with regards to their raw data. Randi, for example, has never been able to get ahold of it.
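    A quick sketch of the streak point above (my own toy simulation, nothing taken from PEAR's setup or data): any particular run of 10 heads has probability 0.5^10, about 0.001, yet virtually every sequence of 100,000 fair flips contains such a run somewhere.

```python
# Toy simulation: long runs are all but guaranteed in large random samples,
# even though any *particular* run has a tiny base probability.
import random

def has_heads_run(n_flips, run_len, rng):
    """Return True if a run of `run_len` consecutive heads occurs."""
    streak = 0
    for _ in range(n_flips):
        if rng.random() < 0.5:      # heads
            streak += 1
            if streak >= run_len:
                return True
        else:
            streak = 0
    return False

rng = random.Random(0)
trials = 20
hits = sum(has_heads_run(100_000, 10, rng) for _ in range(trials))
print(f"P(a given 10-flip window is all heads) = {0.5 ** 10:.4f}")
print(f"Sequences of 100,000 flips containing a 10-heads run: {hits}/{trials}")
```

    The first number is tiny; the second comes out at essentially 20/20. Scanning a huge corpus for streaks and then quoting their base probability conflates the two.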
  • no. No. and NO ! (Score:5, Informative)

    by aepervius ( 535155 ) on Saturday February 10, 2007 @06:37AM (#17961352)
    They simply retrofit the data after the fact. And once you retrofit data, you can find ANY event that matches, as long as your criteria are loose enough. There is always some bad stuff going on somewhere, especially when you aren't limited by event size, number of people affected, or geography!! This is again pseudoscience at its best. You want to sway us? Fine! Set a threshold for the population affected, a geographic limit, and an event size, then make an actual prediction in advance. Otherwise what you are doing is no better than taking a random bunch of data and finding correlations between it and other events. I bet that with the same methodology I could take the price variation of potatoes per ton, keep only the cents (the fractional part), and find a correlation with major world events. As long as I get to define "event" as loosely as above, I am pretty sure almost anything can be retrofitted (the sketch after this comment shows how easy that is).
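    Here is a minimal sketch of that retrofitting game (entirely invented data -- the "predictors" and "events" below are placeholders, not real series): dredge a thousand random candidate predictors against one random event series and report only the best match.

```python
# Sketch: dredge many random "predictors" against one random "event" series
# and report only the best correlation. Everything here is pure noise.
import random
import statistics

rng = random.Random(42)
n_years = 40
events = [rng.random() for _ in range(n_years)]        # stand-in for "major world events"

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

best_r, best_idx = 0.0, -1
for i in range(1000):                                  # 1000 junk predictors (potato prices, etc.)
    candidate = [rng.random() for _ in range(n_years)]
    r = pearson(candidate, events)
    if abs(r) > abs(best_r):
        best_r, best_idx = r, i

print(f"Best of 1000 random predictors: #{best_idx}, r = {best_r:+.2f}")
# With 40 points, |r| above roughly 0.31 is nominally "significant" at p < 0.05 --
# and the best of a thousand junk predictors clears that bar essentially every time.
```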
  • by ponos ( 122721 ) on Saturday February 10, 2007 @09:18AM (#17962082)

    The whole point of statistics is that some "streaks" are very improbable if they come from a truly random source. In that sense, if a random number generator displays such a tendency, it is rather probable that it isn't really random. So, yes, statistical power (the ability to detect small differences) increases with huge sample sizes, but a truly random source should still pass such tests -- show no significant deviation -- with probability 0.95 at the conventional 0.05 level, regardless of sample size. That is because the tests ALWAYS compare the sample against what a theoretically random source would produce. This is the way these things work.

    I would also like to point out (not to you, personally) the difference between statistically significant and meaningful. Even if an absurdly small difference can be established with near certainty, it remains to be seen whether it matters in actual practice. This is a common cause of confusion, especially when medical epidemiological studies demonstrate a 0.001% reduction in the risk of heart attack for people who eat cucumber every day. The 0.001% may be real, but it doesn't really matter (the sketch after this comment illustrates the gap).

    P.
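    A small numerical illustration of that gap (toy numbers of my own, not from any study): over a hundred million trials, a hit rate of 50.05% instead of 50% yields an astronomically small p-value while remaining practically irrelevant.

```python
# Sketch: a tiny effect becomes "highly significant" with a huge sample,
# without becoming any more meaningful. Toy numbers, not from any study.
import math

n = 100_000_000          # trials
p_true = 0.5             # chance expectation
p_observed = 0.5005      # observed hit rate: 0.05 percentage points above chance

se = math.sqrt(p_true * (1 - p_true) / n)
z = (p_observed - p_true) / se
# two-sided p-value from the normal approximation
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.1f}, p = {p_value:.2e}")   # z = 10.0, astronomically small p
print(f"...for an effect of {100 * (p_observed - p_true):.2f} percentage points")
```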

  • by mangu ( 126918 ) on Saturday February 10, 2007 @09:40AM (#17962182)
    I have little reason to doubt their methodology


    Well, if you check one of their papers [princeton.edu], you'll find the following sentence on page 7: "While no statistically significant departures of the variance, skew, kurtosis, or higher moments from the appropriate chance values appear in the overall data, regular patterns of certain finer scale features can be discerned." That's an outright confession of fraud. They are saying that when they analyze the overall data they cannot find any evidence, so they pick whatever finer-scale sample suits them. It's as if I threw a coin a million times and said: "Oh look! Here I threw ten heads in sequence!"


    Further on, on the next page, they state: "Given the correlation of operator intentions with the anomalous mean shifts, it is reasonable to search the data for operator-specific features that might establish some pattern of individual operator contributions to the overall results. Unfortunately, quantitative statistical assessment of these is complicated by the unavoidably wide disparity among the operator database sizes, and by the small signal-to-noise ratio of the raw data, ..." In other words, they didn't follow a consistent testing protocol and didn't have a standardized method for training their operators. Basically, they are admitting that any statistical correlation in their data is extremely small (which is what "small signal-to-noise ratio of the raw data" means) and that they have no way to check whether any positive results are simply attributable to insufficient training of their operators (the sketch after this comment shows how operator-by-operator selection can manufacture such "features").


    Of course, if they *did* communicate their results by telepathy, that would be extraordinary proof. But what they have published is rather underwhelming; can we assume that if they had any better results, they would have published them?
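    To make that operator-selection worry concrete, here is a sketch with entirely invented data (not PEAR's): give a hundred "operators" wildly unequal numbers of trials on a fair coin and then point at the strongest one.

```python
# Sketch: with many "operators" and unequal database sizes, the best-looking
# operator can show a striking z-score even when every bit is a fair coin.
import math
import random

rng = random.Random(7)
n_operators = 100

best = (0.0, None)
for op in range(n_operators):
    n_trials = rng.randint(1_000, 50_000)           # wildly unequal database sizes
    hits = sum(rng.random() < 0.5 for _ in range(n_trials))
    z = (hits - 0.5 * n_trials) / math.sqrt(0.25 * n_trials)
    if abs(z) > abs(best[0]):
        best = (z, (op, n_trials))

z, (op, n_trials) = best
print(f"'Star' operator #{op}: z = {z:+.2f} over {n_trials} trials")
# The maximum of a hundred null z-scores is typically near 3 --
# nominally "significant", purely by selection.
```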

  • Re:Ahem (Score:2, Informative)

    by Atraxen ( 790188 ) on Saturday February 10, 2007 @10:00AM (#17962280)
    Right... because bloggers generally have the background to evaluate science. If I wrote a summary of how nuclear magnetic resonance works (sure, we can slightly bias which direction the poles of an atom's nucleus point with a magnet!) plenty of them would scoff. That's why scientists believe in PEER review - the person reviewing the work should be well enough grounded in the field to have an opinion informed by all the nuance of the discipline. That's why it works out sooo well when the Legislative or Executive branch decides to get involved in deciding what 'good science' is (after all, since global warming is only a slight bias in a long-term streak of temperature data, there's no reason to believe in it...)

    From what I've seen of their work (and it's not much) they aren't saying "omg were so sykik!", they're saying "here's data that's anomalous and not adequately explained by existing theories". Whether you buy their argument or not, these folks aren't trying to sell snake oil to cure the gout, they're following up on something they find interesting. That's the great thing about science - we let folks go off the reservation. In the end though, it's good to be skeptical of their results, just like we are when we hear about cold fusion.

    I will say I'm not betting my laptop on their results. An inability to find peer-reviewed funding streams certainly says that, whether your hypothesis is right or wrong, you've been unable to articulate your research convincingly. I won't join the chorus of mockers, though - their intent doesn't seem to be deception, so they're doing science some (small?) service.
  • Re:Also (Score:2, Informative)

    by Modesitt ( 551306 ) on Saturday February 10, 2007 @10:26AM (#17962450)
    Come back when you've read the FAQ [randi.org].
  • by Anonymous Coward on Saturday February 10, 2007 @10:44AM (#17962586)

    Didn't those two say the same thing about global warming?

    Um, no. Do you read fairy tales or something?
    Bullshit! [wikipedia.org]
  • by grammar fascist ( 239789 ) on Saturday February 10, 2007 @02:08PM (#17964014) Homepage

    Heresy! I have the utmost faith in the scientific method! Don't tell me the persecutions scientific minds suffer for their beliefs are in vain.

    Seriously...

    Seriously? Science does make a number of untestable assumptions, without which it would be impossible to conduct. This is true of every kind of inference. The main difference between science and religion is that science claims to be objective.

    We know that's hogwash. Take, for example, the simplest probability model for discrete parameter estimation (science does things like this all the time, generally without a strong statistical foundation): it's not possible to know anything useful about the parameter without making an assumption that can't be founded on logic alone. That is, if you use a uniform prior and a uniform likelihood - the most objective ("maximum entropy") model you can build - your posterior distribution must be uniform. For continuous parameter estimation, which science concerns itself with more often, you often can't even formulate an objective model... (the little sketch after this comment works through the discrete case).

    Results of this kind are collected under the name "No Free Lunch theorems," and they ought to be studied by everybody doing inference, not just by machine-learning and AI researchers. These are very low-level proofs: there is no philosophy involved, only math.

    The claim that the scientific process leads to objective truth is nothing more than axiomatic. Under certain conditions that, as far as we know, are impossible to verify, it may be true.

    Not that I'm saying science should be classified as religion, but thinking rigorously about its claims ought to reduce errors in judgment.
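    A tiny worked version of the discrete claim above (my own construction, just restating the parent's point): with a uniform prior and a likelihood that is flat in the parameter, Bayes' rule hands back a uniform posterior, so nothing is learned.

```python
# Sketch: with a uniform prior and a likelihood that is flat in the parameter,
# the posterior is forced to stay uniform -- no information is gained.
thetas = ["theta1", "theta2", "theta3", "theta4"]

prior = {t: 1 / len(thetas) for t in thetas}          # uniform prior
likelihood = {t: 0.2 for t in thetas}                 # data equally likely under every theta

unnormalized = {t: prior[t] * likelihood[t] for t in thetas}
evidence = sum(unnormalized.values())
posterior = {t: v / evidence for t, v in unnormalized.items()}

print(posterior)   # every theta still gets 0.25: the "most objective" model teaches nothing
```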
  • Re:Also (Score:3, Informative)

    by dubl-u ( 51156 ) * <2523987012@noSPAm.pota.to> on Saturday February 10, 2007 @02:19PM (#17964098)
    Randi claims that most applicants never agree to a "proper test protocol", and are never tested. But he also points out that both sides have to agree what that "proper test protocol" is. So either side can basically tank the process by being disagreeable. With a million dollars on the line (not to mention his reputation), you have to believe that Randi has a serious incentive to make sure that nobody passes the test. Apparently the easiest way to do so is to ensure that nobody (or only a very few people) actually gets to take the test.

    The descriptions I've read of what he considers proper test protocols are quite reasonable. Do you have any actual evidence of them making unreasonable requirements to sink things? Or are you just engaging in FUD?

    Looking at the forum on applicants [randi.org], for example, things seem pretty above-board. In addition to the specifics, which seem fine, you can see that Randi often delegates the negotiations to skeptic groups. Are you suggesting that they are all in secret collusion with Randi to drive these people off?

    Now if the criteria were set and judged by a neutral third party, then I might have a little more faith in the challenge. But I doubt that would ever happen because JREF would then face the chance (however minute) of actually losing the money and the bragging rights.

    Then start your own prize. Don't have a million dollars? That doesn't matter. Randi didn't either. Back in the day, I and a lot of other people signed notes backing the prize. Now it sounds like he has cash in hand. If you put together a prize with criteria that are better than Randi's, you'll do even better. But make sure you include some experts in flimflammery as part of it. A good mix of scientists and magicians is what I'd like to see.
  • by yderf ( 764618 ) on Sunday February 11, 2007 @03:24AM (#17969986)
    If you keep reading about the "finer scale features" on page 7, you will see that there are other deviations; in particular, the mean shifted from .5 to (.5 + epsilon_mu).

    Reading the abstract, you can quickly see that the reported mean shifts were small yet statistically striking (7 sigma), while their pseudorandom control source yielded no mean shift at all.

    Essentially it appears as if there is something very small going on here, which should be tested and either confirmed or refuted by future research (a rough back-of-the-envelope sketch of the sizes involved follows).
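    Some rough arithmetic on those sizes (illustrative numbers of my own, not figures from the paper): for fair bits the standard error of the mean is 0.5/sqrt(N), so a per-bit shift of epsilon gives z = 2 * epsilon * sqrt(N).

```python
# Sketch: how small the per-bit effect can be and still reach ~7 sigma,
# given enough bits. Illustrative numbers only, not taken from PEAR's paper.
import math

def z_score(epsilon, n_bits):
    """z for a mean shift of `epsilon` above 0.5 over `n_bits` fair bits."""
    return epsilon / (0.5 / math.sqrt(n_bits))   # i.e. 2 * epsilon * sqrt(n_bits)

for n_bits in (10**6, 10**8, 10**10):
    for epsilon in (1e-3, 1e-4, 1e-5):
        print(f"N = {n_bits:>12,}  epsilon = {epsilon:.0e}  z = {z_score(epsilon, n_bits):7.2f}")

# Inverting the formula: a 7 sigma result at epsilon = 1e-4 needs N = (7 / (2 * 1e-4))**2 bits.
print(f"Bits needed for 7 sigma at epsilon = 1e-4: {(7 / (2 * 1e-4)) ** 2:.3g}")
```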
