The Almighty Buck Science

Researchers Unable To Replicate Findings of Published Economics Studies (businessinsider.com) 213

An anonymous reader writes: Federal Reserve economists Andrew Chang and Phillip Li looked at 67 papers in 13 reputable academic journals. Their findings were shocking. Without the help of the authors, only a third of the results could be independently replicated. Even with the authors' help, only about half, or 49%, could. Business Insider reports: "It's a pretty massive issue for economics, especially given the impact that the subject has on public policy. Li and Chang use a well-known paper by Carmen Reinhart and Ken Rogoff as an example. The study showed a significant growth drop-off once a country's national debt reached 90% of gross domestic product, but three years after being published the study was found to contain a significant Microsoft Excel error that changed the magnitude of the effect." With cancer studies and, most recently, psychology studies all having replication trouble, these economics papers have some company.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by h33t l4x0r ( 4107715 ) on Wednesday October 07, 2015 @06:06PM (#50682495)
    That sounds about right.
    • Re: (Score:3, Interesting)

      Good one. But I always assume in situations such as "Economics Research" that there are all sorts of incentives to cook the books so to speak. Writing what policy makers want to hear will get you hired on as a White House advisor, where you can join the economic team that uses the flawed data to implement horrible economic plans of the type that the Republicans and DLC Democrats have been shoving down our throats since Reagan. So I am not entirely sure that those were just "honest mistakes".

      • by ShanghaiBill ( 739463 ) on Wednesday October 07, 2015 @07:31PM (#50682979)

        in situations such as "Economics Research" that there are all sorts of incentives to cook the book

        This is not unique to economics. Most scientific fields have problems with replication. Journals are strongly biased toward publishing positive results, and nobody gets tenure for negative results or replication. I believe the last Nobel Prize for a failed experiment was Albert Michelson [wikipedia.org] in 1907. There are strong incentives to cheat, or at least cut corners.

        • by Roger W Moore ( 538166 ) on Thursday October 08, 2015 @01:24AM (#50684117) Journal

          This is not unique to economics. Most scientific fields have problems with replication. Journals are strongly biased toward publishing positive results, and nobody gets tenure for negative results or replication.

          Economics is not a scientific field, and the fields which seem to have the most problems with replication are medical, not scientific ones. And "nobody gets tenure for negative results" is simply not true, because I did! Indeed, it is common in particle physics, where we search for evidence of new physics beyond the Standard Model and, with only one exception so far, keep coming up empty-handed. As for the most recent Nobel for a "failed" experiment, try the one of two days ago: it was awarded to two experiments which failed to show that the Standard Model description of neutrinos was correct.

          I think your definition of "failed experiment" needs almost completely reversing. Michelson-Morley was a stunning success: it completely destroyed the luminiferous aether model for light. It was not the result that was expected, but that does not make it a failure. The same applies to neutrino oscillations. Not getting the result you expect from an experiment is the thing every scientist hopes for, because it means that you have learnt something new about the universe, which is why these experiments often win Nobel prizes. If anything is a failed experiment, it is one that just ends up confirming existing theories: you were hoping you might learn something new and instead just ended up confirming what you already knew.

          • by The Real Dr John ( 716876 ) on Thursday October 08, 2015 @06:39AM (#50685007) Homepage

            Couldn't agree more, Roger. Economics is not a field of scientific inquiry. It is far more prone to fudging than biochemistry, or my field, neuroscience. Any scientist can manipulate or cherry-pick the data, but in the end it will only hurt their reputation, and so most experienced researchers go to great lengths to make sure their experimental results are reproducible. I am notorious in the lab for insisting on more repetition of experiments that have already worked well a number of times. Nobody likes doing repetitive and tedious experiments, but it is part of the job.

            I do however agree that fudging in science may be increasing, but that seems more due to the pressures of scarce funding. If science were funded as well as the military is, there would be no temptation for fudging on anyone's part.

      • I'm pretty sure most of these 'economic research' papers were used by industry to indicate to the government that the economy will do much better if the government gives them a bunch of money.

      • Ummm...so "settled economics" is to policy and politics as "settled science" is to policy and politics?
      • So I am not entirely sure that those were just "honest mistakes".

        We can be a bit charitable and say that once they got the results they wanted/expected, they weren't as rigorous at checking the numbers as they should have been. (By contrast, I would expect that if the mistake had disproven their theory, they *would* have found that spreadsheet error.)

        I do kind of feel for those guys - it's got to be embarrassing to find out that you've made your bones on a typo.

    • That sounds about right.

      Well, considering that, as revealed by the OOXML ISO specifications, there are quite a few functions in Excel - like FLOOR() and CEILING() - whose behavior is not the defined mathematical behavior... yes, to a certain degree Microsoft is culpable, since it is a reasonable expectation that these functions should work as mathematics defines them, and the resulting errors are subtle in their effect. (For instance, FLOOR(-2.5) in Excel will yield a result of -2 instead of -3.)
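      The distinction is easy to demonstrate. Here is a minimal Python sketch (Python standing in for the spreadsheet, purely as an illustration) contrasting the mathematical floor with rounding toward zero:

```python
import math

x = -2.5
# Mathematical floor rounds toward negative infinity.
print(math.floor(x))  # -3
# Truncation rounds toward zero, matching the Excel FLOOR behavior described above.
print(math.trunc(x))  # -2
```

      For positive numbers the two agree, which is exactly why the discrepancy is easy to miss in a spreadsheet full of mostly-positive data.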

      However, I would expect that someone that deals with

  • by riverat1 ( 1048260 ) on Wednesday October 07, 2015 @06:08PM (#50682509)

    Economics has always been one of the least predictive of "sciences". Economists with an ideological bent make things up with no relationship to the real world and people believe them.

    • by thesupraman ( 179040 ) on Wednesday October 07, 2015 @06:18PM (#50682569)

      Not even close.

      Look at mental health/psychology for a start.
      That is just a joke when it comes to scientific method.

      I suspect this is more a case of follow the money than actual bad science.. politics...

      • The problem with economics isn't the math or the scientific method. The problem is that, at its lowest level, what happens in economics is based entirely on how all the individual participants in the economy think and act. That is, if your economy were based on a population of deterministic robots whose "decisions" could predictably be quantified and modeled, then economics would probably be the purest science right after math [xkcd.com].

        But because the economic actors are humans, many of them with wildly unpredicta
    • Re: (Score:2, Insightful)

      Economics is a "science" in the same way that a pickle is "candy".

      The best that can be done is to make some generalized guesses based on hazy metrics and barely-understood past events.

      • by AthanasiusKircher ( 1333179 ) on Wednesday October 07, 2015 @07:22PM (#50682931)

        Economics is a "science" in the same way that a pickle is "candy".

        The best that can be done is to make some generalized guesses based on hazy metrics and barely-understood past events.

        While that may be true, that is not the problem with these studies.

        TFS is misleading here in referencing the problems in other disciplines, because there the replication problems often had to do with other scientists running an empirical experiment, collecting new data, and seeing whether the same trend occurs.

        In THIS study, the "replication" problems were solely due to insufficient documentation. The categories for reasons that studies could not be replicated were listed as: (1) missing public data or code, (2) incorrect public data or code, (3) missing software, or (4) proprietary data.

        There were no empirical "experiments" re-tested here. All they did was try to replicate data analysis, and "replication failed" when they couldn't access the relevant datasets or analytical software.

        This is still a significant problem in economics, but the failure in "scientific" methodology here was of a VERY different kind -- it just had to do with access to research tools, not new data that conflicted with previous findings.

        • by AthanasiusKircher ( 1333179 ) on Wednesday October 07, 2015 @07:42PM (#50683043)

          I just realized my post was slightly unclear -- the vast majority of "unsuccessful replication" had to do with problems with public access to data or data analysis methods.

          But about 1/4 of the studies that "failed replication" did so because of errors in the data or in the analysis of that data. In any case, no new data was collected here -- "failure" was mostly about lack of access.

          (Of course, whether full access would have allowed successful replication is another question. Regardless, they weren't actually collecting new empirical data and "running experiments" again in the usual scientific sense.)

    • Economics has always been one of the least predictive of "sciences". Economists with an ideological bent make things up with no relationship to the real world and people believe them.

      To be specific, politicians and pundits with a similar ideological bent believe them, and regular people just go along with whatever their favorite politicians and pundits tell them. Dismal indeed.

      How hard could it be to design real-world economic experiments that actually yield useful and reproducible results? But it seems like everyone in the field already has an agenda, and the tendency is to only submit papers that seem to back up their own personal pet theories. This can be a problem in any disciplin

      • by Chirs ( 87576 ) on Wednesday October 07, 2015 @06:45PM (#50682729)

        How hard could it be to design real-world economic experiments that actually yield useful and reproducible results?

        I'm guessing that most countries wouldn't want to be the subject of a real-world economic experiment...

        • by chipschap ( 1444407 ) on Wednesday October 07, 2015 @07:31PM (#50682977)

          I'm guessing that most countries wouldn't want to be the subject of a real-world economic experiment...

          Except that most countries are the subject of such experiments ... it's just that they are called "policy."

        • I'm guessing that most countries wouldn't want to be the subject of a real-world economic experiment...

          I'm intimately aware of one on the US state level: Brownbackistan. It's being transformed into a Koch Brothers Randian paradise (i.e., hell on earth for the 99%).

          To their credit, they have proven that by cutting taxes and gutting the state's regulatory, financial, and educational systems, you can indeed end up with far, far fewer real jobs and a demonstrably lower standard of living for those that can't

      • by HiThere ( 15173 )

        It would be *extremely* difficult to run controlled experiments in economics. For one thing, just try to find two identical populations to run your test on. There are *WAY* too many plausible variables.

        N.B.: That doesn't mean I don't think that most economists are politicians grinding an axe. It means that the ones who are trying to do decent economics studies would have a really hard problem even if they weren't being drowned out in noise that's essentially theological.

        • "It would be *extremely* difficult to run controlled experiments in economics."

          No, it is extremely simple. In fact, it is so simple that every new government tries theirs as soon as it gets into power: they call them policy. What you can't count on is controlled *repeatable* experiments. But, you see, theories don't need them either.

          The problem is one of willingness. The problem is that those governments don't really want to conduct those experiments in a proper way, nor learn from the outputs and, much less, that ci

          • by swb ( 14022 )

            Didn't you just contradict yourself?

            If policy is an attempted experiment and governments always fail to conduct properly controlled experiments, doesn't that end up meaning that it actually is difficult to run controlled experiments in economics? Calling it a problem of will is about as much hand-waving as Keynes' animal spirits.

            Generally speaking, I can see where you might be able to run very simple controlled experiments, like taking a hot dog cart to different corners in a city and see how geography aff

            • "If policy is an attempted experiment and governments always fail to conduct properly controlled experiments, doesn't that end up meaning that it actually is difficult to run controlled experiments in economics?"

              No, it isn't an attempted experiment, it *is* an experiment. It's just that the ones conducting it don't want to clearly state their expectations, nor note down the results, nor publish them for public scrutiny. See, you may say it's very difficult for you to leave the chair in front of your computer

            • Even the person in the hot dog stand selling them could make a difference. Some people are naturally better at sales than others.

    • by guacamole ( 24270 ) on Wednesday October 07, 2015 @07:20PM (#50682917)

      Most academic economists are non-partisan. Or if they are partisan, they normally have pretty solid theory and numbers to back their claims. The economists who give the study of economics a bad reputation are those who make very opinionated and authoritative statements and reports but hardly publish anything in a peer-reviewed academic journal, where such antics wouldn't pass. Unfortunately, there are a few well-known economists who built a solid reputation in research circles years ago and then moved past producing publishable academic research into the realm of partisanship. Any economist knows who they are. There are some well-known liberals and conservatives among them.

      • Most of academic economists are non-partisan.

        I lol'ed. How many of those idiots still believe in infinite growth in a finite world?

        • How many people believe in answering a troll question that doesn't make any sense? Give a reference for your straw man argument please.

          • The burden of proof lies on you. You're the one claiming that economics is a science and that at least some economists have pretty solid theories and numbers.
            No matter how many PhDs and fake Nobel prizes they give themselves, they're still idiots thinking that basic physics laws don't apply to their world.

    • by Livius ( 318358 )

      I'm shocked that anyone was trying to reproduce an economics result. I genuinely had no idea they had ever tried.

    • The problem with economics is that it is the study of human interaction. There is no way to predict what will actually occur. The best you can do is derive general statements from first principles. The problem with this is that the conclusions you reach tell politicians to do very little, and that the more they try to help, the more they will hurt. This obviously isn't popular with politicians, so they get people from schools that like to play with data fitting and pretend they can predict what the outcome will be when

    • I tend to think of economics more as religion than science.
    • by JBMcB ( 73720 )

      It's extremely predictive. Given a limited set of variables, basic economic theory works pretty well - hedge funds bartering for futures or Maori tribes bartering for food.

      There are, of course, a lot of exceptions. Those exceptions are the study of economics.

    • by IAN ( 30 ) on Thursday October 08, 2015 @01:06AM (#50684049)

      Economists with an ideological bent make things up with no relationship to the real world and people believe them.

      It's an old, but relevant, joke:

      The First Law of Economics: For every economist, there exists an equal and opposite economist.

      The Second Law of Economics: They're both wrong.

    • by dywolf ( 2673597 )

      economics isn't a science like mathematics or physics or chemistry, which contain hard empirical facts that are infinitely repeatable.

      it's a behavioral science, like sociology. and human behavior is extremely unreliable and difficult to predict/replicate with certainty.

    • Funny, my economic theories predict and explain everything pretty perfectly. Granted, I don't try to predict the stock market or the rise of new nations with economics; you wouldn't use a blowtorch to drive a screw, either.

      Modern economic theories are largely stoneage garbage. I dispensed with the term "value" because I decided it didn't have a place in civilized economics; after a while, I started researching economics (because I wrote my theories in a vacuum, having never studied economics myself, and

  • "If all the economists were laid end to end, they'd never reach a conclusion." - George Bernard Shaw

  • If you took all the economists in the world and laid them end to end, they'd point in different directions.

  • by Dorianny ( 1847922 ) on Wednesday October 07, 2015 @06:23PM (#50682593) Journal
    Aren't errors in data analysis exactly the sort of issue the peer-review system is supposed to catch?
    • Re:Wait a minute. (Score:5, Interesting)

      by CanadianMacFan ( 1900244 ) on Wednesday October 07, 2015 @07:04PM (#50682819)

      Yes, but the peer-review system is flawed. Take someone who is pretty high up in their field: they have their research, people to manage, and, since they are probably at a university, some classes too. Add in all of the day-to-day home life stuff that needs to be done. And now they are asked to review a paper from someone else for free. Even if they want to do a great job on it, they are on a tight deadline from the publisher. So how thorough a job do you think they can do? Can they try to replicate the experiment? Look for an error in an Excel spreadsheet?

      It's like what a lot of software shops I've seen do with their testing. The development team races to finish it and gets the application done the day before release and hands it off to QA. Then everyone wonders why the program is full of errors in production.

      If you want a better peer review system you are going to have to give the reviewers more time and pay them for it.

    • by HiThere ( 15173 )

      Well, it's one of the sorts of errors peer review is supposed to catch. And I guess that in this case it eventually did. Compare this to free-market idealism, which is also supposed to be caught (there is no free market, there never has been a free market, and it wouldn't even be meta-stable if it existed for an hour somewhere). But this is wiggled around by using terms that are not subject to observation or test. Peer review is supposed to catch this kind of thing too, but it hasn't in the last 200 year

  • by Anonymous Coward on Wednesday October 07, 2015 @06:24PM (#50682599)

    I did a brief course in cognitive psychology during my masters. The course was given by a fairly well known name in the field - an editor of one of the standard texts.

    He specifically told us that we were to do 'anything we liked' to get our data to say what we wanted. He told us that it was vastly more important to publish and defend than not and get sacked. Very much a "it's easier to ask forgiveness than it is to get permission" sort of atmosphere.

    It wouldn't surprise me that this sort of attitude is rampant in other areas of 'science'.

    Only the hard sciences seem to have any real legitimacy and even then I wouldn't trust a biologist all that much.

    The level of trust I'd give to any statement by someone working in a given area is directly proportional to the area's 'purity': http://xkcd.com/435/

    • I don't trust physics. How could you trust a science that solves a problem by just throwing "dark" in front of the name?

      • by HiThere ( 15173 )

        Well, you are right not to trust the physics you get from the newspaper. Given your comment, I presume that's where you've been getting your physics.

        • Actually it was a play on a joke I heard on the Infinite Monkey Cage podcast, made by a chemist.

    • by pz ( 113803 ) on Wednesday October 07, 2015 @10:49PM (#50683737) Journal

      Only the hard sciences seem to have any real legitimacy and even then I wouldn't trust a biologist all that much.

      I got started in life as an Engineer (3rd or 4th generation, as far as I can tell) and became a Biologist. One of the first things that shocked me was the notion of noise. In Electrical Engineering, noise is well managed and understood. When you say you have a good fit to your data, it means errors of less than 1%. In Biology, there's so much noise and inherent variability that when you say you have a good fit, it means errors of less than 50%.
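      As a toy illustration of that gap (the numbers below are hypothetical, not real measurements), a relative RMS error makes the two notions of a "good fit" concrete:

```python
def relative_rms_error(measured, predicted):
    """Root-mean-square residual, normalized by the size of the data."""
    num = sum((m - p) ** 2 for m, p in zip(measured, predicted)) ** 0.5
    den = sum(m ** 2 for m in measured) ** 0.5
    return num / den

# An "engineering-grade" fit: predictions within about 1% of the data.
eng = relative_rms_error([1.00, 2.00, 3.00], [1.01, 1.99, 3.02])

# A "biology-grade" fit: predictions off by tens of percent.
bio = relative_rms_error([1.0, 2.0, 3.0], [1.5, 1.2, 4.0])

print(f"engineering: {eng:.1%}, biology: {bio:.1%}")
```

      Both fits would be reported as "good" in their home fields; the difference is purely in what counts as acceptable noise.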

      There are very few biological processes we understand well enough to say that we really, deeply understand them. Unlike, say, a transistor.

  • ...or the entire sweater of our existence will come unraveled.

  • by Anonymous Coward on Wednesday October 07, 2015 @06:34PM (#50682657)

    This paper just finds that in many cases when journals require that replication files be posted, they aren't. Of the half of studies that aren't "replicated", the majority are due to the fact that replication files simply aren't available. It's not like these people read the papers, got the data, and reran the analysis on their own. All this paper is saying is that posted replication files often either don't exist or don't work. Their work doesn't show that the results can't be replicated, just that they can't be replicated from public code.

  • by supernova87a ( 532540 ) <kepler1.hotmail@com> on Wednesday October 07, 2015 @06:36PM (#50682669)
    Issues like this were already being flagged in 2013:

    http://www.nytimes.com/2013/04/19/opinion/krugman-the-excel-depression.html
    http://www.washingtonpost.com/news/wonkblog/wp/2013/04/16/is-the-best-evidence-for-austerity-based-on-an-excel-spreadsheet-error/

    First of all, shame on the authors for not checking their models enough, not asking others to check them, and not opening their models for others to see before publishing "important" results.

    Secondly, and perhaps more importantly, shame on the rest of us (and especially policymakers) for relying on such kinds of work so quickly and without validation to support generally political agendas. It's almost the equivalent of funding vaccine-skeptic studies by choosing which doctors will speak in your favor without regard to a rigorous scientific review process.
  • Well duh (Score:2, Interesting)

    by Anonymous Coward

    I majored in Economics, an area of study with a huge problem of relying on mathematical formulas without worrying about how accurately those formulas reproduce reality. I'm honestly surprised even half are reproducible. The honest opinion of many professors therein is "pfft, the numbers agree with themselves perfectly! That's all that matters."

  • by guacamole ( 24270 ) on Wednesday October 07, 2015 @07:11PM (#50682863)

    The solution is pretty simple. Every author must reveal the code that was used to produce the results.

    However, one interesting issue is that once every author reveals their code, you could find out that half of them code in MATLAB with a proficiency comparable to a 3rd grader who just learned BASIC programming up to about the "goto" statement. Not only is there lots of spaghetti code, but it may also contain serious errors that filter through into the papers. Hence, I suspect a lot of people will not be happy to reveal their code.

  • Current economic theory could well substitute for some primitive magical cult. Hence monstrosities such as the Black–Scholes [theguardian.com] model (a rehashed nuclear physics formula), the magic formula that caused the global economic meltdown. You might as well wave a dead chicken at your Quotron/Bloomberg terminal.

    Liar's Poker [amazon.com] by Michael Lewis
  • by geekmux ( 1040042 ) on Wednesday October 07, 2015 @09:31PM (#50683465)

    "...With cancer studies and most recently psychology studies all having replication trouble, these economics papers have some company."

    Unless we're actually going to institute some sort of reform when it comes to peer review and documented result validation, the only "company" these papers will be in is an endless sea of ignorance that assumes it won't keep happening over and over again.

    There is far too much money to be made in simply publishing papers (a.k.a. bullshit) to actually validate results. Therefore, a whole new breed of fucking liars (sorry, no other words suffice) has been manipulating policy for quite a long time now.

    Now society, the onus is on you. What are you going to do about this issue? Sit around and wait for it to happen again? That's what you did the last dozen times.

    • You are correct, the scientific validity of a paper does not rest in where it is published or who has reviewed it... but simply in its repeatability.

      It should probably be made part of degree progression (somewhere between honors and doctorate) to show or not show the repeatability of existing papers... and only those papers that are shown to be repeatable become part of the accepted knowledge, the unrepeated papers to be considered speculative at best, and the unrepeatable to be abandoned.

  • by awol ( 98751 ) on Wednesday October 07, 2015 @10:47PM (#50683725) Journal

    I am an econometrician (well sort of), which is probably worse, but at least we know that. But economics, independent of any data set availability or actual method problems, is broadly handicapped by the generally unobservable nature of the actual data that would enable the verification (or refutation) of a hypothesis. That is, much of the data is quite noisy with many variables mixed in with each other, and as such a big part of the work is trying to determine the extent to which the data itself is a useful measure of the thing being tested. Sometimes getting to a useful dataset is dependent on some awkward assumptions. As such, one of the biggest faults of Economic Theory is assuming a can opener (https://en.wikipedia.org/wiki/Assume_a_can_opener).

  • by Tony Isaac ( 1301187 ) on Wednesday October 07, 2015 @10:51PM (#50683757) Homepage

    For most studies of any kind, the margin of error is around +/-3%. For example, a study covering the United States population using a sample size of 1000 will yield a margin of error of 3.1% [americanre...hgroup.com].

    So a study says that texting while driving increases your risk of a fatal crash by 23 times [trw.com]. That sounds like a lot! But hold the phone... the overall rate of traffic fatalities is about 10 per 100,000 people [wikipedia.org], or about 0.01%. Multiply that by 23 and you get 0.23%. A big change, right? But that 0.23% is still well within the margin of error of almost any study.
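    The arithmetic is easy to check with a short Python sketch, using the standard worst-case formula for a proportion's 95% margin of error (an assumption on my part; the linked page may use a slightly different formula):

```python
import math

# 95% margin of error for an estimated proportion, worst case p = 0.5
n = 1000
moe = 1.96 * math.sqrt(0.25 / n)
print(f"margin of error: +/-{moe:.1%}")  # +/-3.1%

# Baseline traffic fatality rate and the claimed 23x multiplier
baseline = 10 / 100_000       # 0.01% of the population per year
elevated = baseline * 23      # 0.23%, still smaller than the margin above
print(f"elevated rate: {elevated:.2%}")
```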

    I'm not saying that texting while driving isn't dangerous. I'm just saying it's a lot harder to prove a link than it would seem.

    Still, people are wowed by big multipliers, and news writers love to tout dramatic statistics, whether the subject of the study is economics, medicine, or traffic safety. But if you understand statistics, you know that most of these studies don't really tell us much. It's no wonder we keep getting contradictory study results!

  • "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." - https://en.wikipedia.org/wiki/... [wikipedia.org]
  • I don't care what anybody says, economics isn't a science, and can't be a science ... because far too much about how you interpret and use economics is determined by how you ideologically believe it should work.

    Economics, in large part, is bad math with unfounded assumptions, making hand-waving conclusions about something which happened (or which you believe should happen) to explain it according to how you need it to be explained to match your world view.

    Economics is not and never can be an objective science.

    • You do realize that science is the method by which you investigate, not the subject of study.

      You can stick your head in the ground and ignore economics but since economics affects everything in your life, you do best to pay attention to it.
      • I know the scientific method is how you investigate stuff ... I also know economics is pretty much 50% ideology, which means it's wrapping itself up in the claims of being a science while not really being one.

        Yes, economics affects our lives ... and in terms of telling us what has historically happened, it has some uses ... and then it falls to shit in terms of being either predictive of what will happen, or being successful in telling us what we should do to achieve an outcome.

        But as far as being an object

  • Science in which experimental controls are illegal, isn't.

    Experiments conducted on unwilling human subjects aren't ethical.

    All of the social "sciences" are unethical non-sciences.

    The solution?

    Sort proponents of social theories into governments that test them. [sortocracy.org]
