
iTunes Sales Not 'Collapsing' After All

john82 writes "Earlier this month we had a report from Forrester, based on a random sampling of 2,000 credit card accounts, that purported to show that iTunes sales were crashing. Now comes another survey, from Reston, VA-based ComScore, which indicates the exact opposite. ComScore's report, which is based on actual iTunes sales, shows an 84% increase during the first nine months of this year compared to the same period last year. Meanwhile, the author of the Forrester report, Josh Bernoff, noted in his blog yesterday that they shouldn't be pummeled just because everyone took what he wrote and ran with it."
  • by BWJones ( 18351 ) * on Thursday December 14, 2006 @10:16PM (#17248488) Homepage Journal
    Meanwhile the author of the Forrester report, Josh Bernoff, noted in his blog yesterday that they shouldn't be pummeled just because everyone took what he wrote and ran with it.

    Well, that is why people should be responsible for their reporting. In my business, when you report something, you stand by it. If you present data or a theory with the suspicion that it is incorrect, that is fraud in my line of work. Seriously though, did you *really* think that a sample size of just over 1000 purchases on credit cards obtained through a back-channel source is a reliable sample for the number of iTunes purchases? If I recall correctly, Apple announced back in February that they were selling about 3 million songs/day, and if the current estimates of increases on the order of 84% are correct, your sample size is woefully under-representative. That's just high school statistics, by the way...

    I am not saying that you should lose your job over this one, but this should be a tacit reminder of how important good reporting is. If you are beyond your means or competence on a particular story or analysis, go find some help before you publish, do some fact-checking, and be more careful with stories that can have a significant impact on companies and individuals.

    • by Elsan ( 914644 ) on Thursday December 14, 2006 @10:23PM (#17248558)
      I stand by this man as long as he isn't proven wrong.
    • by RichPowers ( 998637 ) on Thursday December 14, 2006 @10:28PM (#17248610)
      Indeed, he should stand by his work. But all of the websites and blogs weren't afraid to use Mr. Bernoff's report to drum up an Apple doom-and-gloom story for the sake of attracting readers.
      • by Gilmoure ( 18428 ) on Thursday December 14, 2006 @11:14PM (#17248986) Journal
        Well, everyone knows that Apple is beleaguered and ready to die. Everyone knows that!
      • by MrMickS ( 568778 ) on Friday December 15, 2006 @07:31AM (#17252874) Homepage Journal
        Most people didn't refer to his report. Rather, they referred to an article on The Register [theregister.co.uk] instead. The author of The Register article has a history of anti-iTunes Store articles and an anti-DRM agenda. He took what he wanted from the report to back up his viewpoint. The real problem is the way the rest swallowed this without checking the source themselves.

        Sadly this seems to be the deal in journalism at the moment. Everything is sacrificed in order to be first to publish or, if not first, then not too far behind. Accuracy appears to be sacrificed in the race to publish.

        • Sadly this seems to be the deal in journalism at the moment. Everything is sacrificed in order to be first to publish or, if not first, then not too far behind

          I agree. At the newspaper I work for, we once had to reprint over 600 copies of our A-section. Why? One of our reporters took a rumour about a missing girl as fact. Although most of us had dismissed it (as the story was being covered nationally), he chose to go with the story. When he was chewed out by the Publisher, he was venting at someone in

        • by madth3 ( 805935 )
          Sadly this seems to be the deal in journalism at the moment. Everything is sacrificed in order to be first to publish or, if not first, then not too far behind. Accuracy appears to be sacrificed in the race to publish.
          Transmetropolitan, anyone?
    • by ceoyoyo ( 59147 ) on Thursday December 14, 2006 @10:29PM (#17248616)
      A sample size of 1000 isn't necessarily inadequate... it depends on how much variance there is in the population and how big a change you're seeing. Those factors are normally summarized with a p-value, indicating how likely you are to be wrong.

      Naturally they didn't collect enough data to calculate a p-value... THAT was their mistake. Of course, nobody seems to do that, so really, it's par for the course.
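
      To make that concrete, here is a minimal sketch of the kind of test being described: a two-proportion z-test in Python. All counts are invented for illustration; none come from either report.

        # Hypothetical two-proportion z-test: is the share of iTunes
        # purchases different between two years? All counts are made up.
        from math import sqrt
        from statistics import NormalDist

        def two_proportion_z(hits_a, n_a, hits_b, n_b):
            """Return (z, two-sided p-value) for H0: the proportions are equal."""
            p_a, p_b = hits_a / n_a, hits_b / n_b
            pooled = (hits_a + hits_b) / (n_a + n_b)
            se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
            z = (p_a - p_b) / se
            return z, 2 * (1 - NormalDist().cdf(abs(z)))

        # Say 10 iTunes buys in 1000 transactions last year vs. 6 this year:
        z, p = two_proportion_z(10, 1000, 6, 1000)
        print(f"z = {z:.2f}, p = {p:.2f}")  # p is about 0.3

      An apparent 40% drop in a sample this small is entirely consistent with chance; that is what a large p-value means here.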
      • by wrook ( 134116 ) on Thursday December 14, 2006 @11:40PM (#17249186) Homepage
        Maybe this is obvious to people and maybe this isn't, but I thought I'd just clarify this in more standard lingo.

        The sample size you need to take for doing the study is dependent upon the probability that you expect the event to occur. So for example, out of 1000 random purchases, how many do you expect to be iTunes purchases? Most people buy a lot of things on their credit cards. So my guess is that only maybe 5 out of 1000 purchases would be iTunes purchases. The rest would be clothes, gas, groceries, restaurant meals, movies, gifts, etc, etc, etc.

        Let's say I'm right. If the expected value is 5 out of 1000, what are the odds that I might find 6 or 4 purchases in that sample? Well, depending on the distribution, it's not going to be that unusual. Remember, the *average* number you find will probably be about 5. If you actually look at 1000 random purchases, the actual number you find will vary.

        So you might find 4 or even 3 with a pretty high probability (don't know off the top of my head what that probability is -- especially since I don't know the distribution of the data). So you have a pretty high probability of reporting something like 0.3% of purchases are iTunes purchases, when the real value is 0.5%. That's a *huge* error.

        But as others have said, the guys that do these studies know their stats. They don't put out crap reports by accident. They are intentionally misleading. Any reputable report that is based on statistical analysis will give you the error bars (i.e. + or - 5% 9 times out of 10). If this report had done this it would have said something like 65% reduction in sales +- 10% 1 time out of 10 (i.e. they aren't confident about their interval) or 65% reduction in sales +- 150% 9 times out of 10 (i.e. the error bars are totally crazy). And then it would be obvious the study was totally bogus.

        Note: All numbers I've used are fictional. I took stats 20 years ago and I *really* don't remember any of the actual numbers...
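
        To see how big that sampling noise actually is, here is a quick simulation (a sketch only; the 0.5% purchase rate is the guess above, not measured data):

          # Simulate many samples of 1000 transactions where iTunes buys
          # occur at an assumed true rate of 0.5% (i.e. 5 per 1000).
          import random

          TRUE_RATE = 0.005
          SAMPLE_SIZE = 1000
          TRIALS = 10_000

          counts = [sum(random.random() < TRUE_RATE for _ in range(SAMPLE_SIZE))
                    for _ in range(TRIALS)]

          # How often is the observed count off the expected 5 by 40% or more?
          off = sum(abs(c - 5) >= 2 for c in counts) / TRIALS
          print(f"3 or fewer, or 7 or more: {off:.0%} of samples")

        Roughly half of all samples come out at 3-or-fewer or 7-or-more, so a measured "drop" or "surge" of 40% is routine noise at this sample size.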
        • by ceoyoyo ( 59147 ) on Friday December 15, 2006 @12:40AM (#17249992)
          Problem is, you have to know that critical "how many purchases are iTunes purchases" value.

          The proper place to start is by taking last year's data (or last month's, or whatever) and measuring that value, then measuring it again today. Then you can ask the question whether iTunes sales have changed or not. Once you've shown there's a high probability that they've decreased (that might take ten samples, or it might take millions), THEN you can talk about how much they've decreased, and what sort of error bars go on that value.

          According to the original story they DID that, collecting data from 27 months... the difference between +80% and -60% is pretty huge... either they didn't do a simple t-test on their data, this was a VERY rare fluke, or they decided to release their numbers anyway.

          It definitely sounds like they were paid for their result... I wonder if maybe they didn't expect someone with much better data to come around so quickly to slap them down.

          I also like the quote in the article about iTunes' 1 billion dollars in sales not making up for the 2.5 billion dollar decrease in CD sales. Sounds about right to me. I doubt I'd purposely pay for more than a third of most albums.

      • Sample size is a BIG deal. Without a sufficiently large sample size you cannot generalize your results to your target population. If we are talking about a population of one million people and you have a sample size of one thousand, that's 0.1%.

        You would have useless information. You would need at least 10% to have meaningful results.

        The p-value is not how likely you are to be wrong. It is a measure of how likely your results are due to chance. The lower the number the less likely, the higher the n
        • Re: (Score:3, Insightful)

          by ceoyoyo ( 59147 )
          You're not correct. To determine whether two populations are different (which is what these guys are trying to do), you need to know both the variance in the population and the two means. Since you have to know the entire population to actually measure either, you estimate them, using a sample of that population.

          As the number of samples increases, your estimates improve. However, you can't just say "you need a 10% sample" to be accurate. It doesn't work that way. The size of the sample you need depends
    • by lelitsch ( 31136 )
      I think he should lose his job over something like this. They put out data that clearly bordered on fraudulent--if the sample size and method you quote are correct. Unfortunately, the report is not public, so I can't verify that. He should lose his job even more over the blog posting: hehe, this is very funny, not our fault at all, it's everyone else who's wrong. Screwing up is one thing; gloating about it and blaming everybody else, including his customer, is another.

      Now, most analysts are completely wron
      • by Fred_A ( 10934 )
        I think he should lose his job over something like this.
        Why should he? His PHB told him to write a report showing that iTMS sales were plummeting and he did. He did his job as instructed. It did require a bit of creativity, but it's not any different from any other job in marketing.
    • Re: (Score:3, Insightful)

      by kaleth ( 66639 )
      Did you actually read the article? I'm not saying you should be banned from posting again over this one, but this should be a reminder of how important it is to have a clue before commenting. You clearly neglected to do any fact checking yourself. From the blog:

      "Our credit card transaction data shows a real drop between the January post-holiday peak and the rest of the year, but with the number of transactions we counted it's simply not possible to draw this conclusion . . . as we pointed out in the report."

      Seems to me like a pretty clear admission that the sample size is too small to be reliable. He took the data he had available, analyzed it, and presented the results while noting the deficiencies in the method. Doesn't sound much like fraud to me.

      • Re: (Score:2, Redundant)

        by BWJones ( 18351 ) *
        Ah, but if you read what I wrote, you would note that I said "In my business... If you present data or a theory with the suspicion that it is incorrect, that is fraud in my line of work." Note that I said "in my line of work". So, I stand by what I said. If one were to conclude that they did not have the appropriate data, then why would they report it in the first place?

      • by lendude ( 620139 ) on Friday December 15, 2006 @01:09AM (#17250304)
        That sentence you quote is having it both ways:

        "Our credit card transaction data shows a real drop (my emphasis) between the January post-holiday peak and the rest of the year, but with the number of transactions we counted it's simply not possible to draw this conclusion . . . as we pointed out in the report."

        There is no way that he can use the words "...real drop..." in the same sentence as "...it's simply not possible to draw this conclusion...". Whilst those who uncritically took the information from this 'research' and used it (doubtless with some sensationalistic agenda in mind) deserve scorn, that very sentence itself demonstrates the research to be nothing more than PR to flog the thing at $249.00 a pop. If you take out the words "real drop" and substitute "no meaningful change" then this report was clearly worth fuck-all: at least in terms of the author's now visible desire to have something sexy to sell!

         • Maybe I'm an exception, but it's obvious to me that when he says "Our credit card transaction data shows a real drop between the January post-holiday peak and the rest of the year" he's talking about his sample data. That, in their sample, there is a "real drop" in the number of transactions. He then acknowledges that, based on their data, they couldn't draw the conclusion that iTunes sales, as a whole, experienced the same "real drop."
          • by lendude ( 620139 )
            No, I don't think you are an exception - that's the same meaning I took from the most specific reading of the sentence. However, I'm saying that I see no point in even mentioning a "real drop" in their sample data when it simply can't be extrapolated to sales as a whole, especially in a research document they are flogging: to me it appears to be a triumph of marketing over valid conclusions. "Gee, look at this dip! (PS. oh, but don't take any notice of it please!)". Cheers.
        • by radtea ( 464814 )
          There is no way that he can use the words "...real drop..." in the same sentence as "...it's simply not possible to draw this conclusion...".

          I believe the "this conclusion" he is referring to is the conclusion that iTunes sales are "collapsing" (whatever "collapsing" means.) There is a drop over time in the data they examined (which is therefore a "real drop", as opposed to the other kind, I guess) but one cannot make any strong inference about the overall state of iTunes sales from this.

          Abstraction has al
        • The problem is that one should not compare different quarters if there is any reason that sales may be seasonal. You can have a significant drop between quarters but still have a strong year-over-year increase. Anyone worth their salt in the retail or stock analysis industries would understand that. I would normally expect The Register or The Inquirer to understand that too; either that, or they ignore it, making them hack journalists at best.
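
          A toy calculation shows how a quarter-over-quarter "collapse" and strong year-over-year growth can both be true at once (all figures invented, not iTunes data):

            # Invented quarterly sales: a seasonal peak in Q1 (post-holiday
            # gift card redemptions), quieter later quarters, growth every year.
            sales = {("2005", "Q4"): 100, ("2006", "Q1"): 230, ("2006", "Q4"): 180}

            q_over_q = (sales[("2006", "Q4")] - sales[("2006", "Q1")]) / sales[("2006", "Q1")]
            y_over_y = (sales[("2006", "Q4")] - sales[("2005", "Q4")]) / sales[("2005", "Q4")]
            print(f"Q1 2006 -> Q4 2006: {q_over_q:+.0%}")  # -22%, the "collapse"
            print(f"Q4 2005 -> Q4 2006: {y_over_y:+.0%}")  # +80%, the growth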
        • "Our credit card transaction data shows a real drop between the January post-holiday peak and the rest of the year, but with the number of transactions we counted it's simply not possible to draw this conclusion . . . as we pointed out in the report." (emphasis mine)

          No surprise there. As has been stated multiple times already, this isn't a valid comparison. They should be comparing the equivalent period this year, one year ago, two years ago, etc., not peak sales and slow sales.

          I predict somebody's goin

      • Seems to me like a pretty clear admission that the sample size is too small to be reliable. He took the data he had available, analyzed it, and presented the results while noting the deficiencies in the method. Doesn't sound much like fraud to me. That's just grade school reading by the way...

        So in other words, he knew from the beginning that he was spewing out bullshit. The article never should have gotten past the editors. One can argue back and forth whether the journalist should be disciplined, I'd argue for it and for an investigation of possible conflict of interest, but there's no way the editors should have let the article through as it stood. They should have been canned.

    • Actually, the sample size of 1,000 was probably fine, or at least it would have been if they had used a truly random sample of credit cards. However, it is evident from their results that they didn't. The failure was in trying to extrapolate results from data that wasn't statistically valid.
    • by dcollins ( 135727 ) on Friday December 15, 2006 @12:47AM (#17250080) Homepage
      "Seriously though, did you *really* think that a sample size of just over 1000 purchases on credit cards obtained through a back channel source is a reliable sample size for the number of iTunes purchases?... Thats just high school statistics by the way..."

      I'm a college professor of statistics. I don't think you can actually quote a high school statistics book which says that sample size is too small. In general, a sample size of 1,000 gives 95% confidence that your result is within +/-3% of the actual result. This is *regardless* of population size - that's how statistics works, due to the Central Limit Theorem.

      http://en.wikipedia.org/wiki/Sample_size [wikipedia.org]
      http://en.wikipedia.org/wiki/Central_limit_theorem [wikipedia.org]
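
      The +/-3% figure is the worst-case margin of error from the normal approximation, and it takes one line to check (a sketch, independent of anything in the report):

        # 95% margin of error for a proportion estimated from n samples.
        # p = 0.5 maximizes p(1-p), which is where the +/-3% rule comes from.
        from math import sqrt

        def margin_of_error(n, p=0.5, z=1.96):
            return z * sqrt(p * (1 - p) / n)

        print(f"{margin_of_error(1000):.1%}")  # ~3.1%, for any population size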

      Now, the first thing that pops into my head is why only credit-card purchases? And even more fundamentally, why would the same people need to buy music, after they just went on a music-buying spree? I would think the opposite. That was the thing that made me skeptical of the report yesterday in the first place.

      • Re: (Score:3, Informative)

        by BWJones ( 18351 ) *
        You are focusing on the number when I should have put the emphasis on how the sample was selected. How about: "did you *really* think that a sample size of just over 1000 purchases on credit cards obtained through a back-channel source is a reliable sample for the number of iTunes purchases?..."

        As one professor to another I am sure you also teach sampling error and experimental design, right? Additionally, it should be noted that the actual samples used for the analysis out of the total records pulled
      • by Solandri ( 704621 ) on Friday December 15, 2006 @03:29AM (#17251482)
        The 1/sqrt(N) 95% confidence interval is only safe for common phenomena. That is, if the frequency with which you measure something is in the 10%-90% range (or thereabouts). As you get to either extreme, the confidence interval remains the same, but its accuracy in terms of raw numbers decreases.

        For example, if your sample is 1000, your 95% confidence interval is 1/sqrt(1000) = +/-3%. So if your 1000 samples showed 250 occurrences, you would know that it's 95% likely that the frequency of occurrence is between 22% and 28%. So the real frequency could be between 220 and 280 occurrences per thousand. No big deal for year-to-year comparison purposes. Worst case, a 50% drop in sales is still measurable, because one year you could've been low (220), the next year high (280/2 = 140), and the change is still statistically significant (outside your confidence interval).

        For rare phenomena, this runs into a problem. Say the frequency of occurrence is 0.1%. You take 1000 samples and you measure 1 occurrence. The neophyte statistics student will say "Cool, I measured 1 occurrence +/- 3%, so I have 95% confidence that the actual rate of occurrences is between 0.97 per thousand and 1.03 per thousand." Unfortunately, that's wrong.

        The confidence interval is based on the percentage you measured. Your confidence interval says there's a 95% chance that the actual frequency of occurrence lies between 0% and 3.1%. There is a huge, huge difference between 1 incident in a thousand and 31 incidents in a thousand, especially if you're trying to compare between two samples. One sample (year 2005) you might get 25. Next sample (year 2006) you might get 5. These are both within your confidence interval, but if you're not careful you would erroneously conclude that you have 95% confidence that sales plummeted to just 20% of the previous year's level.

        Put simply, if you want to accurately measure a rare phenomenon, your sample size has to be large enough that your confidence interval is significantly smaller than the rate at which that phenomenon occurs. If iTunes sales account for 0.1% of all credit card sales (which I think is a very high estimate) and you want to compare year-to-year changes, you probably want an accuracy of at least 1/10th of that 0.1%, or a margin of error of +/- 0.01%. Your sample size needs to be large enough that your confidence interval is around the 0.01% range. That is, you need a sample size of 100 million credit card transactions.
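
        Running that arithmetic (a sketch; the 0.1% rate and the one-tenth resolution target are the assumptions made above):

          # Worst-case (p = 0.5) sample size so the 95% margin of error is
          # +/-0.01%, one tenth of an assumed 0.1% iTunes purchase rate.
          target = 0.0001                  # desired margin of error
          n = 1.96**2 * 0.25 / target**2
          print(f"{n:,.0f} transactions")  # 96,040,000 -- on the order of 10^8

          # Using p(1-p) at the actual 0.1% rate instead of the worst case
          # shrinks this to a few hundred thousand, but either way a sample
          # of 1000 cannot resolve a 0.1% phenomenon.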

      • by 4D6963 ( 933028 )

        In general, a sample size of 1,000 gives 95% confidence that your result is within +/-3% of the actual result.

        True, but how many iTunes purchases will you get in a sample of 2,000 credit card accounts? I don't know, but it should be a few percent at most, if even one percent. Because of that +/- 3% thing (although it's not 3% anymore for 2,000 samples) you'll hardly get a very meaningful result.

        It's as if you polled 1,000 random persons twice, compared how much Ralph Nader got each time and say that he became sudd

      • Re: (Score:3, Interesting)

        by drsquare ( 530038 )
        So if you look at a thousand credit card transactions in 2005, and there are two iTunes songs bought, then if you look at a thousand credit card transactions in 2006, and only one iTunes song is bought, then you are 95% sure that iTunes sales have halved?

        And you're a professor for which university?
    • Re: (Score:2, Informative)

      1000 would be borderline statistically insignificant. If you read the post, he actually admitted that out of those, he only really used 181. Less than two hundred users out of 6 million!? And he has the nerve to blame everyone else for "misreporting" his findings? Saying you had any significant findings in a pool that ridiculously small, without any research into those customers' other possible methods of purchasing, is ridiculous.
    • Re: (Score:2, Insightful)

      by kfg ( 145172 )
      Well, that is why people should be responsible for their reporting.

      Dude, it's a think tank "report." They deal in the amorphous and write it in weasel; 'cause it's a living paid for by the brainless. Put it under a rhetorical microscope and there's little there to be responsible for.

      The real title of this story should be "Think Tanker admits he shits for money."

      KFG
  • Sounds like just one more example of one's desired conclusion ultimately altering the testing conditions and results to match. Seems to be almost a disease in this country.
    • Re: (Score:2, Funny)

      Ah yes, truthiness. How I've missed thee...
    • Sounds like just one more example of one's desired conclusion ultimately altering the testing conditions and results to match. Seems to be almost a disease in this country.

      Anywhere there are numbers, there is going to be a group of people wanting to reinterpret them. This is a classic sales approach. Reminds me of the advert I saw the other day: "we will pay the first three months for you", when in actual fact they are just paying the equivalent of the amount they jacked the price up by - doh!
  • I think it's obvious he posted that on purpose to give the media an excuse to run another round of "piracy is killing the music industry!" and then cleared it up later. That, by the way, also results in more people paying attention to the actual truth that they're doing better, so people feel more inclined to use their growing service. Who the hell runs a sample size that low anyway? And is it just a coincidence that, because of the low sample size, the results were so drastically different than what's really going
  • ComScore (Score:5, Informative)

    by TheRealMindChild ( 743925 ) on Thursday December 14, 2006 @10:35PM (#17248692) Homepage Journal
    ComScore [slashdot.org]. With a reputation like theirs, it must be true!
    • Re: (Score:3, Funny)

      by Goaway ( 82658 )
      Well, they, if anyone, would know, wouldn't they?
    • Can someone mod TheRealMindChild down? Truth isn't allowed on Slashdot.
    • by DingerX ( 847589 )
      I personally am savoring the effects of the Reality-Distortion Field on this whole discussion. The Forrester Group published a report, based on a small sample of opt-in consumers, in which they suggest that iTunes might have hit a plateau in music sales. They provide ample documentation of their method, and admit problems related to sample size.

      Yeah, the press blows stuff out of proportion, as they almost always do with statistics. Apple's stock loses 3%.

      Then, Comscore comes out with an equally ridiculous se

  • It's his story. If he didn't mean it, he shouldn't have put it out there.

    Typical of these "research" companies, though. Completely typical.
    • by wass ( 72082 )
      Seriously. I mean, in a nutshell, this guy is complaining that reporters did substandard research on his own substandard research.
      • Seriously. I mean, in a nutshell, this guy is complaining that reporters did substandard research on his own substandard research.
        Seriously, if this guy was in a nutshell, wouldn't he be complaining about being in said nutshell and looking for a way to get out of it?
  • by joe_bruin ( 266648 ) on Thursday December 14, 2006 @10:39PM (#17248718) Homepage Journal
    Technology sector analysts, the likes of Forrester and Gartner, are essentially paid mouthpieces for their biggest clients. Whether pumping your own products or badmouthing the competition, you can count on these guys to earn their money with totally bogus conclusions.

    Find a big analyst company that will admit that Itanium is a colossal disaster, that businesses don't want and don't need Vista, that HP's supply-line trouble and incompetent management are sinking the company (particularly during the Carly years), or that Oracle is terribly insecure. You won't, because they all have contracts with Intel, Microsoft, HP, Oracle, etc. But they won't hesitate to beat up on Sun (how many times have they called for McNealy's resignation?), AMD, Apple, and others that don't spend that kind of money on various analysis contracts, and to predict their doom*.

    So sure, iTunes sales are collapsing (according to Forrester), but nobody will call Zune a turd. It's all in a day's work.

    *disclaimer: I might be considered a fanboy of one of these companies, and it's not Apple
    • Re: (Score:3, Funny)

      by badonkey ( 968937 )
      I might be considered a fanboy of one of these companies, and it's not Apple

      You're a fanboy of Forrester and Gartner, too?!?
  • I knew this... I purchased a few TV shows over iTunes this year, 3 to be exact. So I estimate that iTunes grew by, like, at least 300%.
    • But last year you didn't buy any TV shows over iTunes - so their TV Shows department must have grown by 3/0 * 100% (and I don't remember what 3/0 is, even if it was reported on Slashdot)
  • Gift Cards (Score:5, Insightful)

    by Foerstner ( 931398 ) on Thursday December 14, 2006 @10:45PM (#17248770)
    For that matter, the Forrester data was based on credit card payments on the iTunes Store.

    It totally ignored the little lime-green $15 gift cards that litter the checkout stands of every Target, Best Buy, CVS Pharmacy, and Kroger in the US. Each one of those is 15 songs, and fifteen purchases that don't register as credit card transactions.
    • Unless you buy them with a credit card....
      • While the poster wasn't clear, I believe he was trying to indicate that they don't show up as ITMS credit card transactions -- they are just generic retail transactions and could be from any number of products available at those stores.
    • by punkr0x ( 945364 )
      Is the percentage of songs sold through gift cards enough to make such a huge difference? Particularly outside the Christmas season?

      They are comparing two different things here. Forrester is saying, "Since January 1st, iTunes sales have fallen 65%." and ComScore is saying, "Compared to last year, iTunes sales are up 84%." Both can absolutely be correct. The ComScore numbers look like they were put together by Apple as a quick cover-up for the Forrester report. I agree that the numbers need to be looke
  • comScore is also known for installing spyware to monitor all traffic (Market Research Company Secretly Installs Spyware [slashdot.org])

    Which is apparently ok because they are getting REAL data from it!
  • by Anonymous Coward
    ...based on a random sampling of 2,000 credit card accounts...


    I ask again: from whom did they get this data?

    Is buying, selling, or redistributing such data a crime in the US, or CA in particular?

    If not, why not? I'd like to make it a crime with a nasty punishment.

    If so, has the investigation started yet?

    Grrrr.
  • by defy god ( 822637 ) on Thursday December 14, 2006 @10:55PM (#17248848)

    He concludes with this statement in his blog:

    "Finally, a word for Apple. Apple is extremely stingy with information about their business and public comment. Their unwillingness to comment on the record or off about anything they're working on or any industry results beyond the basic statistics fuels speculation, pro and con, from their supporters and detractors. In the research business we like facts -- and every other technology company is more open with them. So maybe it's time for Apple to share a bit more. When the real bad news hits -- and it's inevitable, no company gets everything right -- that openness would pay off."

    To a degree, he has a point. With Apple's secrecy, articles like these are run without having all the facts. Sensationalism becomes rampant. Then he has to go and say "In the research business we like facts." All too often we read more speculation than facts from these research companies. They complain that secretive companies like Apple or Google don't give them enough information, but I wonder where the actual "research" in the research business has gone.

    • Re: (Score:3, Insightful)

      by wass ( 72082 )
      In other words, they're complaining that Apple doesn't regularly fly these guys out on a free 'vacation' to Cupertino, feeding them luxurious 5-star dinners and hosting them in 5-star resorts, to rave about their latest vaporware hype, like other well-known software and hardware vendors do.
    • With Apple's secrecy, articles like these are run without having all the facts. Sensationalism becomes rampant.

      Sure, for a little while. But eventually people get tired of people yelling "the sky is falling". I think Google and Apple keeping their business their business is a good practice.
    • There is tons of real data available, without making stuff up. Look here, for example. It's just a few months old. [yahoo.com]

      Here are some more facts; I know several people who own iPods who have never purchased anything from iTunes. Maybe someone should extrapolate from there and say that iTunes has zero sales.

    • by MrMickS ( 568778 ) on Friday December 15, 2006 @07:21AM (#17252840) Homepage Journal
      Why research? Research is hard work. You have to check things and find corroborating evidence. It's much easier to get hold of a bit of data, slap it into a spreadsheet, draw a few graphs, and then make statements based on what you see.

      Seriously, why would Apple release real data to these people when they see what they do with the data that they can get hold of?

  • Credit Card data (Score:3, Interesting)

    by failedlogic ( 627314 ) on Thursday December 14, 2006 @11:25PM (#17249072)
    A few readers commented when the story was posted yesterday that they were wondering "how" the credit card data was obtained. It seemed from yesterday's story and the posts that Forrester Research had obtained detailed credit card transaction lists (w/o the credit card numbers, etc., I hope!).

    So, I would like to ask: how was the data obtained, and is this level of detailed information available for legal purchase? I'm just curious as to how much information is available about credit card purchases.
    • I'm curious about that too. My credit card statement info should be between me and my credit card company only (and if I had one, my accountant... and my spouse, again, if I had one). I sincerely hope that the credit card company is not giving away (or selling) detailed information. Of course, I cannot recall all of the stuff in the agreement, so maybe they can sell that info.
    • by wqwert ( 968641 )
      I checked into this. Forrester's data came from its "ultimate consumer panel." This was an opt-in panel. This article explains how the panel worked. http://www.crm2day.com/news/crm/EpAEppEyVVlCsoqPmC.php [crm2day.com]

      Forrester Research, Inc. announced the launch of Forrester's Ultimate Consumer Panel(TM) (Ultimate), a single-source, opt-in, highly secure panel that electronically captures online and offline behavior from a representative group of more than 10,000 US households.

      This panel, by the way, was recently sold to another market research company. http://www.forrester.com/ER/Press/Release/0,1769,1101,00.html [forrester.com]

      Sorry guys -- nothing scandalous. This was totally legit research. They didn't have access to *everyone's* data or even *your* data, just those who opted in.

      • Thanks, now it makes sense. I think everyone was hoping for something scandalous for tabloid reading, myself included!!

        I can now see why the group chose a sample of 1,000 persons. It was obviously a pool from their 10,000-member panel, and they should be able to calculate the validity from the larger sample.
  • by soft_guy ( 534437 ) on Thursday December 14, 2006 @11:26PM (#17249084)
    He should take a hit on credibility. He maybe should be fired. But I agree that physically injuring him is probably more than he deserves.
    • You mean he should be fired for being misquoted? Or are you insinuating that his mention that "the sample size is too small to draw conclusions" doesn't count?
    • And you are willing to take this new article at face value? Hell, this one could be flawed, too.
  • by Doc Ruby ( 173196 ) on Thursday December 14, 2006 @11:56PM (#17249388) Homepage Journal
    I've been reading Forrester, Jupiter, IDG and other pundit research papers for over a decade. They're almost always just rationalizations of some preconceived notion, some foregone conclusion that their methodology reinforces. I don't know if they plan it, or if marketing people just can't tell science from "Tang". But I don't know why anyone reads these reports expecting anything but a blast of conventional wisdom.

    Which is, of course, why everyone just takes what they write and runs with it. That's the measure of success for marketing-research peddlers. It's CIO self-perpetuation. One reason why so little ever gets done right, but so much does get done without being called wrong. To blame their own market for taking them seriously when they ought not to be is finally a whisper of honesty from these chattering weasels. I expect them to fix that in the next release.
  • "We're in ur cr3dit cardz... checking out ur iT00ns purchasez"
  • by mpaque ( 655244 ) on Friday December 15, 2006 @12:38AM (#17249976)
    From the testimony of Mr. Marc E. Kasowitz before the US Senate Committee on the Judiciary:

    One particularly effective illegal strategy involves the following scenario: the short-selling hedge fund selects a target company; the hedge fund then colludes with a so-called independent stock analyst firm to prepare a false and negative "research report" on the target; the analyst firm agrees not to release the report to the public until the hedge fund accumulates a significant short position in the target's stock; once the hedge fund has accumulated that large short position, the report is disseminated widely, causing the intended decline in the price of the target company's stock. The report that is disseminated contains no disclosure that the analyst was paid to prepare the report, or that the hedge fund dictated its contents, or that the hedge fund had a substantial short position in the target's stock. Once the false and negative research report -- misrepresented as "independent" -- has had its intended effect, the hedge fund then closes its position and makes an enormous profit, at the expense of the proper functioning of the markets, harming innocent investors who were unaware that the game was rigged, and damaging the target company itself and its employees.


    http://judiciary.senate.gov/testimony.cfm?id=1972&wit_id=5486 [senate.gov]

    Student exercise: Compare and contrast with the movement of AAPL stock shares before and after this report came out.

    • ...the report is disseminated widely, causing the intended decline in the price of the target company's stock...

      It's also interesting that this "report" came out a couple days before December stock options expiration, right when options are the cheapest so any stock moves give the greatest profit. The SEC could investigate large block options transactions in the past week, but probably won't. Too many things to do before xmas, y'know.

      • by mpaque ( 655244 )
        It's also interesting that this "report" came out a couple days before December stock options expiration

        Actually, the report was issued on December 6, presumably to whoever had ordered it. The report was then released to the press three trading days later, on Dec 11, with the first headline reports proclaiming a fall crossing the wire services later Monday.

        Hypothetically one could easily establish a large short position over three days without too much impact on the stock price, and then cover that short d
  • by Anonymous Coward
    Welcome to the "New Responsibility" where NOBODY is responsible for ANYTHING that they do!

    This guy isn't responsible for getting his data completely wrong on the iTunes story.

    The government isn't responsible for getting it wrong about WMD, for not having enough armor for our troops, for the Katrina relief efforts, or for the deficit that they keep increasing.

    The editor isn't responsible for his writer faking articles.

    And Microsoft isn't responsible for all of the holes in their security.

    Heck, I'm not even respo
  • Josh Bernoff, noted in his blog yesterday that they shouldn't be pummeled just because everyone took what he wrote and ran with it.


    Unless he, or people he knows, bought stock in Apple after spreading that "information".
  • We'll find out who's behind this faulty report very Zune.
  • The origin of most of the doom and gloom around iTunes sales was an article written on The Register (http://www.theregister.com) by Andrew Orlowski. Whilst looking around trying to find some beef behind the report, I found article after article referring to the news and linking back to the article on The Register. I couldn't find any other article that linked elsewhere or that appeared to have read the Forrester report.

    This shows two faults. The first is the lack of checking that goes on by web based servic

  • by pla ( 258480 ) on Friday December 15, 2006 @07:27AM (#17252862) Journal
    based on a random sampling of 2,000 credit card accounts,

    Ummm... Now, I harbor no delusions that my credit card history really counts as a secret - obviously my CC company has it and uses it to market bizarre crap to me, and they'd turn it over to the government without thinking twice about it.

    But how does some guy just go and "randomly sample" 2000 cards' histories? If I wanted to validate his study, could I do the same?


    Something doesn't seem right here, and I don't think most people would like the "how" either way.
    • The credit card histories were selected from a pool of people who allowed the research firm to be copied on all credit card transactions for a given window of time. This should have set off alarm bells for anyone reading the report. By relying on only those willing to hand over a history of their financial transactions, they've already fundamentally broken any chance of having both a truly random selection and a selection that is representative of the population at large.
  • a new study shows that studies are not factual at all and are used as propaganda by large powerful companies.
  • ...analysts have deduced that Apple's revenues from sales of OS X are negligible compared to sales of Macs, and have concluded from this that Apple's death is imminent.
  • How stupid do you have to be to believe that a random sample of only 2,000 credit card accounts could show any kind of trend as it relates to iTunes sales? Only a small fraction of people with credit cards buy music on iTunes anyway. A random sample of 2,000 isn't going to show you any kind of trend when the overall number of people who use their card for that kind of activity is so small. You'd either need a much, much larger random sample or a targeted sample of likely online music buyers to dete
  • by PopeJM ( 956574 )
    I remember reading the previous article and thinking that this follow-up was forthcoming.
  • iTunes sales have always been doing well, never poorly. Last quarter an increase in demand in songs was met with success. As a result, the download quota has been increased from 5 to 3 ^H^H^H^H^H from 2 to 3 songs per month!

    Death to Goldstein!
  • There are three kinds of lies in this world: lies, damn lies, and statistics.
    • I've only got a basic understanding of statistics, but I believe this folk wisdom is an unfair comment. Statistics are an excellent tool to summarize data and make predictive statements, but they're only as good as the data put into them. Only people can lie.
  • by mschuyler ( 197441 ) on Friday December 15, 2006 @02:41PM (#17258920) Homepage Journal
    I thought it would be fun to compare Slashdot comments on the previous posting to see how many geniuses out there fell for it with "I told you so," "It's because Apple is a big meanie," "Songs are no good," and similar contributions. But I have to say, after reading through the previous posting's comments, that though there were a few like the above, the vast majority of Slashdotters called it correctly and said the previous study was flawed, giving all the reasons why. Impressive!

"If it ain't broke, don't fix it." - Bert Lantz

Working...