iTunes Sales Not 'Collapsing' After All

john82 writes "Earlier this month we had a report from Forrester, based on a random sampling of 2,000 credit card accounts, that purported to show that iTunes sales were crashing. Now comes another survey from Reston, VA-based ComScore which indicates the exact opposite. ComScore's report, which is based on actual iTunes sales, shows an 84% increase during the first nine months of this year compared to the same period last year. Meanwhile the author of the Forrester report, Josh Bernoff, noted in his blog yesterday that they shouldn't be pummeled just because everyone took what he wrote and ran with it."
  • by BWJones ( 18351 ) * on Thursday December 14, 2006 @10:16PM (#17248488) Homepage Journal
    Meanwhile the author of the Forrester report, Josh Bernoff, noted in his blog yesterday that they shouldn't be pummeled just because everyone took what he wrote and ran with it.

    Well, that is why people should be responsible for their reporting. In my business, when you report something, you stand by it. If you present data or a theory suspecting that it is incorrect, that is fraud in my line of work. Seriously though, did you *really* think that just over 1,000 credit card purchases obtained through a back-channel source make a reliable sample for estimating the number of iTunes purchases? If I recall correctly, Apple announced back in February that they were selling about 3 million songs/day, and if the current estimates of increases on the order of 84% are correct, your sample is woefully under-representative. That's just high school statistics, by the way...

    I am not saying that you should lose your job over this one, but it should be a tacit reminder of how important good reporting is. If you are beyond your means or competence on a particular story or analysis, go find some help before you publish: do some fact checking, and be more careful with stories that can have a significant impact on companies and individuals.

  • by RichPowers ( 998637 ) on Thursday December 14, 2006 @10:28PM (#17248610)
    Indeed, he should stand by his work. But plenty of websites and blogs weren't afraid to use Mr. Bernoff's report to drum up an Apple doom-and-gloom story for the sake of attracting readers.
  • by ceoyoyo ( 59147 ) on Thursday December 14, 2006 @10:29PM (#17248616)
    A sample size of 1000 isn't necessarily inadequate... it depends on how much variance there is in the population and how big a change you're seeing. Those factors are normally summarized with a p-value, indicating how likely it is that the difference you're seeing is just chance.

    Naturally they didn't collect enough data to calculate a p-value... THAT was their mistake. Of course, nobody seems to do that, so really, it's par for the course. (A quick sketch of what such a test looks like follows this comment.)
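
    A minimal sketch of the kind of two-sample significance test the comment above describes, in Python with NumPy and SciPy. The daily sales figures, means, and variances below are all invented for illustration; nothing here comes from Forrester's or ComScore's actual data:

        import numpy as np
        from scipy import stats

        # Invented daily sales for two 90-day periods (illustration only).
        rng = np.random.default_rng(42)
        last_year = rng.normal(loc=1000, scale=300, size=90)
        this_year = rng.normal(loc=1150, scale=300, size=90)

        # Two-sample t-test: how likely is a gap in means this large
        # if the two underlying populations are actually the same?
        t_stat, p_value = stats.ttest_ind(this_year, last_year)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
        # A small p-value (conventionally < 0.05) means the observed change
        # is unlikely to be sampling noise alone.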
  • by joe_bruin ( 266648 ) on Thursday December 14, 2006 @10:39PM (#17248718) Homepage Journal
    Technology sector analysts, the likes of Forrester and Gartner, are essentially paid mouthpieces for their biggest clients. Whether pumping your own products or badmouthing the competition, you can count on these guys to earn their money with totally bogus conclusions.

    Find a big analyst company that will admit that Itanium is a colossal disaster, that businesses don't want and don't need Vista, that HP's supply line trouble and incompetent management are sinking the company (particularly during the Carly years), or that Oracle is terribly insecure. You won't, because they all have contracts with Intel, Microsoft, HP, Oracle, etc. But they won't hesitate to beat up on Sun (how many times have they called for McNealy's resignation?), AMD, Apple, and others that don't spend that kind of money on analysis contracts, and to predict their doom*.

    So sure, iTunes sales are collapsing (according to Forrester), but nobody will call Zune a turd. It's all in a day's work.

    *disclaimer: I might be considered a fanboy of one of these companies, and it's not Apple
  • Gift Cards (Score:5, Insightful)

    by Foerstner ( 931398 ) on Thursday December 14, 2006 @10:45PM (#17248770)
    For that matter, the Forrester data was based on credit card payments on the iTunes Store.

    It totally ignored the little lime-green $15 gift cards that litter the checkout stands of every Target, Best Buy, CVS Pharmacy, and Kroger in the US. Each one of those is 15 songs: 15 purchases that never register as credit card transactions.
  • by defy god ( 822637 ) on Thursday December 14, 2006 @10:55PM (#17248848)

    He concludes with this statement in his blog:

    "Finally, a word for Apple. Apple is extremely stingy with information about their business and public comment. Their unwillingness to comment on the record or off about anything they're working on or any industry results beyond the basic statistics fuels speculation, pro and con, from their supporters and detractors. In the research business we like facts -- and every other technology company is more open with them. So maybe it's time for Apple to share a bit more. When the real bad news hits -- and it's inevitable, no company gets everything right -- that openness would pay off."

    To a degree, he has a point. With Apple's secrecy, articles like these run without all the facts, and sensationalism becomes rampant. But then he has to go and say "In the research business we like facts." All too often we read speculation rather than facts from these research companies. They complain that secretive companies like Apple or Google don't give them enough information, but I wonder where the actual "research" in the research business has gone.

  • by kaleth ( 66639 ) on Thursday December 14, 2006 @11:14PM (#17248992)
    Did you actually read the article? I'm not saying you should be banned from posting again over this one, but this should be a reminder of how important it is to have a clue before commenting. You clearly neglected to do any fact checking yourself. From the blog:

    "Our credit card transaction data shows a real drop between the January post-holiday peak and the rest of the year, but with the number of transactions we counted it's simply not possible to draw this conclusion . . . as we pointed out in the report."

    Seems to me like a pretty clear admission that the sample size is too small to be reliable. He took the data he had available, analyzed it, and presented the results while noting the deficiencies in the method. Doesn't sound much like fraud to me. That's just grade school reading, by the way...

  • by wass ( 72082 ) on Thursday December 14, 2006 @11:20PM (#17249034)
    In other words, they're complaining that Apple doesn't regularly fly these guys out on a free 'vacation' to Cupertino, feeding them luxurious dinners and hosting them in 5-star resorts, to rave about their latest vaporware hype, like other well-known software and hardware vendors do.
  • by lottameez ( 816335 ) on Thursday December 14, 2006 @11:32PM (#17249126)
    I think everyone can also figure out that Apple probably didn't pay Forrester enough "research" fees.
  • by Doc Ruby ( 173196 ) on Thursday December 14, 2006 @11:56PM (#17249388) Homepage Journal
    I've been reading Forrester, Jupiter, IDG and other pundit research papers for over a decade. They're almost always just rationalizations of some preconceived notion, some foregone conclusion that their methodology reinforces. I don't know if they plan it, or if marketing people just can't tell science from "Tang". But I don't know why anyone reads these reports expecting anything but a blast of conventional wisdom.

    Which is, of course, why everyone just takes what they write and runs with it. That's the measure of success for marketing research peddlers. It's CIO self-perpetuation, and one reason why so little ever gets done right but so much gets done without being called wrong. To blame their own market for taking them seriously when they ought not to be is finally a whisper of honesty from these chattering weasels. I expect them to fix that in the next release.
  • by Divebus ( 860563 ) on Thursday December 14, 2006 @11:56PM (#17249398)

    "Look at the trends here"

    Yes, there was a massive spike last Xmas that hasn't been exceeded during the 11.5 months that followed. Indeed, if you draw a line from that peak to the present, iTunes queries are down from a year ago. It's proof positive - especially if you don't know a fucking thing about statistics!!

    I can't find my ass with both hands around statistics and even I can see what's wrong with Forrester's report. So, Forrester my ass.

  • Re:Oh noes! (Score:1, Insightful)

    by Anonymous Coward on Friday December 15, 2006 @12:21AM (#17249758)
    Ahehe. You earned that "Flamebait" fair and square, fat gekko
  • by ceoyoyo ( 59147 ) on Friday December 15, 2006 @12:40AM (#17249992)
    Problem is, you have to know that critical "how many purchases are iTunes purchases" value.

    The proper place to start is by taking last year's data (or last month's, or whatever) and measuring that value, then measuring it again today. Then you can ask whether iTunes sales have changed. Once you've shown there's a high probability that they've decreased (that might take ten samples, or it might take millions), THEN you can talk about how much they've decreased, and what sort of error bars go on that value (sketched after this comment).

    According to the original story they DID that, collecting data over 27 months... the difference between +80% and -60% is pretty huge... either they didn't do a simple t-test on their data, this was a VERY rare fluke, or they decided to release their numbers anyway.

    It definitely sounds like they were paid for their result... I wonder if maybe they didn't expect someone with much better data to come around so quickly to slap them down.

    I also like the quote in the article about iTunes' 1 billion dollars in sales not making up for the 2.5 billion dollar decrease in CD sales. Sounds about right to me. I doubt I'd purposely pay for more than a third of most albums.
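
    For the "error bars" step the comment above describes, here is a rough sketch of a 95% confidence interval on the change in mean sales, again in Python with NumPy and SciPy. Every number below is invented purely for illustration:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        last_year = rng.normal(loc=1000, scale=300, size=90)  # invented data
        this_year = rng.normal(loc=1150, scale=300, size=90)  # invented data

        n1, n2 = last_year.size, this_year.size
        diff = this_year.mean() - last_year.mean()

        # Pooled variance and standard error of the difference in means.
        sp2 = ((n1 - 1) * last_year.var(ddof=1) +
               (n2 - 1) * this_year.var(ddof=1)) / (n1 + n2 - 2)
        se = np.sqrt(sp2 * (1 / n1 + 1 / n2))

        margin = stats.t.ppf(0.975, df=n1 + n2 - 2) * se
        print(f"change in mean: {diff:.0f} +/- {margin:.0f} songs/day (95% CI)")
        # If the interval is wide enough to include zero, the data cannot
        # distinguish "sales up" from "sales down" -- which is the point.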
  • by lendude ( 620139 ) on Friday December 15, 2006 @01:09AM (#17250304)
    That sentence you quote is having it both ways:

    "Our credit card transaction data shows a real drop (my emphasis) between the January post-holiday peak and the rest of the year, but with the number of transactions we counted it's simply not possible to draw this conclusion . . . as we pointed out in the report."

    There is no way that he can use the words "...real drop..." in the same sentence as "...it's simply not possible to draw this conclusion...". Whilst those who uncritically took the information from this 'research' and used it (doubtless with some sensationalist agenda in mind) deserve scorn, that very sentence demonstrates the research to be nothing more than PR to flog the thing at $249.00 a pop. If you take out the words "real drop" and substitute "no meaningful change", then this report was clearly worth fuck-all: at least in terms of the author's now-visible desire to have something sexy to sell!

  • by kfg ( 145172 ) on Friday December 15, 2006 @03:21AM (#17251422)
    Well, that is why people should be responsible for their reporting.

    Dude, it's a think tank "report." They deal in the amorphous and write it in weasel; 'cause it's a living paid for by the brainless. Put it under a rhetorical microscope and there's little there to be responsible for.

    The real title of this story should be "Think Tanker admits he shits for money."

    KFG
  • by MrMickS ( 568778 ) on Friday December 15, 2006 @07:21AM (#17252840) Homepage Journal
    Why research? Research is hard work. You have to check things and find corroborating evidence. It's much easier to get hold of a bit of data, slap it into a spreadsheet, draw a few graphs, and then make statements based on what you see.

    Seriously, why would Apple release real data to these people when they see what they do with the data that they can get hold of?

  • by pla ( 258480 ) on Friday December 15, 2006 @07:27AM (#17252862) Journal
    based on a random sampling of 2,000 credit card accounts,

    Ummm... Now, I harbor no delusions that my credit card history really counts as a secret. Obviously my CC company has it and uses it to market bizarre crap to me, and they'd turn it over to the government without thinking twice about it.

    But how does some guy just go and "randomly sample" 2000 cards' histories? If I wanted to validate his study, could I do the same?


    Something doesn't seem right here, and I don't think most people would like the "how" either way.
  • by MrMickS ( 568778 ) on Friday December 15, 2006 @07:31AM (#17252874) Homepage Journal
    Most people didn't refer to his report; rather, they referred to an article on The Register [theregister.co.uk]. The author of The Register article has a history of anti-iTunes Store articles and an anti-DRM agenda, and he took what he wanted from the report to back up his viewpoint. The real problem is the way this was swallowed by everyone else without checking the source themselves.

    Sadly this seems to be the deal in journalism at the moment. Everything is sacrificed in order to be first to publish or, if not first, then not too far behind. Accuracy goes out the window in the race.

  • by ceoyoyo ( 59147 ) on Friday December 15, 2006 @01:33PM (#17257940)
    You're not correct. To determine whether two populations are different (which is what these guys are trying to do), you need to know both the variance in the population and the two means. Since you have to know the entire population to actually measure either, you estimate them, using a sample of that population.

    As the number of samples increases, your estimates improve. However, you can't just say "you need a 10% sample" to be accurate. It doesn't work that way. The size of the sample you need depends on how much the populations vary and how far apart the means are.

    Look at what you've said about the p-value: "[The p-value] is a measure of how likely your results are do [sic] to chance." The p-value is a measure of how likely it is that your result is due simply to chance, i.e., that it is spurious.

    The situation is a bit more complicated for determining confidence intervals (which should have been these guys' SECOND step). It still doesn't depend on the magnitude of the population, though. Here is a page describing the process if you're interested; note that it doesn't depend on the population size.

    Also, this [wikipedia.org] talks about some of the common rules of thumb that are used. Note that they don't depend on the magnitude of the population either.

    I hate to tell you, but in any important application you can think of, nobody uses sample sizes anywhere near 10%. I work in medical studies, and it's considered unethical to enroll more subjects in a study than are required. So you do a little pilot study to estimate the characteristics of the populations, then run the numbers to see how big N needs to be to achieve a significant result, and then you propose THAT number to the ethics committee as your sample size. (A sketch of that calculation follows below.)
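
    A sketch of the sample-size (power) calculation described above, using statsmodels' power analysis. The effect size here is a made-up pilot-study estimate, purely for illustration:

        from statsmodels.stats.power import TTestIndPower

        # Suppose a pilot study suggests a standardized effect size
        # (Cohen's d) of 0.3 -- an invented number for illustration.
        analysis = TTestIndPower()
        n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
        print(f"need about {n_per_group:.0f} subjects per group")
        # For d = 0.3 this works out to roughly 175 subjects per group; the
        # smaller the expected effect, the larger the N you must justify.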
  • by mschuyler ( 197441 ) on Friday December 15, 2006 @02:41PM (#17258920) Homepage Journal
    I thought it would be fun to compare Slashdot comments on the previous posting to see how many geniuses fell for it with "I told you so," "It's because Apple is a big meanie," "Songs are no good," and similar contributions. But I have to say, after reading through the previous posting's comments, that while there were a few like the above, the vast majority of Slashdotters called it correctly and said the previous study was flawed, giving all the reasons why. Impressive!
