Web Copyright Crackdown On the Way

Hugh Pickens writes "Journalist Alan D. Mutter reports on his blog 'Reflections of a Newsosaur' that a coalition of traditional and digital publishers is launching the first-ever concerted crackdown on copyright pirates on the Web. Initially targeting violators who use large numbers of intact articles, the first offending sites to be targeted will be those using 80% or more of copyrighted stories more than 10 times per month. In the first stage of a multi-step process, online publishers identified by Silicon Valley startup Attributor will be sent a letter informing them of the violations and urging them to enter into license agreements with the publishers whose content appears on their sites. In the second stage, Attributor will ask hosting services to take down pirate sites. 'We are not going after past damages' from sites running unauthorized content, says Jim Pitkow, the chief executive of Attributor. The emphasis, Pitkow says, is 'to engage with publishers to bring them into compliance' by getting them to agree to pay license fees to copyright holders in the future. Offshore sites will not be immune from the crackdown: almost all of them depend on banner ads served by US-based services, and the DMCA requires the ad service to act against any violator. Attributor says it can interdict the revenue lifeline at any offending site in the world." One possible weakness in Attributor's business plan, unless they intend to violate the robots.txt convention: they find violators by crawling the Web.

  • Robots.txt (Score:3, Insightful)

    by Jaysyn ( 203771 ) on Friday March 05, 2010 @09:39AM (#31370740) Homepage Journal

    I'm sure these guys have no compunction about ignoring robots.txt if doing so makes them money.

    • Re:Robots.txt (Score:5, Insightful)

      by yincrash ( 854885 ) on Friday March 05, 2010 @09:43AM (#31370790)
      Seriously. Following robots.txt is not law, only convention. I'm sure it doesn't take much for them to convince themselves to ignore it. Money, "doing the right thing", etc. If you view the copyright infringers as pirates, then why should Attributor follow their wishes?
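
      A minimal sketch of just how voluntary the convention is, using Python's standard robots.txt parser (the URLs and user-agent string are placeholders):

      ```python
      from urllib import robotparser

      rp = robotparser.RobotFileParser()
      rp.set_url("http://example.com/robots.txt")  # placeholder site
      rp.read()  # fetch and parse the file

      # A polite crawler asks first; an impolite one simply skips this call.
      if rp.can_fetch("ExampleBot/1.0", "http://example.com/articles/story.html"):
          print("allowed")
      else:
          print("disallowed -- but only good manners enforces it")
      ```
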
      • First on the chopping block:
        Slashdot for its copy-pasted copies of linked blogs with copy-pasted copies of magazine articles copy-pasted directly from press releases.

        • by tepples ( 727027 )

          Slashdot for its copy-pasted copies

          News publishers using Attributor probably won't attack Slashdot for excerpting one paragraph from a ten-paragraph story any time soon. From the summary:

          the first offending sites to be targeted will be those using 80% or more of copyrighted stories

          • I'm fairly sure they quote the entirety of very small articles every now and then.
            More than a few times a month? Absolutely!

            I'm curious whether they're going to start hitting forums when people do the "hey, look at this, guys" quote of a news article.
            It could really hurt a lot of free forums.

            • by Jaysyn ( 203771 )

              A lot of forums require credentials to view & have systems in place to keep automated accounts from being generated.

        • by Jaysyn ( 203771 )

          That would be interesting. While Google has a lot more money, Geeknet would be a much softer target.

        • Re: (Score:3, Funny)

          by clone53421 ( 1310749 )

          More work for the /. editors! Horror!~

      • Re:Robots.txt (Score:4, Interesting)

        by Registered Coward v2 ( 447531 ) on Friday March 05, 2010 @10:49AM (#31371642)

        Seriously. Following robots.txt is not law, only convention. I'm sure it doesn't take much for them to convince themselves to ignore it. Money, "doing the right thing", etc. If you view the copyright infringers as pirates, then why should Attributor follow their wishes?

        I'd go even further and say that sites that use robots.txt to eliminate crawling are probably not major targets - if they don't show up in search engines then they probably don't generate enough traffic to be worth the effort. Sites that are high-traffic are much better targets - their revenue stream from ads is probably significant enough that they don't want to risk losing it. Once enough fall into line they can worry about the ones that are not indexed - in fact, they may just want to kill them off to preserve traffic to licensed sites.

      • Re: (Score:3, Informative)

        Anyone interested in finding out what's really going on with a website would look at robots.txt first and ask themselves 'now why do they want the robots to avoid these pages?'

        Of course, some of those entries will be dead-ends (dynamic pages that make no sense to crawl, password-protected pages that would detract from a site's rankings, etc...).

        What's going to be interesting is what happens when people identify their method and/or the IP addresses they're using to make those identifications. There is no way t

    • Re:Robots.txt (Score:5, Insightful)

      by notgm ( 1069012 ) on Friday March 05, 2010 @09:43AM (#31370798)

      Is there some written law that holds people to following robots.txt? If not, how is it even possible to call it a weakness?

      • nah, it's just considered bad manners.

        • I'm quite sure the people whose websites' robots.txt is being ignored by a crawler are going to express it in somewhat more hostile terms than that.

      • Re: (Score:2, Interesting)

        by Joe U ( 443617 )

        If they are going to extend the DMCA to other countries, then let's extend computer trespassing laws to cover robots.txt violations.

        I'm being somewhat serious (but not super-serious). If courts want to hold that a website TOS is binding, then isn't the robots.txt binding as well?

        • If a website's TOS is binding, why not just put what's in the robots.txt file into the TOS in legalese, or state in the TOS that robots.txt must be obeyed?
          • Re: (Score:3, Interesting)

            by Joe U ( 443617 )

            That's the point I was trying to make. I posted this somewhere else:

            http://blog.internetcases.com/2010/01/05/browsewrap-website-terms-and-conditions-enforceable/ [internetcases.com]

            So now you can turn around and sue them for crawling your site if you specifically disallow it in the terms and robots.txt.

            The results should be interesting to watch.

            • Re: (Score:3, Insightful)

              by nacturation ( 646836 ) *

              Right... because a judge will find that offer, consideration, and acceptance of a contract took place between a webserver and a bot? The court case you cite is irrelevant to an automated program that has no understanding and cannot accept conditions presented online.

              • Re:Robots.txt (Score:4, Insightful)

                by Joe U ( 443617 ) on Friday March 05, 2010 @12:59PM (#31373260) Homepage Journal

                Right... because a judge will find that offer, consideration, and acceptance of a contract took place between a webserver and a bot? The court case you cite is irrelevant to an automated program that has no understanding and cannot accept conditions presented online.

                Awesome, so anyone can DoS a server, send mass spam or distribute a virus as long as a bot does it, because a judge will rule that the bot acted on its own and wasn't developed or set loose by anyone at all.

                If the software wrote itself you might have a point, otherwise the people who wrote it are the ones responsible for how it acts.

        • If a robot passes the Turing test, does it have to check robots.txt before it crawls the website?
          If I manually crawl through all the pages on their site and bookmark all the links, am I a robot?

          Such difficult questions... how on earth would we legislate something like this?

      • by Xest ( 935314 )

        Because if they use a robot, you can just identify it and feed it shit.

        It won't be long before people know the details of their crawlers and can just serve them something random.
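
        For instance, here's a toy sketch with Python's standard library ("AttributorBot" is a made-up User-Agent string, since nobody outside Attributor knows what their crawler actually sends):

        ```python
        from http.server import BaseHTTPRequestHandler, HTTPServer

        SUSPECT_UA = "AttributorBot"  # hypothetical; the real string isn't public

        class CloakingHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Garbage for the suspected crawler, the real page for everyone else.
                if SUSPECT_UA in self.headers.get("User-Agent", ""):
                    body = b"lorem ipsum dolor sit amet, randomly generated filler"
                else:
                    body = b"the actual article text goes here"
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("", 8000), CloakingHandler).serve_forever()
        ```
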

    • If the infringing sites have a robots.txt that tells all crawlers to skip them, they will not show up in search engines. If they single out Attributor's crawler's user-agent string, they will look very suspicious.
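
      To make that concrete, the conspicuous robots.txt would look something like this ("AttributorBot" being a stand-in for whatever the real crawler calls itself):

      ```
      # Welcome mat for the big search engines (an empty Disallow permits everything)...
      User-agent: Googlebot
      Disallow:

      # ...and a locked door for one specific crawler - exactly what looks suspicious.
      User-agent: AttributorBot
      Disallow: /
      ```
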
      • What if they only allow known crawlers from major search engines?

      • by rnturn ( 11092 )

        That's only if their web crawler even looks at robots.txt. It's not required, only a courtesy. I'm sure they'll not be so courteous, and will claim they need to ignore it because the violators they're looking for would block them anyway.

        The sure-fire way to keep them out would be to find out what IP address Attributor is using and block it at your firewall. The trouble with that is they could easily change their IP address or even employ something akin to a botnet to do their web crawling so that thei

    • Don't worry. They're going to break a lot of laws, break a lot of legs, and basically commit suicide. At least when it's through we'll have fewer dinosaur industries to deal with.

      They're literally planning to go to domain providers and threaten DMCA to get content taken down. Instead of, you know, DMCA'ing the website appropriately, this is an end run around the legal process. Expect a quick smackdown. Why they would host such a company in California of all places to do this, where Cali is the most clear abou

    • 1) Put up a file sharing site with lots of music and movie files.
      2) Craft a robots.txt to keep out the RIAA and MPAA.
      ...
      Profit!!!

      Robots.txt is a convention that was never intended to restrict checking for illegal content. The idea behind robots.txt is only to keep site indexers such as Google, Yahoo, etc. out of certain directories.

      Cheers,
      Dave
  • DMCA.. (Score:3, Interesting)

    by ltning ( 143862 ) <{ltning} {at} {anduin.net}> on Friday March 05, 2010 @09:42AM (#31370774) Homepage

    What on earth is the DMCA supposed to achieve, in the context of Ad-providers?

    Sounds pretty scary to me.

    • by Yvan256 ( 722131 )

      In this case they're referring to the Downloadable Media Computer Advertising.

    • by julesh ( 229690 )

      What on earth is the DMCA supposed to achieve, in the context of Ad-providers?

      Sounds pretty scary to me.

      Agreed. I've never heard of this, and a quick scan of the legislation doesn't turn up anything that appears to relate to this; the categories of service it regulates appear to be (a) telecoms providers transmitting data at user request, (b) those hosting temporary copies of content (e.g. caches), (c) those hosting content at the request of third parties, and (d) search engines, directories and other link

  • by KnownIssues ( 1612961 ) on Friday March 05, 2010 @09:54AM (#31370934)
    Sounds like they've learned their lesson from the RIAA. I'm not saying I agree with them or think they are right to do this. But if you're going to try to enforce your interpretation of the law, this is at least a sane philosophy of doing so. Not going after damages is a smart move.
  • by elrous0 ( 869638 ) * on Friday March 05, 2010 @09:54AM (#31370936)
    A lot of aggregator sites like this one base much of their topical content on articles printed elsewhere. While most (incl. /.) don't print whole articles intact, a lot of them do quote heavily (what used to be called "fair use," back when that phrase actually meant anything). So their first step is to go after the sites that reprint the articles whole-cloth. But will they stop there?
    • by c-reus ( 852386 )

      Initially targeting violators who use large numbers of intact articles

      (emphasis mine)

      No, they will not stop there.

    • Re: (Score:3, Funny)

      by Yvan256 ( 722131 )

      And will Slashdot be targeted again and again? (you know... all the dupes)

    • by MtHuurne ( 602934 ) on Friday March 05, 2010 @10:30AM (#31371392) Homepage

      Unless an article is very short, quoting 80% of it is not fair use. So for now, I think they have every right to take steps against sites making money from their content without compensation.

      Yes, I am cynical enough to expect the reasonable 80% limit to be lowered over time until it reaches unreasonable levels. But let's hold the flames until they have actually crossed that line.

      • by elrous0 ( 869638 ) *
        By the time they actually cross that last line, I suspect it will be too late.
        • In my opinion preemptive protests against valid copyright enforcement only weaken the argument against copyright abuse.

      • Unless an article is very short, quoting 80% of it is not fair use.

        Well, that depends on the circumstances. The amount and substantiality of the portion of the work used is one factor in determining whether the use is fair, but there isn't a hard number.

    • Re: (Score:3, Insightful)

      by Abcd1234 ( 188840 )

      Since when did Slashdot ever use 80% of an article verbatim?

      Sorry, no, any website doing *that* should be shut down. I hate those assholes. They're the reason why a search for a given term in Google pops up thousands of sites with the *exact same content*, just ripped from one another.

      • by elrous0 ( 869638 ) *
        Yes, but 80% is where they're *starting*. I'm asking if that's where they're going to *end* it.
        • Well, given that the fair use doctrine still exists, there will always be a lower bound below which their legal actions will no longer have any basis.

        • by natehoy ( 1608657 ) on Friday March 05, 2010 @11:56AM (#31372466) Journal

          80% is a reasonable starting point. If they start lowering it, we'll have to express our righteous indignation then. Fair use, as generally interpreted, is a LOT lower than routinely cutting-and-pasting 80% of articles, so they have a long way to lower it before we can honestly call our indignation righteous.

          Seriously, this really isn't a "slippery slope" situation. It seems to be a well-thought-out and sane set of guidelines. If anything, they are being a bit generous for now, and they can still tighten this quite a bit without coming close to busting "fair use" or even "reasonable use".

          Basically they are saying, "if you routinely use 80%+ of our articles as your own content, we're asking you to stop. We won't sue you for any past uses, we just want to make it clear that this isn't cool any more."

          A fair usage (note the lack of quotes; I am not talking about the legal doctrine) would be to use about 20% of the source article (properly attributed) with a link back to the original article. Give credit where it's due (and cite your sources). Then add your own thoughts, or don't. But don't take whole-cloth articles and post them on your own site with your own ads.

          Every discussion board I've ever participated in has pretty much recommended some close variant of this anyway. It usually reads something like "cite a paragraph or two at most and have a link to the source article plainly visible nearby".

  • All this harassment is going to do is push small internet publishers around the globe to services in other countries. Datacenters and ad services in the U.S. will lose customers. There are already strong companies serving those areas in the EU, and the EU will be happy to receive that amount of business.

    The stupor of American corporatism is overwhelming. They will even go to the extent of shooting themselves in the foot.

    • by Jaysyn ( 203771 )

      All we can do is sit back & watch the fireworks.

    • by Sockatume ( 732728 ) on Friday March 05, 2010 @10:03AM (#31371066)

      Are you kidding? ACTA's going to harmonise everything so closely to the US that they'll be able to prosecute anyone.

      • Re: (Score:3, Insightful)

        by cpghost ( 719344 )
        Yes, but ACTA is not the whole world.
        • When did that fact ever stop the US from doing whatever the hell it wants?

        • No. Just the part of the world that's interested in doing any business with any other part of the world.

          Yes, it may not include North Korea.

          • Re: (Score:3, Informative)

            by cpghost ( 719344 )
            According to this [wikipedia.org], only Australia, Canada, USA, EU, Japan, South Korea, Mexico, Morocco, New Zealand, Singapore and Switzerland are currently part of that treaty. This (currently) leaves more than enough room for a whole lot of other countries (some of them as big as Russia and China) that are not part of it.
      • I don't think that France, Germany, Spain, the Scandinavian countries, and the rest of the EU will just sit back and accept the U.S. as the dominator of the world's information.

      • by julesh ( 229690 )

        Are you kidding? ACTA's going to harmonise everything so closely to the US that they'll be able to prosecute anyone.

        If you think Vanuatu et al are going to be signing up to ACTA, then I want some of what you're smoking.

        Sure, most of the large economies will probably be signing, but there's no reason not to base an Internet business on a little island somewhere nice with friendly laws (and, as a nice side benefit, zero taxation).

    • As I understand it, advertisers targeting readers in the United States tend to choose ad networks that operate or at least have some sort of assets in the United States, not ad networks that operate in the European Union. Advertisers who target readers in the European Union probably will not want to pay to reach readers in the United States, especially for a product not available in the United States.
      • So... when the ad is placed, the customer selects the target country/region. Using IP addresses takes care of the rest. Yes, there will be some misdirection, but on the whole it works.
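
        A sketch of the IP half of that, using MaxMind's geoip2 library (the database path and address are placeholders; this assumes a GeoLite2 country database on disk):

        ```python
        import geoip2.database  # pip install geoip2

        # Path and IP are placeholders (203.0.113.0/24 is a documentation range).
        reader = geoip2.database.Reader("/path/to/GeoLite2-Country.mmdb")
        country = reader.country("203.0.113.7").country.iso_code

        # Serve whichever campaign the advertiser bought for this region.
        print("US campaign" if country == "US" else "rest-of-world campaign")
        ```
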
        • by tepples ( 727027 )

          So... when the ad is placed, the customer selects the target country/region.

          So I take it you're imagining an EU-based ad network that deals with advertisers in foreign markets. But how would such an ad network efficiently deal with US advertisers while having zero assets in the US or in any other country with a takedown system remotely like that of the US?

          • You log on to the advertiser's site and submit your ad along with your credit card number. And you do it from a country that is not subject to US or US-like control. Remember, with the internet you do not need a brick-and-mortar or even a flesh-and-blood presence to do business.
  • Please do so (Score:5, Insightful)

    by OzPeter ( 195038 ) on Friday March 05, 2010 @09:59AM (#31370994)
    And in the process take down all those inane blogs whose sole purpose is to scrape and repost articles so they get an advertising hit.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      While they're at it, can they take down forum/mailinglist mirrors too?

      It is extremely annoying when searching to find that the top 30 results all contain the exact same forum or blog post.

    • Re:Please do so (Score:5, Insightful)

      by garcia ( 6573 ) on Friday March 05, 2010 @10:22AM (#31371292)

      And in the process find all the commercial sites using my copyrighted Flickr photos for their own purposes without my permission or payment. I'm tired of sending invoices and dealing with companies who tell you that your photo wasn't worth the $300 you charge and instead send you $50 thinking that it will clear up the matter.

      I love the hypocrisy of all of this. They are just as much at fault as any of those aggregation blogs. They just have more money to be a pain in the ass.

      • Re: (Score:3, Interesting)

        by clone53421 ( 1310749 )

        I'm tired of sending invoices and dealing with companies who tell you that your photo wasn't worth the $300 you charge and instead send you $50 thinking that it will clear up the matter.

        They’re basically giving you the finger. Don’t fuck around playing their little games... show them you mean business. Slap on a surcharge to cover your additional expense and send their name and remaining balance to a debt collector. It’s probably cheaper and less of a hassle than suing them in small claims court.

        IANAL... you may want to ask a real lawyer what your options are, but seems to me you have a few.

        • by garcia ( 6573 )

          It's easier to make an ass out of them on the Internet. Twitter is an effective tool (especially with Google indexing it in real time) in the fight against these assholes.

          I eventually did get paid by a newspaper (including late charges) after three months. I have not been so successful with other businesses using my images in their marketing materials w/o my permission.

          Oh and debt collection (when it's $300) isn't worth my time--neither is small claims.

          • Eh, I’ve been sent to debt collection over $50 doctor visits whose bills got lost...

            I mean, after the initial reaction (seriously?), I called them up and paid it. But yeah... seriously?

            In any case I’d think $300 is significant enough to justify going after them with some heavier ammo than just bad rep on Twitter.

  • "Offshore sites will not be immune from the crackdown: almost all of them depend on banner ads served by US-based services, and the DMCA requires the ad service to act against any violator. "

    Not sure this is such a great idea - when you're broke, you don't cut off the little income you're still getting... I'm inclined to think that in the near future things will more likely go in the opposite direction: grey-legal stuff will be fully legalized to provide as much extra economic stimulus as possible.

  • by mdemonic ( 988470 ) on Friday March 05, 2010 @10:05AM (#31371088)

    A coalition of traditional and digital publishers this month will launch the first-ever concerted crackdown on copyright pirates on the web, initially targeting violators who use large numbers of intact articles.

    Details of the crackdown were provided by Jim Pitkow, the chief executive of Attributor, a Silicon Valley start-up that has been selected as the agent for several publishers who want to be compensated by websites that are using their content without paying licensing fees.

    In a telephone interview yesterday, Pitkow declined to identify the individual publishers in his coalition, but said they include “about a dozen” organizations representing wire services, traditional print publishers and “top-tier blog networks.”

    The first offending sites to be targeted will be those using 80% or more of copyrighted stories more than 10 times per month.

    In the first stage of a multi-step process aimed at encouraging copyright compliance instead of punishing scofflaws, Pitkow said online publishers identified by his company will be sent a letter informing them of the violations and urging them to enter into license agreements with the publishers whose content appears on their sites.

    If copyright pirates refuse to pay, Attributor will request the major search engines to remove offending pages from search results and will ask banner services to stop serving ads to pages containing unauthorized content. The search engines and ad services are required to immediately honor such requests by the federal Digital Millennium Copyright Act (DMCA).

    If the above efforts fail, Attributor will ask hosting services to take down pirate sites. Because hosting services face legal liability under the DMCA if they do not comply, they will act quickly, said Pitkow.

    “We are not going after past damages” from sites running unauthorized content, said Pitkow. The emphasis, he said, is “to engage with publishers to bring them into compliance” by getting them to agree to pay license fees to copyright holders in the future.

    License fees, which are set by each of the individual organizations producing content, may range from token sums for a small publisher to several hundred dollars for yearlong rights to a piece from a major publisher, said Pitkow.

    Attributor identifies copyright violators by scraping the web to find copyrighted content on unauthorized sites. A team of investigators will contact violators in an effort to bring them into compliance or, alternatively, begin taking action under DMCA.

    click the link to read the last 21%
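
    Attributor hasn't published how its matching works, but one plausible way to implement the "80% or more of a story" test is word-shingling with a containment score; a toy sketch:

    ```python
    def shingles(text, k=5):
        """All overlapping k-word windows in the text, as a set."""
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def containment(original, suspect, k=5):
        """Fraction of the original's shingles that reappear in the suspect page."""
        orig = shingles(original, k)
        return len(orig & shingles(suspect, k)) / len(orig) if orig else 0.0

    story = "a coalition of publishers will launch a concerted crackdown on copyright pirates on the web"
    page = story + " according to a blog post i read today"  # near-total reuse

    if containment(story, page) >= 0.80:  # the 80% threshold from the article
        print("candidate violation")
    ```
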

    • Re: (Score:3, Insightful)

      by noidentity ( 188756 )

      Click the link to read the first 21%

      The first offending sites to be targeted will be those using 80% or more of copyrighted stories more than 10 times per month.

      In the first stage of a multi-step process aimed at encouraging copyright compliance instead of punishing scofflaws, Pitkow said online publishers identified by his company will be sent a letter informing them of the violations and urging them to enter into license agreements with the publishers whose content appears on their sites.

      If copyright pira

  • Easy enough to search the Google cache and bypass the robots.txt problem...
    Heck, they SHOULD proclaim the spider name; it would drum up a lot of information,

    and they could focus on sites that mention it in robots.txt to check from other sources.

  • If a site posts articles yet has them excluded by robots.txt, doesn't that defeat the purpose of posting the articles where they can be indexed and found?

    In other words, if an article is posted but robots.txt says not to index it, that article isn't going to show up in a search. It's a bit like rebroadcasting an NFL game in a movie theatre with no one in the theatre to watch it.

  • by bcrowell ( 177657 ) on Friday March 05, 2010 @10:47AM (#31371610) Homepage
    I've had an experience with Attributor myself, and it's given me a pretty low opinion of them. I'm the author of a CC-BY-SA-licensed calculus textbook, titled "Calculus." Someone posted a copy of the PDF on Scribd, as allowed by the license. So one day I got an email from one of the people who runs Scribd, saying that Attributor had sent them a takedown notice, which they were skeptical about. Attributor hadn't supplied any useful information about what they thought was a violation. I called Scribd, and they checked and said it was a mistake -- they were working for Macmillan, which publishes another book titled "Calculus." So here they were, serving a DMCA notice under penalty of perjury, and they hadn't even checked whether the name of the author was the same, or whether any of the text was the same. Their bot just found that the title, "Calculus," was the same as the title of one of their client's books. Pretty scummy.
    • by bcrowell ( 177657 ) on Friday March 05, 2010 @10:59AM (#31371784) Homepage
      Oops, important correction to the parent post: "I called Attributor, and they checked and said it was a mistake -- they were working for Macmillan..."
    • by jfengel ( 409917 )

      It sounds like this sort of thing wouldn't happen under their new tactic, which actually does compare the text rather than just the title.

      Pretty stupid of them to have sent a takedown notice based on nothing more than the title.

    • Re: (Score:3, Interesting)

      by Hatta ( 162192 )

      So, did you press charges?

  • It's not just copyright. The slow but steady alignment of copyright holders, oppressive governments, legal changes, media pressure and surveillance technology has wound itself around the internet worldwide, and now the real pressure is being applied. This is a secular change, largely unobservable over smaller intervals, but the end result is that the web in 10 and 20 years time will be a noticeably less free place than it is today. Everything you do online will be monitored, everything will be logged, everything will be legally defined and controlled, and every infringement will be subject to criminal penalties.

    The parties responsible have the support of the politicians, the censors, the press, the money men and most of the public. We used to have the support of the geeks and their creativity in bypassing censorship. But let's face it; geeks have not created a truly disruptive technology since BitTorrent almost ten years ago. While Geekdom slept, the likes of Cisco and the major Telcos have constructed a frightening array of technologies for surveillance and control of the internet, and the fruit of their efforts can be seen in China, Iran and now even countries like Australia. Soon it will be seen all over the world.

    The Web has changed. Governments are no longer going to tolerate the freedom and anarchy that it grants to the population at large. They now have the means, method and opportunity to put this genie back in the bottle. This crackdown is the first offensive on what is going to be a wide front. Expect the free net to lose.

  • by aarenz ( 1009365 ) on Friday March 05, 2010 @11:28AM (#31372158)
    I suspect that many sites using this type of content will find ways of hiding the fact: non-display characters, breaking the article into multiple pages, and the like. I would love to see their system in action on some test sites, to figure out how much you have to mangle the content before it no longer matches the original.
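
    For what it's worth, most of those tricks would fall to routine normalization before matching; a guess at the kind of cleanup a fingerprinting pipeline would run first:

    ```python
    import re
    import unicodedata

    def normalize(text):
        """Undo the cheap obfuscations before comparing against known articles."""
        text = unicodedata.normalize("NFKC", text)              # fold lookalike/full-width chars
        text = re.sub(r"[\u200b\u200c\u200d\ufeff]", "", text)  # strip zero-width characters
        text = re.sub(r"\s+", " ", text)                        # collapse whitespace/page splits
        return text.lower().strip()

    # A zero-width space hidden inside "The" disappears; case and spacing are leveled.
    print(normalize("Th\u200be  Quick\nBrown   Fox"))  # -> "the quick brown fox"
    ```
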
