
Britain's Conservatives Scrub Speeches from the Internet

An anonymous reader writes news of an attempt to erase a bit of history. From the article: "The Conservative Party have attempted to delete all their speeches and press releases online from the past 10 years, including one in which David Cameron promises to use the Internet to make politicians 'more accountable'. The Tory party have deleted the backlog of speeches from the main website and the Internet Archive — which aims to make a permanent record of websites and their content — between 2000 and May 2010."

Comments Filter:
  • by Anonymous Coward on Wednesday November 13, 2013 @01:47PM (#45414531)

People have bought up domains they want to censor and then used robots.txt to do it.

    For example, this happened with partyvan.

  • by Mabhatter ( 126906 ) on Wednesday November 13, 2013 @01:53PM (#45414625)

    The main character's job was "correcting" stored historical documents to match what was being said "right now".

The reasoning for why their government must keep EVERYTHING on private people, but can obstruct and hide PUBLICLY OFFERED documents, has to be really, really funny!

  • by Anonymous Coward on Wednesday November 13, 2013 @01:55PM (#45414657)

    There's a theory out there that states that because most of what we do in the so-called Information Age is stored in somewhat fragile digital storage systems (as opposed to, for example, parchment), historians in the future will have very little to base their research of our age on, as most of the info will be permanently lost.
    Well, hundreds of thousands of posts on BBS systems from the '80s and '90s are already gone; delete the Internet Archive and the Web is gone too. Any thoughts?

  • by game kid ( 805301 ) on Wednesday November 13, 2013 @02:04PM (#45414783) Homepage

    Lol indeed. When Google aren't being ordered by the NSA (and by extension GCHQ and their political friends) to work for them, they volunteer outright. Enjoy the cache while it lasts and while they allow it, because they'll consider either an oversight.

  • 100 Years (Score:3, Interesting)

    by BringsApples ( 3418089 ) on Wednesday November 13, 2013 @02:06PM (#45414803)
    We as humans are not able to "remember" back further than 100 years. I mean that you cannot get any information from anyone that would give you a clear, practical understanding of the mindset from 100 years ago. You can go ask your grandparent(s) things about the past, but the vocabulary they use more than likely won't fit your vocabulary, and therefore you will not be able to get the understanding they're trying for. Maybe 100 years is too small, but it can't be far from the real number, plus it's nice and round ;)

    In this way, our societies are going through life sorta like that movie Memento. All that has to happen is a slight variation of the real story that produces the same basic result, but with a new context - Christopher Columbus "discovering" America comes to mind. Perhaps the powers that be depend on this, and are looking to make that number (100 here) smaller.
  • by pixelpusher220 ( 529617 ) on Wednesday November 13, 2013 @02:24PM (#45415047)
    Couple that with the fact that the Google cached copy of the site had a 'search for speeches' section, which is now, interestingly enough, missing as well.
  • by morgauxo ( 974071 ) on Wednesday November 13, 2013 @03:27PM (#45415687)

    The problem is that people are buying up the domain names of old websites which no longer exist just to publish a robots.txt file. That then automatically deletes, or at least blocks access to, the entire history of everything that ever happened at that domain, including the past website which the new owner has nothing to do with.

    I suppose they are just trying to honor site owners' wishes, even when the owner may have initially forgotten about robots.txt and added it later. The robot doesn't know that the old content belonged to someone else who DID NOT wish to block it. Maybe a good solution is that when they notice a new robots.txt, everything from the last 'X' months gets deleted (go ahead and debate values of X). Data from prior to that should be left alone, even if it was posted by the same site owner who is posting the robots.txt today.

    Tough cookies! If you want to control how your data is used, I don't see a problem with requiring you to actually take the time to learn about things like robots.txt before you publish. It's really no different from releasing source code under the GPL and then later turning it into a closed-source product. All your new work belongs to you, but you don't get to force everyone to delete every copy they might have of the old code, and you can't stop them from forking it.

    -- I would totally consider an 'X value' of zero as being on the table btw
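    The mechanism described above can be sketched with Python's standard `urllib.robotparser` module. This is a minimal illustration, assuming a new domain owner publishes a robots.txt that disallows everything for `ia_archiver` (the Internet Archive's crawler user agent); the hostname and page path are hypothetical.

    ```python
    from urllib import robotparser

    # Hypothetical robots.txt a new domain owner might publish to block archiving.
    ROBOTS_TXT = """\
    User-agent: ia_archiver
    Disallow: /
    """

    rp = robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())

    # The archive crawler is refused everywhere on the domain, including pages
    # published by the domain's *previous* owner...
    print(rp.can_fetch("ia_archiver", "http://example.com/old-page.html"))   # False

    # ...while any other bot, with no matching rule, is allowed by default.
    print(rp.can_fetch("SomeOtherBot", "http://example.com/old-page.html"))  # True
    ```

    Nothing in the protocol distinguishes old content from new, which is exactly the problem the comment describes: the exclusion applies retroactively to the whole domain.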

  • by mikael ( 484 ) on Wednesday November 13, 2013 @04:34PM (#45416465)

    They buy up a domain when it becomes available and set the robots.txt file to "do not archive"; the Internet Archive's crawler then honors that instruction retroactively and blocks access to all past archives of the domain.

    You used to be able to visit old web-pages through the Google cache. Remember when Google would always have a cached copy of what you wanted to read? Nowadays they just seem to be happy to be a proxy server which records everything you download from the target webpage.

  • by msobkow ( 48369 ) on Wednesday November 13, 2013 @06:43PM (#45417951) Homepage Journal

    Here in Canada, Conservative PM Harper has taken heat lately for breaking all the links in our government's historical archive of the legislation that's been posted for the past decade or two. It's just... gone. The entire archive, except for maybe the past 5 years' worth.

    That archive is public government information, not Conservative property.

"If it's not loud, it doesn't work!" -- Blank Reg, from "Max Headroom"