
British Library To Archive One Billion UK Websites

An anonymous reader writes "The British Library is to begin archiving the entire UK web, including one billion pages from 4.8 million websites, blogs, forums and social media sites. The process will take five months, with the aim of presenting a more complete picture of news events for future generations to read and learn from."


  • archive.org? (Score:5, Interesting)

    by denpun ( 1607487 ) on Sunday April 07, 2013 @03:58AM (#43383075)

    Why not work with the good folks at archive.org and their Internet Wayback Machine [archive.org]?

    Is it not a similar idea?

    The Internet Wayback Machine folks could use the funding and would achieve the same purpose, albeit perhaps not in a format that the library folks want... but they could come to an agreement.

    • by denpun ( 1607487 )

      Wasn't able to access the linked article, btw (or the parent site, for that matter). /.ed already?

    • Re:archive.org? (Score:5, Insightful)

      by kaiidth ( 104315 ) on Sunday April 07, 2013 @04:57AM (#43383225)

      Without wishing to offend it, the BL is a monolithic organisation that doesn't always play well with others. Part of that is because funding doesn't always work that way. You can get money for claiming that you are going to do the very first über-awesome UK archive, but your chances of receiving the funding become rather lower if, in the very first breath, you point out that somebody else has been doing pretty much this for a decade. Another part of it is that most politicians would likely want the national heritage, such as it is (jubilee celebration tweets - please...), to be held by that nation's own national library.

      I would imagine the BL have referenced archive.org work extensively, but will differentiate this project with what tits in suits like to call "a compelling USP." To put it in plain English: they'll have a neat explanation that suggests they are totally aware of previous work in the domain, whilst making sure that this project looks a) different, b) excitingly new and c) contextually better.

      • by Anonymous Coward

        Without wishing to offend it, the BL is a monolithic organisation that doesn't always play well with others.

        Where 'others' also includes people who might wish to make use of the library but are refused admission despite having a research case, whereas all UK undergraduates are automatically granted access.

      • Without wishing to offend it, the BL is a monolithic organisation that doesn't always play well with others.

        And you REALLY don't want to piss off their Rare Book Retrieval Unit!

      • by ibwolf ( 126465 )

        I would imagine the BL have referenced archive.org work extensively

        They've actually worked closely with the Internet Archive for many, many years. This includes commissioning the IA to conduct crawls of government sites for them.

        Both the BL and the IA are members of the International Internet Preservation Consortium (IIPC; see http://netpreserve.org/ [netpreserve.org]). Both are very familiar with what the other is doing in this space.

        So why not let the IA do all the work? There are several reasons. Part of it is that the BL is responsible for web archiving as far as British cultural heritage is concerned.

        • by kaiidth ( 104315 )

          See, what you're saying is both sensible and unsurprising, but here's what bothers me: TFA doesn't acknowledge any of what you are saying. Instead, it suggests this is a novel activity, which seems ridiculous but happens for political reasons.

    • by Anonymous Coward
      The British Library will probably use the same techniques as the Internet Archive.

      Some reasons:
      * The Internet Archive may go bankrupt and the material may be lost. Government libraries may have - in theory at least - more reliable funding to preserve the material.
      * It is easier to do targeted crawling (of specific themes) using your own workers than through a third-party company.
      * There are some legal matters that may make it more "illegal" for a third party to do the crawling than if a government organisation does it.
    • Why not work with the good folks at archive.org and their Internet Wayback Machine [archive.org]?

      Is it not a similar idea?

      The Internet Wayback Machine folks could use the funding and would achieve the same purpose, albeit perhaps not in a format that the library folks want... but they could come to an agreement.

      This is specifically for UK web sites, and the British Library is a British institution funded by the British taxpayer. Archive.org is US-based and a separate entity.

  • by 93 Escort Wagon ( 326346 ) on Sunday April 07, 2013 @04:03AM (#43383085)

    We had a manager, some years ago, who had the bright idea of assigning one staff member the task of printing out our entire website once a month so she (the manager) could look things up easily.

  • How are they going to store the data? Isn't this whole library idea about storing things for future generations in case of a war or other mass-scale destruction? So when "future generations" uncover this Babylonian/British collection of knowledge hundreds of years later, they can still learn from the remains? What are they going to get from a 200-year-old hard drive, covered in dust?
    • by 93 Escort Wagon ( 326346 ) on Sunday April 07, 2013 @04:08AM (#43383105)

      How are they going to store the data?

      They're planning to save disk space by just referencing the original page content inside of an iframe.

    • Re:Data Storage (Score:4, Informative)

      by Anonymous Coward on Sunday April 07, 2013 @04:26AM (#43383159)

      The BL, and other memory institutions such as archives, apply a concept called "digital preservation" to the stored data. This concept, based on the OAIS model, covers all stages of storage, administration, maintenance and retrieval of these "remains".

      The hardest part of web archiving is not storing the data but rendering it in 200 years. They also need to store the browser; but nowadays browsers use so many different "sub-renderers" (Flash, Java, JavaScript and CSS engines, and whatnot) to render a page that there is a need to archive all those sub-renderers as well.

      The best-known strategy to date is to create and store emulator containers or VMs with the original software, so pages can be re-rendered in the far future.

      http://en.wikipedia.org/wiki/Open_Archival_Information_System [wikipedia.org]
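
      For the storage side: the standard container format used for this by the Internet Archive, the BL and other IIPC members is WARC (ISO 28500). As a minimal sketch - not BL's actual pipeline - here is how a single live page can be captured into a WARC file with the open-source Python warcio library (the output filename is arbitrary):

      # Capture one page into a WARC file using warcio's capture_http
      # helper, which records the HTTP request and response as WARC records.
      from warcio.capture_http import capture_http
      import requests  # warcio requires this import to come after capture_http

      with capture_http('snapshot.warc.gz'):
          requests.get('https://example.com/')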

    • Comment removed based on user account deletion
  • by icebike ( 68054 ) on Sunday April 07, 2013 @04:24AM (#43383157)

    Unless you do this fairly frequently, say every six months at a minimum, the picture left for future generations will be muddled at best.
    It's always interesting how the news changes with the passage of time; events are seen very differently in just a few weeks.

    On 9/11 I used Adobe's web-capture software, which essentially follows every link on every page of a site and builds a large replica of the site in PDF form. All the links work within that PDF, and every page on the site is preserved. I pointed it at all the major news websites, one large PDF for each, burned them to disk, and still have them today. (Yup, I violated a boatload of copyrights.)

    Two weeks later I did it again. You would be astounded at the difference. Entire pages were missing: not just unlinked, but even when you looked them up by a URL that appeared in the first capture, you wouldn't find them in the second. Other news sites kept the old stuff online, but the links often disappeared from their own pages, so the only way to find those pages was by following links from some other site.

    The point is that a single snapshot of the web does very little good unless it is part of an ongoing collection. Looking at the archive of a newspaper from June 6, 1944 wouldn't give you much of an idea of the Normandy invasion unless you had subsequent editions from the days and months that followed.
    But a website isn't a newspaper with discrete editions; it is a constantly evolving thing. Archiving it today (or at any single point in time) is fairly useless, yet archiving it daily is largely redundant (most stories will be the same). You can't tell which stories changed over time based solely on dates either, so you pretty much have to grab it all.
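
    For illustration, a rough sketch of that snapshot-and-diff idea in Python - not the Adobe tool - assuming the third-party requests and beautifulsoup4 packages; the start URL is hypothetical:

    import hashlib
    from urllib.parse import urljoin, urldefrag

    import requests
    from bs4 import BeautifulSoup

    def snapshot(start_url, limit=500):
        """Crawl one site breadth-first; return {url: sha256 of body}."""
        seen, pages, queue = set(), {}, [start_url]
        while queue and len(pages) < limit:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                continue
            pages[url] = hashlib.sha256(resp.content).hexdigest()
            soup = BeautifulSoup(resp.text, 'html.parser')
            for a in soup.find_all('a', href=True):
                link = urldefrag(urljoin(url, a['href']))[0]
                if link.startswith(start_url):  # stay on the same site
                    queue.append(link)
        return pages

    before = snapshot('https://news.example.co.uk/')  # hypothetical site
    # ... run again two weeks later ...
    after = snapshot('https://news.example.co.uk/')
    gone = set(before) - set(after)  # pages that vanished outright
    changed = {u for u in before.keys() & after.keys() if before[u] != after[u]}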

    Why doesn't the Library simply work out a deal with the Internet Archive's Wayback Machine [archive.org]? They seem to have this problem fairly well thought out. Maybe they plan to do that; I can't tell, because the site that wants to archive all of Britain seems slashdotted at the moment.

    It seems that libraries are about the only place that can get away with ignoring copyright these days.

    • > (Yup, I violated a boatload of copyrights.)

      So, did you distribute the created PDFs? If you didn't, and they're still in your private collection, then how did you violate the right to make copies?

    • The National Library of Iceland has had a similar program for a couple of years. The national TLD is collected three times a year and made available via the Wayback Machine [archive.org]. The English version of the project's page [vefsafn.is] is rather terse, but according to the Icelandic version, selected pages are collected more frequently when warranted, e.g. political debates around election time. Icelandic law requires publishers to deposit copies of their work with the National Library; this includes web pages, so the library has a legal basis for collecting them.
    • by dkf ( 304284 )

      Why doesn't the Library simply work out a deal with the Internet Archive's Wayback Machine [archive.org]? They seem to have this problem fairly well thought out. Maybe they plan to do that; I can't tell, because the site that wants to archive all of Britain seems slashdotted at the moment.

      I imagine that it will eventually happen, and that it will end up enriching the archive.org system when it does. Maybe it won't happen for a year or two, but when we're talking about long-term preservation that's not so important; and the global nature of the internet makes it valuable (and logical) to coordinate its historical archives globally as well.

      It seems that libraries are about the only place that can get away with ignoring copyright these days.

      National libraries cannot ignore copyright, but they have a special position with regard to copyright law: they're explicitly empowered to retain copies.

  • They should definitely reduce the time allotted to that tea break.
  • ...typically British utter redundancy.

    • ...typically British utter redundancy.

      Yeah, we're the sort of idiots who make more than one backup of important data. What's the point of that, eh?

      Hint: redundancy is sometimes a very, very good thing indeed.

  • That's going to be a lot of porn!
  • So will they be getting legal permission to host all of this copyrighted material?
    Don't all the individual websites own their own content? How does archive.org even get around this?
    And what about the illegal porn, cracks, hacks, and viruses?

  • by wisnoskij ( 1206448 ) on Sunday April 07, 2013 @08:33AM (#43383723) Homepage

    So the average website contains about 1 thousand pages then? That seems like a lot...
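
    (Checking the summary's headline figures, which that estimate depends on: 1,000,000,000 pages / 4,800,000 sites ≈ 208 pages per site on average.)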

    • So the average website contains about 1 thousand pages then? That seems like a lot...

      No, it doesn't. Imagine how many pages something like the BBC website has on any particular day.

        Yes, but in my opinion you would be hard pressed to find more than a few hundred regular websites that contain around 1000 pages or more. Even adding in every medium-or-larger forum, 1000 still seems like a lot. I think the modal website (the mode being one kind of average) would have something like 10 pages, with a bunch more in the 50 range, and still quite a few at a few hundred. But I really do not see many websites that have over 1000.

        I guess news sites that keep every article they ever published over the last 100 years would be the exception.

  • by Martin S. ( 98249 ) on Sunday April 07, 2013 @11:30AM (#43384567) Journal
    There seem to be a few posts making incorrect assumptions and raising questions. I was involved as a technical architect on the long-term preservation store aspect of this project a few years ago.

    archive.org: The BL is already cooperating with a number of other organisations doing the same thing, including archive.org, the Smithsonian, and the Scottish, French, Australian, Canadian and quite a few other national libraries. archive.org has been an important technology spike for these, but it is not the whole solution.

    Preservation: The BL has a legal responsibility to preserve its archive, including this content, essentially forever, which is a significant technology challenge.

    Legal: archive.org is essentially opt-in; the BL programme is a legal deposit requirement. The site content for any .uk TLD should be collected at least once a year. An important piece of the technology puzzle is identifying these sites and managing this process (see the sketch at the end of this comment).

    Scale: The last scaling estimate I saw placed the BL archive about two orders of magnitude larger than archive.org, and growing faster. The number of new websites in .uk grows faster than awareness of archive.org does. There are a lot of challenges:

    - Maintaining structure and semantic context

    - Searchable metadata

    - Searchable content

    - Re-presentation
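
    As mentioned above, here is a minimal sketch of the kind of TLD-scoping check that identifying .uk sites requires. This is not the project's actual code; the seed URLs are hypothetical, and the real legal-deposit scoping is broader than a pure TLD test, since UK-published sites on other TLDs can also be in scope:

    from urllib.parse import urlsplit

    def in_uk_tld(url: str) -> bool:
        """True if the URL's hostname falls under the .uk ccTLD."""
        host = (urlsplit(url).hostname or '').lower().rstrip('.')
        return host == 'uk' or host.endswith('.uk')

    # Hypothetical seed list; only the .uk hosts pass the filter.
    seeds = [
        'https://www.bl.uk/',
        'http://news.bbc.co.uk/politics',
        'https://example.com/uk-news',
    ]
    print([u for u in seeds if in_uk_tld(u)])
    # -> ['https://www.bl.uk/', 'http://news.bbc.co.uk/politics']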

  • The BnF (the French national library) started doing this in 2006 for a selection of .fr websites.
    By 2011 they had 16.5×10^9 files.
    They store the content on "PetaBoxes" made by the Internet Archive.

    See http://www.bnf.fr/en/collections_and_services/book_press_media/a.internet_archives.html [www.bnf.fr]

  • I'm pretty late to this story, but let me clear up some misunderstandings for posterity's sake:

    Disclosure: I've been involved in this effort for at least ten years; I'm head of ICT for one of the UK copyright libraries (the National Library of Wales). This story goes back to the primary legislation passed by the UK in 2003, and we've been working on the practicalities since before that legislation was passed.

    * Yes, the Internet Archive and others have been archiving websites for many years.
