The Best Practices Exchange has posted its 2012 conference presentations online. The presentations cover managing e-records, web archiving, managing digital photographs, and disaster recovery of digital materials.
You can find all of the presentations here: http://bpexchange.org/
An article by Scott G. Ainsworth, Ahmed AlSum, Hany SalahEldeen, Michele C. Weigle, and Michael L. Nelson of Old Dominion University, “How Much of the Web Is Archived?”, asks exactly what its title suggests: it samples URIs and queries web archives to estimate what fraction of the web is preserved.
Check out the article here: http://arxiv.org/pdf/1212.6177v2.pdf
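That kind of coverage measurement can be approximated with a Memento TimeMap lookup: ask an archive for its list of captures of each sampled URI. Here is a minimal Python sketch, assuming the Wayback Machine's TimeMap endpoint and using placeholder sample URIs (neither comes from the article):

```python
# Sketch: estimate archival coverage for a sample of URIs by asking a
# Memento-compliant archive for each URI's TimeMap (its list of captures).
# The TimeMap endpoint below is the Wayback Machine's (an assumption here);
# any Memento aggregator could be substituted. Sample URIs are placeholders.
import urllib.error
import urllib.request

TIMEMAP = "http://web.archive.org/web/timemap/link/{uri}"

def is_archived(uri: str) -> bool:
    """True if the archive reports at least one memento (capture) for the URI."""
    try:
        with urllib.request.urlopen(TIMEMAP.format(uri=uri), timeout=30) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError:
        return False  # the endpoint answers 4xx when it knows of no captures
    # A TimeMap is application/link-format; captures carry rel values
    # containing "memento" (e.g. rel="memento", rel="first memento").
    return "memento" in body

sample = ["http://example.com/", "http://example.org/"]
archived = sum(is_archived(u) for u in sample)
print(f"{archived}/{len(sample)} sampled URIs have at least one capture")
```

Counting the mementos per URI, rather than returning a boolean, would give the fuller coverage picture the paper reports.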
The North Carolina State Government Web Site Archives (WSA) contains a range of archived websites created by state agencies. The project website links to its formal archiving guidelines, capture standards, and procedures.
In 2009 the National Archives of Australia released the newest version of its Xena digital preservation software. As stated on its website, “Xena aids in the long term preservation of digital records. Xena is an acronym meaning Xml Electronic Normalising for Archives.” Xena detects the file formats of digital objects and converts them into open formats for preservation; it has been used to preserve a range of electronic formats, such as websites and email. A rough sketch of this detect-and-normalise idea follows the links below.
General site: http://xena.sourceforge.net/index.php
Tool for converting email Outlook PST files to XML: http://xena.sourceforge.net/help.php?page=setemail.html
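Xena itself is a Java application, but the normalise-on-ingest idea it implements is easy to sketch: detect each object's format, then convert anything proprietary to a designated open format. A rough Python illustration, where the format map and the detection step are hypothetical stand-ins for Xena's actual plugin registry:

```python
# Illustration of normalise-on-ingest, the approach Xena takes: identify
# each file's format, then convert proprietary formats to an open
# preservation format. The format map below is a hypothetical stand-in
# for Xena's real plugin registry, and detection here is by extension only.
import mimetypes
from pathlib import Path

# Hypothetical mapping: detected MIME type -> open target format.
NORMALISATION_MAP = {
    "application/msword": "odt",   # Word document  -> OpenDocument Text
    "image/bmp": "png",            # Windows bitmap -> PNG
}

def plan_normalisation(path: Path) -> str:
    """Decide what to do with one file: keep it as-is, or convert it."""
    mime, _ = mimetypes.guess_type(path.name)  # naive format detection
    target = NORMALISATION_MAP.get(mime or "")
    if target is None:
        return f"{path.name}: open or unrecognised format -- keep as-is"
    return f"{path.name}: detected {mime}, normalise to .{target}"

for f in [Path("report.doc"), Path("scan.bmp"), Path("notes.txt")]:
    print(plan_normalisation(f))
```

As the acronym suggests, the real tool also wraps its normalised output in XML rather than merely printing a plan.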
In 2008 the University of London Computer Centre and JISC published “PoWR: The Preservation of Web Resources Handbook.” As stated in the handbook, “The first part deals with web resources and makes practical suggestions for their management, capture, selection, appraisal and preservation. It includes observations on web content management systems, and a list of available tools for performing web capture. It concludes with a discussion of Web 2.0 issues, and a range of related case studies. The second part is more focused on web resources within an Institution. It offers advice about institutional drivers and policies for web archiving.”
In 2009 the International Internet Preservation Consortium (IIPC) published a report titled “Long-Term Preservation of Web Archives – Experimenting with Emulation and Migration Methodologies.” The report was written by Andrew Stawowczyk Long of the National Library of Australia. It explores the practical implementation of preservation actions, such as emulation and migration methodologies, in relation to preserving meaningful long-term access to content collected in web archives.
This past June the International Internet Preservation Consortium published a report titled “Harvesting Practices Report.” The report was written by Michaela Mayr of Web@rchive Austria at the Austrian National Library. It analyzes current Internet archiving processes and experiences among IIPC members to provide a benchmark and an overview of web archiving practice.
In June 2011 the Oxford Internet Institute at the University of Oxford and the International Internet Preservation Consortium (IIPC) published a new report titled “Web Archives: The Future(s).” The report examines ways to encourage new users and uses of web archives, new models of web archiving, and new modes of engaging with researchers.
The Memento framework is being developed “to make it as straightforward to access the Web of the past as it is to access the current Web. If you know the URI of a Web resource, the technical framework proposed by Memento allows you to see a version of that resource as it existed at some date in the past, by entering that URI in your browser like you always do and by specifying the desired date in a browser plug-in. Or you can actually browse the Web of the past by selecting a date and clicking away. Whatever you land upon will be versions of Web resources as they were around the selected date.”
Check out the project website: http://www.mementoweb.org/
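Under the hood, Memento's “pick a date” behavior is ordinary HTTP content negotiation: the client sends an Accept-Datetime header to a TimeGate, which redirects to the memento closest to that date. A minimal sketch, assuming the Wayback Machine's TimeGate URL pattern (an illustrative choice, not something mandated by the Memento site):

```python
# Sketch of Memento datetime negotiation: send Accept-Datetime to a
# TimeGate and follow its redirect to the memento nearest the requested
# date. The TimeGate URL pattern is the Wayback Machine's and is an
# assumption for this example.
import urllib.request

TIMEGATE = "http://web.archive.org/web/"  # TimeGate: prefix + original URI

req = urllib.request.Request(
    TIMEGATE + "http://example.com/",
    headers={"Accept-Datetime": "Tue, 01 Jan 2013 00:00:00 GMT"},
)
with urllib.request.urlopen(req) as resp:
    # urlopen follows the TimeGate's redirect; geturl() is the memento URI.
    print("Memento:", resp.geturl())
    print("Memento-Datetime:", resp.headers.get("Memento-Datetime"))
```

The same negotiation works against any Memento-compliant TimeGate; only the endpoint URL changes.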
In January 2010 the National Library of New Zealand published a paper discussing concerns raised during its first web harvest in October 2008, specifically the notification period, the robots policy, and the location of the harvester. The library worked with the Internet Archive to perform the crawls. The robots.txt check at the heart of that policy debate is sketched after the links below.
Here is a link to the paper: http://www.natlib.govt.nz/about-us/catalogues/library-documents/harvest-options-paper
Here is a link to current updates on their web harvest project: http://www.natlib.govt.nz/about-us/current-initiatives/web-harvest-2010
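For reference, the robots-policy question boils down to a check like the following, which any polite harvester performs before fetching a page. This sketch uses Python's standard library; the user-agent string is a placeholder, not the name of any actual crawler:

```python
# The robots.txt check at the heart of the "robots policy" debate:
# before fetching a page, a polite harvester asks whether the site's
# robots.txt allows its user agent. "ExampleHarvester" is a placeholder.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

for url in ["http://example.com/", "http://example.com/private/page.html"]:
    allowed = rp.can_fetch("ExampleHarvester", url)
    print(f"{url}: {'fetch' if allowed else 'skip'}")
```

Whether a national-library harvester should honor that answer, given its legal deposit mandate, is precisely the question the paper weighs.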