preservation repositories
Recently Published Documents


TOTAL DOCUMENTS: 9 (FIVE YEARS: 0)
H-INDEX: 3 (FIVE YEARS: 0)

Publications ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 39
Author(s):  
Andrew Hankinson ◽  
Donald Brower ◽  
Neil Jefferies ◽  
Rosalyn Metz ◽  
Julian Morley ◽  
...  

The Oxford Common File Layout describes a shared approach to filesystem layouts for institutional and preservation repositories, recommending how digital repository systems should structure and store files on disk or in object stores. The authors represent institutions where digital preservation practices are well established and proven over time, or where significant work has been done to flesh out such practices. A community of practitioners is emerging to assess successful preservation approaches designed to address a spectrum of use cases. Against this background, the Oxford Common File Layout (OCFL) is described as the culmination of over two decades of experience with existing standards and practices.
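To make the layout concrete, the sketch below writes a minimal single-version OCFL-style object: a namaste tag file marking the object root, a `v1/content/` payload directory, and an `inventory.json` mapping content digests to storage paths. The field names follow the published OCFL specification, but this is an illustrative sketch, not a conformant implementation (it omits, for example, the inventory sidecar digest and version metadata beyond a placeholder timestamp).

```python
import hashlib
import json
import pathlib
import tempfile

def sha512_hex(data: bytes) -> str:
    return hashlib.sha512(data).hexdigest()

def write_ocfl_object(root: pathlib.Path, obj_id: str, files: dict) -> None:
    """Lay out a single-version OCFL-style object: namaste tag,
    v1/content/ payload, and an inventory mapping digests to paths."""
    root.mkdir(parents=True, exist_ok=True)
    # Namaste file identifies the directory as an OCFL object root.
    (root / "0=ocfl_object_1.0").write_text("ocfl_object_1.0\n")
    content = root / "v1" / "content"
    content.mkdir(parents=True)
    manifest, state = {}, {}
    for name, data in files.items():
        (content / name).write_bytes(data)
        digest = sha512_hex(data)
        manifest[digest] = [f"v1/content/{name}"]  # digest -> storage path
        state[digest] = [name]                     # digest -> logical path
    inventory = {
        "id": obj_id,
        "type": "https://ocfl.io/1.0/spec/#inventory",
        "digestAlgorithm": "sha512",
        "head": "v1",
        "manifest": manifest,
        "versions": {"v1": {"created": "2019-01-01T00:00:00Z", "state": state}},
    }
    (root / "inventory.json").write_text(json.dumps(inventory, indent=2))

root = pathlib.Path(tempfile.mkdtemp()) / "object-1"
write_ocfl_object(root, "urn:example:object-1", {"file.txt": b"hello"})
print(sorted(p.relative_to(root).as_posix()
             for p in root.rglob("*") if p.is_file()))
```

Because the inventory records content by digest, later versions can reference unchanged files without copying them, which is central to how OCFL keeps versioned objects storage-efficient.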


2016 ◽  
Vol 1 ◽  
Author(s):  
Joyce Backus ◽  
Robert Cartolano ◽  
Christina Drummond ◽  
Agathe Gebert ◽  
Brooks Hanson ◽  
...  

Are we satisfied with the current state of global knowledge preservation? What are the current preservation methods? Who are the actors? Is this system satisfactory? What role do institutional repositories play in this process? What does the future hold for these repositories (taking into account linking efforts, publishing company concerns about revenue declines, widespread dark archiving practices, and so on)? Would new mandates help (or do we simply need to tighten existing mandates so they actually compel authors to do certain things)? And how do versions of record figure into all of this—that is, how do archiving policies (with regard to differences between pre-journal and post-journal versions) affect knowledge accuracy and transfer?


2015 ◽  
Vol 3 (4) ◽  
pp. 313-330 ◽  
Author(s):  
Mary Clarke

Abstract: The long-term care of collected and created data is an ethical obligation in the fields of archaeology and cultural heritage management. With the growing application of digital methodologies in these fields and the complexity of the resulting data, this task has become complicated. Digital data preservation firms have emerged since this methodological shift, but their policies—championing the democratization of academic data—may conflict with the legal obligations dictated by the countries where data originate. Scholars thus face an inevitable choice between two obligations, one ethical and one legal. As the amount of digital data grows and the options for preservation remain fundamentally misaligned with research norms and project workflows, this digital dilemma puts the integrity of data at risk of loss. This article addresses the dilemma by evaluating existing data publication, archiving, and preservation repositories and considering how, as solutions, they can be integrated into multiple workflows. I also propose new directions for archaeological associations, suggesting that they should establish a means of evaluating and approving the third-party preservation firms managing the future of academic research, before those firms become ubiquitous.


2013 ◽  
Vol 42 (1) ◽  
pp. 17-30 ◽  
Author(s):  
Paul Conway

Abstract: Large-scale digitization efforts by third-party firms are the subject of no small amount of controversy and criticism, as is especially the case with Google Books. This article reports some of the findings and important implications of a rigorous multi-year quantitative and qualitative assessment of the images representing a sizable proportion of the digital surrogates created by Google and deposited in the HathiTrust, which is one of the most important large-scale preservation initiatives to emerge in higher education in the past fifty years. The population of study described here consists of English-language books and serials published before 1923 that were scanned and processed by Google between 2004 and 2010. At the time the data for the study were gathered (2011), this population consisted of approximately 1.25 million volumes, or roughly 12 percent of the HathiTrust corpus. The findings suggest that the imperfection of digital surrogates is an obvious and nearly ubiquitous feature of Google Books, and that such imperfection has become and will remain firmly ensconced in collaborative preservation repositories.


2010 ◽  
Vol 5 (1) ◽  
pp. 34-45 ◽  
Author(s):  
Priscilla Caplan ◽  
William R. Kehoe ◽  
Joseph Pawletko

Towards Interoperable Preservation Repositories (TIPR) is a project funded by the Institute of Museum and Library Services to create and test a Repository eXchange Package (RXP). The package will make it possible to transfer complex digital objects between dissimilar preservation repositories. For reasons of redundancy, succession planning, and software migration, repositories must be able to exchange copies of archival information packages with each other. Every repository application, however, describes and structures its archival packages differently, so each system produces dissemination packages that are rarely understandable or usable as submission packages by other repositories. The RXP is an answer to that mismatch. Other solutions for transferring packages between repositories focus either on transfers between repositories of the same type, such as DSpace-to-DSpace transfers, or on processes that rely on central translation services. Rather than build translators between many dissimilar repository types, the TIPR project has defined a standards-based package of metadata files that can act as an intermediary information package, the RXP, a lingua franca all repositories can read and write.
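The architectural argument for an intermediary package can be made with simple counting: direct translation requires a converter for every ordered pair of repository types, while a shared lingua franca only requires each type to read and write one common format. The sketch below (hypothetical helper names, not part of TIPR itself) compares the two.

```python
def translators_needed(n_repository_types: int):
    """Compare translator counts for two interoperability strategies.

    pairwise: a dedicated translator for each directed pair of
    repository types (A-to-B is distinct from B-to-A).
    via_intermediary: each repository type needs only a reader and a
    writer for one common exchange package (the RXP approach).
    """
    pairwise = n_repository_types * (n_repository_types - 1)
    via_intermediary = 2 * n_repository_types
    return pairwise, via_intermediary

for n in (3, 5, 10):
    direct, hub = translators_needed(n)
    print(f"{n} repository types: {direct} pairwise vs {hub} via RXP")
```

Beyond three or four repository types, the pairwise approach grows quadratically while the intermediary approach grows linearly, which is why a common exchange format scales where point-to-point translators do not.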


2009 ◽  
Vol 4 (3) ◽  
pp. 123-136 ◽  
Author(s):  
Stephen Abrams ◽  
Sheila Morrissey ◽  
Tom Cramer

The JHOVE characterization framework is widely used by international digital library programs and preservation repositories. However, its extensive use over the past four years has revealed a number of limitations imposed by idiosyncrasies of design and implementation. With funding from the Library of Congress under its National Digital Information Infrastructure Preservation Program (NDIIPP), the California Digital Library, Portico, and Stanford University are collaborating on a two-year project to develop and deploy a next-generation architecture providing enhanced performance, streamlined APIs, and significant new features. The JHOVE2 Project generalizes the concept of format characterization to include identification, validation, feature extraction, and policy-based assessment. The target of this characterization is not a simple digital file, but a (potentially) complex digital object that may be instantiated in multiple files.
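JHOVE2 itself is a Java framework; purely as an illustration of the four characterization steps the abstract names (identification, validation, feature extraction, and policy-based assessment), the following Python sketch runs a toy pipeline over a single file. The signature table, validator, and policy hook are stand-ins, not JHOVE2 APIs.

```python
import dataclasses
import pathlib
import tempfile

@dataclasses.dataclass
class Characterization:
    path: str
    format_id: str    # identification
    valid: bool       # validation
    features: dict    # feature extraction
    acceptable: bool  # policy-based assessment

# Toy signature table standing in for a real format registry.
MAGIC = {b"%PDF": "application/pdf", b"\x89PNG": "image/png"}

def characterize(path: pathlib.Path,
                 policy=lambda fmt, valid: valid) -> Characterization:
    """Run the four steps: identify the format from its signature,
    validate it, extract basic features, and assess against a policy."""
    head = path.read_bytes()[:4]
    fmt = next((mime for sig, mime in MAGIC.items() if head.startswith(sig)),
               "application/octet-stream")
    valid = fmt != "application/octet-stream"  # stand-in for real validation
    features = {"size_bytes": path.stat().st_size}
    return Characterization(str(path), fmt, valid, features, policy(fmt, valid))

sample = pathlib.Path(tempfile.mkdtemp()) / "doc.pdf"
sample.write_bytes(b"%PDF-1.4 ...")
result = characterize(sample)
print(result.format_id, result.valid, result.acceptable)
```

A real characterization framework would recurse into container formats here, which is the abstract's point about targeting potentially complex digital objects instantiated in multiple files rather than single files.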

