trusted digital repository
Recently Published Documents

TOTAL DOCUMENTS: 13 (five years: 1)
H-INDEX: 2 (five years: 0)
2021, Vol 16 (1), pp. 23
Author(s): Vivian B. Hutchison, Tamar Norkin, Maddison L. Langseth, Drew A. Ignizio, Lisa S. Zolly, ...

As Federal Government agencies in the United States pivot to increase access to scientific data (Sheehan, 2016), the U.S. Geological Survey (USGS) has made substantial progress (Kriesberg et al., 2017). USGS authors are required to make federally funded data publicly available in an approved data repository (USGS, 2016b). This type of public data product, known as a USGS data release, serves as a method for publishing reviewed and approved data. In this paper, we present major milestones in the approach the USGS took to transition an existing technology platform to a Trusted Digital Repository. We describe both the technical and the non-technical actions that contributed to a successful outcome. We highlight how initial workflows revealed patterns that were later automated, and the ways in which assessments and user feedback influenced design and implementation. The paper concludes with lessons learned, such as the importance of a community of practice, application programming interface (API)-driven technologies, iterative development, and user-centered design. This paper is intended to offer a potential roadmap for organizations pursuing similar goals.
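The lesson about API-driven technologies can be illustrated with a small sketch that builds a search query against a catalog API. The endpoint path and parameter names below are modelled loosely on the USGS ScienceBase catalog but are assumptions for illustration, not a documented interface.

```python
from urllib.parse import urlencode

# Hypothetical catalog search endpoint, modelled loosely on the USGS
# ScienceBase catalog; the path and parameter names are assumptions.
BASE_URL = "https://www.sciencebase.gov/catalog/items"

def build_search_url(query: str, max_items: int = 20) -> str:
    """Build a JSON search URL for data releases matching `query`."""
    params = {"format": "json", "q": query, "max": max_items}
    return BASE_URL + "?" + urlencode(params)

url = build_search_url("trusted digital repository")
```

A client would then fetch this URL and page through the JSON results; keeping repository workflows scriptable in this way is the point of the API-driven design the authors highlight.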


Mousaion, 2019, Vol 37 (1)
Author(s): Thomas Modiba, Mpho Ngoepe, Patrick Ngulube

Disruptive technologies are widely used in semi-periphery and core countries such as the United States of America, Australia, Croatia, and China to manage and preserve records. However, the same cannot be said of periphery countries, especially on the African continent. These countries, including South Africa, are struggling with the digitalisation of records, let alone the management of paper-based records. This study, conducted in the public sector in South Africa, draws on a literature review to critically analyse challenges to archival functions that can be mitigated through the application of artificial intelligence technologies. Findings reveal problems relating to governance in a digital environment, a lack of accountability, high litigation rates, poor audit results, and a lack of or poor service delivery emanating from a breakdown in records systems in South Africa. Both paper-based and digital records management systems in the public sector in South Africa are in a state of disarray. As a result, the preservation of digital records is taking place slowly, which leads to the loss of memory for the entire public sector. It is concluded that the market is ripe for disruptive technologies such as artificial intelligence, cloud computing and blockchain in the management and preservation of records in the South African public sector. The study recommends that governmental bodies cautiously consider storing their records in a cloud-based trusted digital repository as an interim solution while observing legal obligations. Other technologies, such as blockchain, can also be adopted to ensure the security of records.


Author(s): Nicolas Cazenave

Herbaria hold large numbers of specimens: approximately 22 million herbarium specimens exist as botanical reference objects in Germany, 20 million in France and about 500 million worldwide. High-resolution digital images of these specimens consume substantial bandwidth and disk space. New methods of extracting information from specimen labels have been developed using OCR (optical character recognition) techniques, but applying this technology to biological specimens is particularly complex because biological material appears in the image alongside the text, the vocabularies are non-standard, and the fonts vary in style and age. Much of the information is handwritten, and handwriting recognition is a less mature technology than OCR. Today, our system, the eTDR (European Trusted Digital Repository), provides OCR (using the Tesseract software) adapted to the requirements of herbarium specimen images and requires minimal installation in each institution. This is what we propose to make available to botanists through our portal. The goal for a museum is to be able to submit a large number of scanned images easily to a long-term archiving system, automatically obtain the OCR texts, and retrieve them via full-text search on an open data portal. Most of the images are provided for reuse under CC-BY licences. In each case, the rights of reuse associated with the data are specified in the associated metadata. This pilot was an opportunity to test the long-term storage service eTDR provided by CINES. The services developed by EUDAT (B2SAFE, B2Handle) were used to facilitate the transfer of data to the storage repository and to provide indexing services for access to that repository. This workflow, which was tested for the European project ICEDIG, is presented as a poster: see the document (Suppl. material 1).
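The retrieve-by-full-text-search step described above can be sketched as a minimal inverted index over OCR output. The specimen identifiers and label texts below are invented examples, and the portal's real indexing is of course more elaborate.

```python
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase and split OCR text into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    """Map each token to the set of specimen ids whose OCR text contains it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in tokenize(text):
            index[token].add(doc_id)
    return index

def search(index, query):
    """Return ids of specimens matching every token in the query (AND search)."""
    tokens = tokenize(query)
    if not tokens:
        return set()
    result = set(index.get(tokens[0], set()))
    for token in tokens[1:]:
        result &= index.get(token, set())
    return result

# Invented example OCR texts keyed by illustrative specimen barcodes.
ocr_texts = {
    "P00123456": "Herbier Museum Paris. Ranunculus acris L. Leg. Dupont, 1902.",
    "P00123457": "Ranunculus repens L. Bord de riviere, Loire, 1898.",
}
index = build_index(ocr_texts)
hits = search(index, "Ranunculus 1902")
```

An AND-combination of tokens is the simplest useful query model; a production portal would typically layer ranking and fuzzy matching on top to cope with OCR errors.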


2015, Vol 39 (121), pp. 22-40
Author(s): Rebecca Grant, Marta Bustillo, Sharon Webb

In 2011 the Digital Repository of Ireland (DRI) began work on the development of an interactive national Trusted Digital Repository for contemporary and historical social and cultural data. Copyright and intellectual property rights were identified as essential areas which the DRI, as a content holder and data publisher, needed to investigate in order to develop workflows, policy and the Repository infrastructure. We established a Copyright and IP Task Force (CIPT) in January 2013 to capture and identify IP challenges from our stakeholder community and the DRI’s demonstrator collections. This report outlines the legislative context in which the CIPT worked, and how the CIPT addressed copyright challenges through the development of policies and a robust framework of legal documentation for the Repository. We also provide a case study on Orphan Works, detailing the process undertaken by the Clarke Stained Glass Studios Collection, one of DRI’s demonstrator projects, in preparing their content for online publication in the Repository.


2014, Vol 24 (3), pp. 189-204
Author(s): Olav Hagen Sataslaatten

Purpose – This article aims to analyse the relationship between the Norwegian Noark standard and the concepts of Open Government and Freedom of Information (FOI). Noark is the Norwegian model requirements for Electronic Documents and Records Management Systems (EDRMS). It was introduced in 1984, making it not only the world's first model requirement for EDRMS but also, through the introduction of versions from Noark 1 to the present Noark 5, internationally the model requirement with the longest continuous implementation.

Design/methodology/approach – To better understand the technical outline and functionality of the Noark model requirements, it is necessary to see the connection to the wider framework of Norwegian governance legislation and its FOI Act (Norway, Freedom of Information Act, 2006) on the right of access to documents held by the public administration and public undertakings. FOI is the foundation on which the Norwegian Open Government platform (OEP) rests, as it aims to increase openness and transparency in Norwegian society. Being one of the first national initiatives to incorporate in a single platform an up-to-date nationwide registry of metadata derived from the EDRMS of the governmental sector, OEP is a model which could have relevance in open government settings outside Norway as well.

Findings – Non-fixity and randomness in the registering of metadata decrease the possibility of systematic search and retrieval, since search within records presumably requires a combination of two or more sets of metadata. Context is a crucial component in information retrieval from records, and no record contains only one metadata element. With few exceptions, a record relates to another record, and the relation between the two is itself a set of metadata. If the metadata relating two records does not follow a standardised format, retrieval possibilities will remain random. The unpredictability that follows from inadequate search results will decrease the credibility and trust that should be inherent in the information system. The absence of adequate search results will lead to an immediate decrease in the public's perception of the system as a valid or relevant trusted source of information. If metadata within a governmental agency is known to be subject to unauthorised alterations or deletions, trust in the authenticity and integrity of the information provided by the agency will decrease significantly. This in turn decreases predictability in the retrieval of information within the EDRMS. The parameters securing non-alteration of metadata once locked in a Noark-compliant EDRMS may be measured against the absence of the same in any system being compared.

Originality/value – An adequate analysis of the principles of trust embedded in the weekly or daily dissemination of metadata from the Noark databases to the OEP has to account for certain parameters. These parameters within the Noark requirements eliminate the possibility of unauthorised deletion, alteration or manipulation of metadata and documents in the databases of governmental organisations. The combination of parameters also creates context. The metadata transferred from the Noark systems to the OEP platform may never have been stored within a trusted digital repository. Transfer to the OEP happens weekly, whilst transfer to the repository of the National Archives is performed far less often – perhaps every tenth year. The contents of the Noark-based systems are not stored in trusted digital repositories in the governmental agencies, but remain part of the ordinary grid of servers and databases.
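The Findings' point that systematic retrieval depends on combining two or more standardised metadata elements can be sketched as follows. The field names and record values are illustrative, not actual Noark 5 metadata elements.

```python
# Sketch of why standardised metadata enables systematic retrieval:
# every record carries the same fixed fields, so a search can combine
# two or more of them deterministically. Field names are illustrative.
records = [
    {"case_id": "2014/0012", "doc_date": "2014-03-01", "agency": "NAV", "title": "Application"},
    {"case_id": "2014/0012", "doc_date": "2014-04-15", "agency": "NAV", "title": "Decision"},
    {"case_id": "2014/0099", "doc_date": "2014-03-01", "agency": "UDI", "title": "Application"},
]

def retrieve(records, **criteria):
    """Return records matching every given metadata field (AND-combination)."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

hits = retrieve(records, case_id="2014/0012", agency="NAV")
```

If field names or value formats drifted from record to record, the `all(...)` match would fail unpredictably, which is the randomness in retrieval the author warns about.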


2008, Vol 2 (1), pp. 92-101
Author(s): MacKenzie Smith, Reagan Moore

The MIT Libraries, the San Diego Supercomputer Center, and the University of California San Diego Libraries are conducting the PLEDGE Project to determine the set of policies that affect operational digital preservation archives and to develop standardized means of recording and enforcing them using rules engines. This has the potential to allow for automated assessment of “trustworthiness” of digital preservation archives. We are also evaluating the completeness of other efforts to define policies for digital preservation such as the RLG/NARA Trusted Digital Repository checklist and the PREMIS metadata schema. We present our results to date.
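The idea of recording preservation policies in a machine-enforceable form and automating a trustworthiness assessment can be sketched as rules evaluated against repository state. The rule names and metadata fields below are illustrative assumptions, not drawn from PLEDGE, the RLG/NARA checklist, or PREMIS.

```python
# Minimal sketch of policy-as-rules assessment: each policy is a named
# predicate over an item's metadata, and an item counts as "trustworthy"
# only if every rule holds. Rule and field names are illustrative.
RULES = {
    "has_checksum": lambda item: bool(item.get("checksum")),
    "has_format_id": lambda item: bool(item.get("format")),
    "replicated": lambda item: item.get("copies", 0) >= 2,
}

def audit(item):
    """Evaluate every policy rule against one repository item."""
    return {name: rule(item) for name, rule in RULES.items()}

def trustworthy(item):
    """An item passes the assessment only if all rules hold."""
    return all(audit(item).values())

item = {"checksum": "sha256:9f2a...", "format": "image/tiff", "copies": 3}
report = audit(item)
```

Expressing policies as data rather than hard-coded checks is what makes the assessment auditable: the `audit` report shows exactly which policy failed, which is the kind of output a rules engine would feed into an automated trustworthiness review.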

