B & I on the Web (Business & Industry Database). Richard M. Harris, founder; Responsive Database Services editorial staff. Responsive Database Services, Inc., 23611 Chagrin Blvd, Suite 320, Beachwood, Ohio 44122, USA. Tel: 800-313-2212 or 216-292-9620; Fax: 216-292-9621. Updated daily. URL: www.bidb.com Price: $5,995 annual subscription for corporations; 40 percent discount for academic and public libraries.

1999 ◽  
Vol 3 (1) ◽  
pp. 6-7
Author(s):  
Pauline Harris
2015 ◽  
Vol 10 (2) ◽  
pp. 144 ◽  
Author(s):  
Ann Glusker

A Review of: Maatta Smith, S. L. (2014). Web Accessibility Assessment of Urban Public Library Websites. Public Library Quarterly, 33(3), 187-204. http://dx.doi.org/10.1080/01616846.2014.937207 Abstract Objective – To determine the extent to which urban public libraries in the United States of America provide web sites that are readily accessible to individuals with disabilities, with reference to the Urban Libraries Council's EDGE initiative (specifically Benchmark 11, "Technology Inclusiveness"). Design – Web site evaluation. Setting – Urban public libraries in the United States of America. Subjects – The 127 library systems that were both members of the Urban Libraries Council at the time of the study and located in the United States of America. Methods – Using the "everyday life information seeking" conceptual framework, each web site in the purposive sample of public library systems was assessed with an online evaluation tool as well as by visual and physical scans, to determine web accessibility and, by extension, technology inclusiveness. Main Results – The online accessibility evaluation tool revealed that not one of the sites surveyed was free of errors or alerts. Contrast errors (related to color combinations), missing alternative text (text alternatives for visual elements), and missing form labels (which prevent screen readers from performing searches and navigating to results) were the most common problems. Visual and physical scans revealed that many sites lacked specific links and/or resources for persons with disabilities, and that the resources available used oblique language and required many clicks to access. In addition, the vast majority neglected to feature links to national resources such as the National Library Service for the Blind and Physically Handicapped. 
Conclusions – The web sites of urban public libraries are not yet completely accessible for persons with disabilities. At the very least they need coding fixes and ongoing maintenance to address the kinds of issues found by the online web evaluation tool used. In addition, resources for disabled persons should be prominently and clearly linked and promoted. Further research is called for, both in non-urban library systems and in testing a wider range of access technologies. Improvement efforts should acknowledge that web design that improves access for persons with disabilities serves the broader community as well.
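Two of the error types the review names, missing alternative text and missing form labels, can be flagged mechanically. The review used an unnamed online evaluation tool, not the script below; this is only a minimal standard-library sketch of that kind of check, with the class and function names invented for the example:

```python
from html.parser import HTMLParser

class AccessibilityScanner(HTMLParser):
    """Flags two common accessibility errors: images without alt text
    and form controls without an associated <label for=...>."""
    def __init__(self):
        super().__init__()
        self.errors = []
        self.label_targets = set()   # ids referenced by <label for="...">
        self.input_ids = {}          # id -> tag name of form controls seen

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and not a.get("alt"):
            self.errors.append("missing alt text: <img src=%r>" % a.get("src"))
        elif tag == "label" and a.get("for"):
            self.label_targets.add(a["for"])
        elif tag in ("input", "select", "textarea"):
            self.input_ids[a.get("id")] = tag

def scan(html):
    """Parse an HTML string and return a list of error descriptions."""
    s = AccessibilityScanner()
    s.feed(html)
    # A form control is labelled only if some <label for=...> points at its id.
    for el_id, tag in s.input_ids.items():
        if el_id not in s.label_targets:
            s.errors.append("missing form label: <%s id=%r>" % (tag, el_id))
    return s.errors
```

Contrast errors, the third common problem found, require computing color ratios from rendered styles and are beyond a parser-level check like this one.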


Author(s):  
Heidi Julien ◽  
Michelle Helliwell

With sweeping changes in the way Canadians seek and use information in recent years, public libraries have been on a quest to stake a claim in the information society. In addition, Industry Canada has named public libraries as vehicles for its 'Connecting Canadians' initiative. This paper reports the results of an analysis of public libraries' responses to these imposed roles. The web sites of Canada's 22 largest cities were analyzed...


2015 ◽  
Vol 49 (2) ◽  
pp. 205-223
Author(s):  
B T Sampath Kumar ◽  
D Vinay Kumar ◽  
K.R. Prithviraj

Purpose – The purpose of this paper is to determine the rate of loss of online citations used as references in scholarly journals. It also intended to recover the vanished online citations using the Wayback Machine and to calculate the half-life period of online citations. Design/methodology/approach – The study selected three journals published by Emerald. All 389 articles published in these three scholarly journals were selected. A total of 15,211 citations were extracted, of which 13,281 were print citations and only 1,930 were online citations. The online citations so extracted were then tested to determine whether they were active or missing on the Web. The W3C Link Checker was used to check the existence of online citations. Online citations that returned an HTTP error message when tested for accessibility were then entered into the search box of the Wayback Machine to recover the vanished online citations. Findings – The study found that only 12.69 percent (1,930 out of 15,211) of citations were online citations, and the percentage of online citations varied from a low of 9.41 in 2011 to a high of 17.52 in 2009. Another notable finding was that 30.98 percent of online citations were not accessible (vanished), while the remaining 69.02 percent were still accessible (active). The HTTP 404 error message – "page not found" – was the overwhelming message encountered, representing 62.98 percent of all HTTP error messages. The Wayback Machine had archived only 48.33 percent of the vanished web pages, leaving 51.67 percent still unavailable. The half-life of online citations increased from 5.40 years to 11.73 years after recovering the vanished online citations. Originality/value – This is a systematic and in-depth study on the recovery of vanished online citations cited in journal articles spanning a period of five years. 
The findings of the study will be helpful to researchers, authors, publishers, and editorial staff in recovering vanishing online citations using the Wayback Machine.
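The recovery step described above, feeding each vanished URL to the Wayback Machine's search box, can also be automated against the Internet Archive's public Availability API. The sketch below assumes that API's documented JSON shape; the function names are my own, and the study itself used the interactive search box rather than a script:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Internet Archive Availability API: returns the closest archived snapshot, if any.
WAYBACK_API = "https://archive.org/wayback/available?url="

def closest_snapshot(api_response):
    """Pull the closest available archived copy out of an Availability API reply.

    Returns the archived URL, or None if the page was never captured.
    """
    snap = api_response.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"]
    return None

def recover(url):
    """Ask the Wayback Machine for an archived copy of a vanished citation URL."""
    with urlopen(WAYBACK_API + quote(url, safe="")) as resp:
        return closest_snapshot(json.load(resp))
```

Running `recover()` over the list of URLs that failed the link check, and counting how many return a snapshot versus None, reproduces the study's recovered/unrecovered split programmatically.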


Author(s):  
Uche Ogbuji ◽  
Mark Baker

If you search for books and other media on the Web, you find Amazon, Wikipedia, and many other resources long before you see any libraries. This is a historical problem: librarians started ahead of the state of the art in database technologies, yet were unable to keep up with mainstream computing developments, including the Web. As a result, libraries are left with extraordinarily rich catalogs in formats unsuited to the Web, which need a lot of work to adapt for it. A first step towards addressing this problem, BIBFRAME is a model developed for representing metadata from libraries and other cultural heritage institutions in linked data form. Libhub is a project building on BIBFRAME to convert traditional library formats, especially MARC/XML, to Web resource pages using BIBFRAME and other vocabulary frameworks. The technology used to implement Libhub transforms MARC/XML into a semi-structured, RDF-like metamodel called Versa, from which various outputs are possible, including data-rich Web pages. The authors developed a pipeline processing technology in Python to address the need for high performance and scalability, as well as a prodigious degree of customization to accommodate a half century of variations and nuances in library cataloging conventions. The heart of this pipelining system is the open-source project pybibframe, and the main way for non-technical librarians to customize the transform is a pattern microlanguage called marcpatterns.py. 
Using marcpatterns.py recipes specialized for the first Libhub participant, Denver Public Library, and further specialized from patterns common among public libraries, the first prerelease of linked data Web pages has already demonstrated the dramatic improvement in visibility for the library, and in quality, curated content for the Web, made possible through the adaptive, semi-structured transform from notoriously abstruse library catalog formats. This paper discusses an unorthodox approach to structured and heuristics-based transformation of a large corpus of XML in a difficult format that poorly serves the richness of its content. It covers some of the pragmatic choices made by the developers of the system, who happen to be pioneering advocates of the Web, markup, and the standards around these, but who had to subordinate purity to the urgent need to expose dark cultural heritage data at scale, in difficult circumstances, with a small development and maintenance team. This is a case study of how proper knowledge of XML and its related standards must combine with agile techniques and "worse-is-better" concessions to solve a stubborn problem in extracting value from cultural heritage markup.
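marcpatterns.py itself is a microlanguage not reproduced in this abstract, so the following is only an illustrative stand-in for the kind of rule it expresses: a plain-Python table mapping a few standard MARC field/subfield pairs (245$a title, 100$a main author name, 856$u electronic location) to Versa-style (origin, relationship, target) links. The function and property names here are invented for the sketch and are not pybibframe's API:

```python
# Pattern table: (MARC field tag, subfield code) -> output property.
# 245$a, 100$a, and 856$u are standard MARC 21 bibliographic conventions.
PATTERNS = {
    ("245", "a"): "title",
    ("100", "a"): "creator",
    ("856", "u"): "link",
}

def transform(record_id, fields):
    """Turn flat (field, subfield) -> value pairs from one MARC record
    into Versa-style links: (origin, relationship, target) triples.

    Fields with no matching pattern are silently dropped, mirroring how
    a pattern-driven transform only emits what its recipes cover.
    """
    links = []
    for (field, subfield), value in fields:
        prop = PATTERNS.get((field, subfield))
        if prop:
            links.append((record_id, prop, value))
    return links
```

The real system layers recipes, with library-specific rules (such as Denver Public Library's) overriding shared public-library patterns, which in this sketch would amount to merging a more specific dict over PATTERNS.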

