Use of Semantic Segmentation for Increasing the Throughput of Digitisation Workflows for Natural History Collections

Author(s):  
Abraham Nieva de la Hidalga ◽  
David Owen ◽  
Irena Spacic ◽  
Paul Rosin ◽  
Xianfang Sun

The need to increase global accessibility to specimens, while preserving the physical specimens by reducing their handling, motivates digitisation. Digitisation of natural history collections has evolved from recording specimens' catalogue data to including digital images and 3D models of specimens. The sheer size of the collections requires high-throughput digitisation workflows, as well as novel acquisition systems, image standardisation, curation, preservation, and publishing. For instance, herbarium sheet digitisation workflows (and fast digitisation stations) can digitise up to 6,000 specimens per day, and operating digitisation stations in parallel can increase that capacity. However, other activities of digitisation workflows still rely on manual processes, which throttle the speed with which images can be published. Image quality control and information extraction from images can benefit from greater automation. This presentation explores the advantages of applying semantic segmentation (Fig. 1) to improve and automate image quality management (IQM) and information extraction from images (IEFI) of physical specimens. Two experiments were designed to determine whether IQM and IEFI activities can be improved by using segments instead of full images. The time for segmenting full images needs to be considered for both IQM and IEFI: a semantic segmentation method developed by the Natural History Museum (Durrant and Livermore 2018), adapted for segmenting herbarium sheet images (Dillen et al. 2019), can process 50 images in 12 minutes.

The IQM experiments evaluated the application of three quality attributes to full images and to image segments: colourfulness (Fig. 2), contrast (Fig. 3) and sharpness (Fig. 4). Evaluating colourfulness is an alternative to colour quantization algorithms such as RMSE and Delta E (Hasler and Suesstrunk 2003, Palus 2006); the method produces a value indicating whether the image degrades after processing. Contrast measures the difference in luminance or colour that makes an object distinguishable, and is determined by the difference in colour and brightness between the object and other objects within the same field of view (Matkovic et al. 2005, Präkel 2010). Sharpness encompasses the concepts of resolution and acutance (Bahrami and Kot 2014, Präkel 2010) and influences specimen appearance and the readability of information from labels and barcodes. Evaluating the criteria on 56 barcode and 50 colour chart segments extracted from 50 images took 34 minutes (8 minutes for the barcodes and 26 minutes for the colour charts); the evaluation on the corresponding full images took 100 minutes. The processing of individual segments and full images provided results equivalent to subjective manual quality management.

The IEFI experiments compared the performance of four optical character recognition (OCR) programs applied to full images (Drinkwater et al. 2014) against individual segments. The four OCR programs evaluated were Tesseract 4.X, Tesseract 3.X, ABBYY FineReader Engine 12, and Microsoft OneNote 2013. The test was based on a set of 250 herbarium sheet images and 1,837 segments extracted from them. The results show an average OCR speed-up of 49% when using segmented images compared to processing times for full images (Table 1). Similarly, there was an average increase of 13% in line correctness (information from lines is ordered and not fragmented; Fig. 5, Table 2).
Additionally, the results are useful for comparing the four OCR programs, with Tesseract 3.X offering the shortest processing time and Tesseract 4.X achieving the highest line-accuracy scores (including handwritten text recognition). The results suggest that IEFI could be improved by performing OCR on segments rather than whole images, leading to faster processing and more accurate outputs. The findings support the feasibility of further automation of digitisation workflows for natural history collections. In addition to increasing the accuracy and speed of IQM and IEFI activities, the explored approaches can be packaged and published, enabling automated quality management and information extraction to be offered as a service, taking advantage of cloud platforms and workflow engines.
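For illustration, the three quality attributes above can be computed per segment in a few lines. The following is a minimal sketch (assuming NumPy and OpenCV) using common formulations: the Hasler and Suesstrunk colourfulness measure, RMS contrast, and variance-of-Laplacian sharpness. These are illustrative proxies rather than the exact implementations used in the experiments, and the input file name is hypothetical.

import cv2
import numpy as np

def colourfulness(bgr):
    # Hasler & Suesstrunk (2003): combine spread and magnitude of opponent colours
    b, g, r = cv2.split(bgr.astype(np.float64))
    rg, yb = r - g, 0.5 * (r + g) - b
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

def rms_contrast(bgr):
    # standard deviation of luminance as a simple contrast measure
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).std()

def sharpness(bgr):
    # variance of the Laplacian: low values indicate blurred labels or barcodes
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

segment = cv2.imread("colour_chart_segment.png")  # hypothetical segment crop
print(colourfulness(segment), rms_contrast(segment), sharpness(segment))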

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Bin Huang ◽  
Jiaqi Lin ◽  
Jinming Liu ◽  
Jie Chen ◽  
Jiemin Zhang ◽  
...  

Separating printed or handwritten characters from a noisy background is valuable for many applications, including automatic scoring of test papers. The complex structure of Chinese characters makes this difficult, because fine details and overall structure are easily lost in the reconstructed characters. This paper proposes a method for separating Chinese characters based on a generative adversarial network (GAN). We used ESRGAN as the basic network structure and applied dilated convolutions and a novel loss function to improve the quality of the reconstructed characters. Four popular Chinese fonts (Hei, Song, Kai, and Imitation Song) were tested on a real data collection, and the proposed design was compared with other semantic segmentation approaches. The experimental results showed that the proposed method effectively separates Chinese characters from noisy backgrounds. In particular, our method achieves better results in terms of Intersection over Union (IoU) and optical character recognition (OCR) accuracy.
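As a point of reference, the IoU evaluation mentioned above can be sketched as follows (NumPy assumed); the binarisation threshold is an illustrative assumption, not a detail taken from the paper.

import numpy as np

def mask_iou(pred, target, thresh=0.5):
    # binarise the predicted character map and compare it with the ground-truth mask
    p = np.asarray(pred) >= thresh
    t = np.asarray(target).astype(bool)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0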


Author(s):  
Jane Courtney

For Visually Impaired People (VIPs), the ability to convert text to sound can mean a new level of independence or the simple joy of a good book. With significant advances in Optical Character Recognition (OCR) in recent years, a number of reading aids are appearing on the market. These reading aids convert images captured by a camera to text, which can then be read aloud. However, all of these reading aids suffer from a key issue – the user must be able to visually target the text and capture an image of sufficient quality for the OCR algorithm to function – no small task for VIPs. In this work, a Sound-Emitting Document Image Quality Assessment metric (SEDIQA) is proposed, which allows the user to hear the quality of the text image and automatically captures the best image for OCR accuracy. This work also includes testing of OCR performance against image degradations to identify the most significant contributors to accuracy reduction. The proposed No-Reference Image Quality Assessor (NR-IQA) is validated alongside established NR-IQAs, and this work includes insights into the performance of these NR-IQAs on document images.
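To make the idea concrete, a no-reference quality score can be mapped to an audible pitch so that the user hears when the camera has the page in focus. The sketch below (OpenCV and NumPy assumed) uses a variance-of-Laplacian focus measure and a simple linear pitch mapping; both are illustrative assumptions, not the SEDIQA metric itself.

import cv2
import numpy as np

def focus_score(frame_bgr):
    # higher variance of the Laplacian = sharper, more OCR-friendly image
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def quality_to_pitch(score, lo_hz=200.0, hi_hz=2000.0, max_score=1500.0):
    # map the score to a tone: sharper image, higher pitch
    frac = np.clip(score / max_score, 0.0, 1.0)
    return lo_hz + frac * (hi_hz - lo_hz)

frame = cv2.imread("captured_page.png")  # hypothetical camera frame
print(f"emit tone at {quality_to_pitch(focus_score(frame)):.0f} Hz")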


Author(s):  
L. Venkata Subramaniam ◽  
Shourya Roy

The importance of text mining applications is growing proportionally with the exponential growth of electronic text. Along with the growth of the internet, many other sources of electronic text have become popular. With the increasing penetration of the internet, many forms of communication and interaction, such as email, chat, newsgroups, blogs, discussion groups, scraps, etc., have become increasingly popular. These generate huge amounts of noisy text data every day. Apart from these, the other big contributors to the pool of electronic text documents are call centres and customer relationship management organizations (in the form of call logs, call transcriptions, problem tickets, complaint emails, etc.), electronic text generated by the Optical Character Recognition (OCR) process from handwritten and printed documents, and mobile text such as Short Message Service (SMS) messages. Though the nature of each of these document types is different, there is a common thread between all of them: the presence of noise. An example of information extraction is the extraction of instances of corporate mergers, more formally MergerBetween(company1, company2, date), from an online news sentence such as: “Yesterday, New York-based Foo Inc. announced their acquisition of Bar Corp.” Another is the extraction of opinions, more formally Opinion(product1, good), from a blog post such as: “I absolutely liked the texture of SheetK quilts.” At a superficial level, there are two ways to extract information from noisy text. The first is to clean the text by removing noise and then apply existing state-of-the-art information extraction techniques; therein lies the importance of techniques for automatically correcting noisy text. In this chapter, we first review some work in the area of noisy text correction. The second approach is to devise extraction techniques that are robust to noise. Later in this chapter, we will see how the task of information extraction is affected by noise.
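As a small illustration of the first approach, noisy tokens can be corrected against a domain vocabulary before extraction. The sketch below uses only the Python standard library; the vocabulary and similarity cutoff are illustrative assumptions, not techniques taken from the chapter.

import difflib

VOCAB = {"acquisition", "announced", "merger", "corporation", "texture", "quilts"}

def clean_tokens(noisy_text, cutoff=0.8):
    # replace each token with its closest vocabulary entry, if one is close enough
    cleaned = []
    for token in noisy_text.lower().split():
        match = difflib.get_close_matches(token, VOCAB, n=1, cutoff=cutoff)
        cleaned.append(match[0] if match else token)
    return " ".join(cleaned)

print(clean_tokens("Foo Inc announced their acquisiton of Bar Corp"))
# "foo inc announced their acquisition of bar corp"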


Author(s):  
Priti P. Rege ◽  
Shaheera Akhter

Text separation in document image analysis is an important preprocessing step before executing an optical character recognition (OCR) task and is necessary for improving the accuracy of an OCR system. Traditionally, separating text from a document has relied on feature extraction processes that require handcrafted features. However, deep learning-based methods are excellent feature extractors that learn features from the training data automatically. Deep learning gives state-of-the-art results on various computer vision tasks, including image classification, segmentation, image captioning, object detection, and recognition. This chapter compares various traditional as well as deep-learning techniques and uses a semantic segmentation method for separating text from Devanagari document images using U-Net and ResU-Net models. These models are further fine-tuned with transfer learning to obtain more precise results. The final results show that deep learning methods give more accurate results than conventional image processing methods for Devanagari text extraction.
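A minimal sketch of the segmentation idea (assuming PyTorch) is shown below: a small U-Net-style encoder-decoder that outputs a per-pixel text probability map. The layer sizes and depth are illustrative assumptions, not the authors' exact U-Net or ResU-Net configuration.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)           # grayscale document image in
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)          # 64 = upsampled + skip features
        self.head = nn.Conv2d(32, 1, 1)         # per-pixel text logit

    def forward(self, x):
        e1 = self.enc1(x)                            # full-resolution features
        e2 = self.enc2(self.pool(e1))                # downsampled features
        d1 = self.up(e2)                             # back to full resolution
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # skip connection
        return torch.sigmoid(self.head(d1))          # probability of "text" per pixel

# mask = TinyUNet()(torch.rand(1, 1, 256, 256))  # e.g. a 256x256 Devanagari page crop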


Author(s):  
Geoffrey Ower ◽  
Dmitry Mozzherin

Being able to quickly find and access original species descriptions is essential for efficiently conducting taxonomic research. Linking scientific name queries to the original species description is challenging and requires taxonomic intelligence, because on average there are an estimated three scientific names associated with each currently accepted species, and many historical scientific names have fallen into disuse from being synonymized or forgotten. Additionally, non-standard usage of journal abbreviations can make it difficult to automatically disambiguate bibliographic citations and ascribe them to the correct publication. The largest open access resource for biodiversity literature is the Biodiversity Heritage Library (BHL), which was built by a consortium of natural history institutions and contains over 200,000 digitized volumes of natural history publications spanning hundreds of years of biological research. Catalogue of Life (CoL) is the largest aggregator of scientific names globally, publishing an annual checklist of currently accepted scientific names and their historical synonyms. TaxonWorks is an integrative web-based workbench that facilitates collaboration on biodiversity informatics research between scientists and developers. The Global Names project has been collaborating with BHL, TaxonWorks, and CoL to develop a Global Names Index that links all of these services together by finding scientific names in BHL and using the taxonomic intelligence provided by CoL to conveniently link directly to the page referenced in BHL. The Global Names Index is continuously updated as metadata is improved and digitization technologies advance to provide more accurate optical character recognition (OCR) of scanned texts. We developed an open source tool, “BHLnames,” and launched a RESTful application programming interface (API) service with a freely available JavaScript widget that can be embedded on any website to link scientific names to literature citations in BHL. If no bibliographic citation is provided, the widget will link to the oldest name usage in BHL, which is often the original species description. The BHLnames widget can also be used to browse all mentions of a scientific name and its synonyms in BHL, which could make the tool more broadly useful for studying the natural history of any species.
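For illustration, a client of such a name-to-literature service might look like the sketch below (Python with the requests library). The base URL, endpoint path, parameters, and response fields are hypothetical placeholders, not the documented BHLnames API.

import requests

BASE_URL = "https://example.org/bhlnames/api"  # placeholder, not the real service

def oldest_bhl_usage(scientific_name):
    # ask the (hypothetical) service for BHL pages that mention the name
    resp = requests.get(f"{BASE_URL}/name_refs",
                        params={"name": scientific_name}, timeout=30)
    resp.raise_for_status()
    refs = resp.json()  # assumed: list of records with "year" and "page_url" fields
    return min(refs, key=lambda r: r.get("year", 9999)) if refs else None

print(oldest_bhl_usage("Parus major"))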


2019 ◽  
Vol 18 (6) ◽  
pp. 1381-1406 ◽  
Author(s):  
Lukáš Bureš ◽  
Ivan Gruber ◽  
Petr Neduchal ◽  
Miroslav Hlaváč ◽  
Marek Hrúz

An algorithm (divided into multiple modules) for generating images of full-text documents is presented. These images can be used to train, test, and evaluate models for Optical Character Recognition (OCR). The algorithm is modular; individual parts can be changed and tweaked to generate the desired images. A method for obtaining background images of paper from already digitized documents is described. For this, a novel approach based on a Variational AutoEncoder (VAE) was used to train a generative model. These backgrounds enable on-the-fly generation of background images similar to the training ones. The module for printing the text uses large text corpora, a font, and suitable positional and brightness character noise to obtain believable results (for natural-looking aged documents). A few types of page layouts are supported. The system generates a detailed, structured annotation of the synthesized image. Tesseract OCR is used to compare the real-world images to the generated images. The recognition rates are very similar, indicating the proper appearance of the synthetic images; moreover, the errors made by the OCR system in both cases are very similar. From the generated images, a fully convolutional encoder-decoder neural network for semantic segmentation of individual characters was trained. With this architecture, a recognition accuracy of 99.28% is reached on a test set of synthetic documents.
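A tiny sketch of the printing module's idea follows (assuming Pillow and NumPy): characters are stamped onto a paper background with small positional and brightness jitter. The font path and noise ranges are illustrative assumptions, not the parameters used in the paper.

import numpy as np
from PIL import ImageDraw, ImageFont

def print_noisy_line(background, text, xy, font_path="DejaVuSans.ttf", size=28):
    # background: PIL image of scanned paper (e.g. sampled from the VAE model)
    page = background.convert("L").copy()
    draw = ImageDraw.Draw(page)
    font = ImageFont.truetype(font_path, size)
    x, y = xy
    for ch in text:
        jx, jy = np.random.randint(-1, 2, size=2)              # positional noise
        ink = int(np.clip(np.random.normal(40, 15), 0, 120))   # brightness noise
        draw.text((x + jx, y + jy), ch, fill=ink, font=font)
        x += draw.textlength(ch, font=font)                    # advance the pen
    return page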


Author(s):  
Jiapeng Wang ◽  
Tianwei Wang ◽  
Guozhi Tang ◽  
Lianwen Jin ◽  
Weihong Ma ◽  
...  

Visual information extraction (VIE) has attracted increasing attention in recent years. Existing methods usually first organize optical character recognition (OCR) results into plain text and then utilize token-level category annotations as supervision to train a sequence tagging model. However, this incurs high annotation costs and may be exposed to label confusion, and OCR errors will also significantly affect the final performance. In this paper, we propose a unified weakly-supervised learning framework called TCPNet (Tag, Copy or Predict Network), which introduces 1) an efficient encoder to simultaneously model the semantic and layout information in 2D OCR results; 2) a weakly-supervised training method that utilizes only sequence-level supervision; and 3) a flexible and switchable decoder which contains two inference modes: one (Copy or Predict Mode) outputs key information sequences of different categories by copying a token from the input or predicting one at each time step, and the other (Tag Mode) directly tags the input sequence in a single forward pass. Our method achieves new state-of-the-art performance on several public benchmarks, demonstrating its effectiveness.
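To make the decoder idea concrete, a single "copy or predict" step can be sketched in the style of pointer-generator decoders, as below (PyTorch assumed). The gating formulation, dimensions, and projections are illustrative simplifications, not the TCPNet implementation.

import torch
import torch.nn.functional as F

def copy_or_predict_step(dec_state, enc_states, vocab_proj, gate_proj,
                         src_token_ids, vocab_size):
    # dec_state: (d,) decoder state; enc_states: (src_len, d) encoded OCR tokens
    attn = F.softmax(enc_states @ dec_state, dim=0)               # copy attention
    copy_dist = torch.zeros(vocab_size).scatter_add(0, src_token_ids, attn)
    gen_dist = F.softmax(vocab_proj(dec_state), dim=0)            # predict from vocab
    p_copy = torch.sigmoid(gate_proj(dec_state))                  # copy vs. predict gate
    return p_copy * copy_dist + (1 - p_copy) * gen_dist           # mixed distribution

# example wiring with hypothetical sizes:
# d, V, L = 256, 30000, 7
# dist = copy_or_predict_step(torch.randn(d), torch.randn(L, d),
#                             torch.nn.Linear(d, V), torch.nn.Linear(d, 1),
#                             torch.randint(0, V, (L,)), V)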

