Information Retrieval Using XQuery Processing Techniques

2011 ◽  
Vol 3 (1) ◽  
pp. 50-58 ◽  
Author(s):  
E. J. Thomson Fredrick ◽  
G. Radhamani
2008 ◽  
Vol 5 (1) ◽  
pp. 17-36 ◽  
Author(s):  
Margaret R. Garnsey ◽  
Ingrid E. Fisher

ABSTRACT: Accounting language evolves as the transactions and organizations for which it provides guidance change. We provide a preliminary analysis of terms used in official accounting pronouncements and in annual corporate financial statements. Initial results show that statistical natural language processing techniques provide a means of identifying new terms as they enter the lexicon. These techniques should be valuable in deriving a complete accounting lexicon, as well as in constructing and maintaining an accounting thesaurus to support information retrieval.
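As a toy illustration of the statistical approach the abstract describes, the sketch below flags terms whose relative frequency jumps between an older and a newer corpus. The corpora, threshold values, and function names are invented for this example, not taken from the paper.

```python
# Hypothetical sketch: flag candidate new terms by comparing relative term
# frequencies between an older and a newer corpus. All data are illustrative.
from collections import Counter

def relative_freqs(docs):
    """Token counts normalised by total corpus size."""
    counts = Counter(tok for doc in docs for tok in doc.lower().split())
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def new_terms(old_docs, new_docs, ratio=5.0, min_freq=1e-3):
    """Terms markedly more frequent in the new corpus than in the old one."""
    old, new = relative_freqs(old_docs), relative_freqs(new_docs)
    return sorted(t for t, f in new.items()
                  if f >= min_freq and f / old.get(t, 1e-9) >= ratio)

old = ["revenue recognition and depreciation", "depreciation of fixed assets"]
new = ["securitization of receivables", "revenue from securitization deals"]
print(new_terms(old, new))  # 'securitization' is among the flagged candidates
```

A real lexicon study would of course use far larger corpora and significance testing rather than a fixed frequency-ratio cutoff.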


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0254937
Author(s):  
Serhad Sarica ◽  
Jianxi Luo

There are increasing applications of natural language processing techniques for information retrieval, indexing, topic modelling and text classification in engineering contexts. A standard component of such tasks is the removal of stopwords, the uninformative components of the data. While researchers use readily available stopword lists derived from non-technical resources, the technical jargon of engineering fields contains its own highly frequent and uninformative words, and no standard stopword list exists for technical language processing applications. Here we address this gap by rigorously identifying generic, insignificant, uninformative stopwords in engineering texts beyond the stopwords in general texts, based on the synthesis of alternative statistical measures such as term frequency, inverse document frequency, and entropy, and by curating a stopword dataset ready for technical language processing applications.
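A minimal sketch of the kind of statistical screening the abstract describes, assuming a simple bag-of-words setup. This is not the authors' released code or dataset, and the particular scoring combination (entropy minus IDF) is an illustrative choice.

```python
# Illustrative stopword scoring: a term is stopword-like when it appears in
# many documents (low IDF) and is spread evenly across them (high entropy).
import math
from collections import Counter

def stopword_scores(docs):
    """Score each term: higher = more stopword-like (common and evenly spread)."""
    per_doc = [Counter(doc.lower().split()) for doc in docs]
    n_docs = len(per_doc)
    df = Counter()                                      # document frequency
    for counts in per_doc:
        df.update(counts.keys())
    scores = {}
    for term, d in df.items():
        idf = math.log(n_docs / d)                      # low for frequent terms
        freqs = [c[term] for c in per_doc if c[term]]
        total = sum(freqs)
        # entropy of the term's occurrence distribution over documents:
        # high when spread evenly, zero when concentrated in one document
        entropy = -sum(f / total * math.log(f / total) for f in freqs)
        scores[term] = entropy - idf                    # illustrative combination
    return scores

docs = ["the design of the system", "the method improves the design",
        "design of turbine blades", "the turbine method"]
scores = stopword_scores(docs)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[:3])
```

In this toy corpus the evenly spread domain term "design" actually scores higher than "the", which mirrors the gap the abstract identifies: engineering text has its own high-frequency, uninformative vocabulary that general stopword lists miss.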


2021 ◽  
Vol 20 (3) ◽  
pp. 353-389
Author(s):  
Anita Ramalingam ◽  
Subalalitha Chinnaudayar Navaneethakrish

Tamil literature contains many valuable thoughts that can help the human community lead a successful and happy life. Tamil literary works are abundantly available and searched on the World Wide Web (WWW), but existing search systems follow a keyword-matching strategy that fails to satisfy user needs. This necessitates a focused Information Retrieval system that semantically analyses Tamil literary text, which would eventually improve search performance. This paper proposes a novel Information Retrieval framework that uses discourse processing techniques to aid the semantic analysis and representation of Tamil literary text. The proposed framework was tested using two ancient literary works, the Thirukkural and the Naladiyar, written around 300 BCE. The Thirukkural comprises 1330 couplets, each 7 words long, while the Naladiyar consists of 400 quatrains, each 15 words long. The proposed system, tested with all 1330 Thirukkural couplets and 400 Naladiyar quatrains, achieved a mean average precision (MAP) score of 89%. Its performance was compared with Google Tamil search and with a keyword-based search, a simplified baseline version of the proposed framework. Google Tamil search achieved a MAP score of 56% and the keyword-based method a MAP score of 62%, which shows that discourse processing techniques improve the search performance of an Information Retrieval system.
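The MAP scores reported above follow the standard mean average precision metric; the sketch below shows how it is computed. The queries and relevance judgements here are invented toy data, not from the paper's evaluation.

```python
def average_precision(ranked, relevant):
    """AP for one query: sum of precision@k at each rank k where a relevant
    document appears, divided by the total number of relevant documents."""
    hits, precision_sum = 0, 0.0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(results):
    """MAP over all queries; `results` maps query -> (ranked list, relevant set)."""
    aps = [average_precision(ranked, rel) for ranked, rel in results.values()]
    return sum(aps) / len(aps)

# Invented toy run: two queries with hand-picked relevance judgements.
results = {
    "q1": (["d1", "d3", "d2"], {"d1", "d2"}),  # AP = (1/1 + 2/3) / 2 = 5/6
    "q2": (["d4", "d5"], {"d5"}),              # AP = (1/2) / 1 = 1/2
}
print(mean_average_precision(results))  # (5/6 + 1/2) / 2 = 2/3
```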


Author(s):  
Richard E. Hartman ◽  
Roberta S. Hartman ◽  
Peter L. Ramos

We have long felt that, for several reasons, some form of electronic information retrieval would be more desirable than conventional photographic methods in a high-vacuum electron microscope. The most obvious of these is that with electronic data retrieval the major source of gas load is removed from the instrument. An equally important reason is that, if any subsequent analysis of the data is to be made, a continuous record on magnetic tape yields a much larger quantity of data, and in a form far more satisfactory for subsequent processing.


Author(s):  
R. C. Gonzalez

Interest in digital image processing techniques dates back to the early 1920s, when digitized pictures of world news events were first transmitted by submarine cable between New York and London. Applications of digital image processing concepts, however, did not become widespread until the middle 1960s, when third-generation digital computers began to offer the speed and storage capabilities required for practical implementation of image processing algorithms. Since then, this area has experienced vigorous growth, having been a subject of interdisciplinary research in fields ranging from engineering and computer science to biology, chemistry, and medicine.


Author(s):  
S. Hasegawa ◽  
T. Kawasaki ◽  
J. Endo ◽  
M. Futamoto ◽  
A. Tonomura

Interference electron microscopy enables us to record the phase distribution of an electron wave on a hologram. The distribution is visualized as a fringe pattern in a micrograph by optical reconstruction. The phase is affected by the electromagnetic potentials, both scalar and vector. Therefore, the electric and magnetic fields can be deduced from the recorded phase. This study analyzes a leakage magnetic field from CoCr perpendicular magnetic recording media. Since one contour fringe interval corresponds to a magnetic flux of Φ₀ (= h/e ≈ 4×10⁻¹⁵ Wb), we can quantitatively measure the field by counting the number of fringes. Moreover, by using phase-difference amplification techniques, the sensitivity of magnetic field detection can be improved by a factor of 30, which allows the drawing of a Φ₀/30 fringe. This sensitivity, however, is insufficient for quantitative analysis of very weak magnetic fields such as high-density magnetic recordings. For this reason we have adopted “fringe scanning interferometry” using digital image processing techniques at the optical reconstruction stage. This method enables us to obtain subfringe information recorded in the interference pattern.
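The fringe-counting arithmetic in the abstract can be sketched in a few lines. The constant below is the flux quantum h/e mentioned in the text; the fringe counts and the function name are illustrative assumptions, not from the paper.

```python
# Back-of-the-envelope sketch of the fringe-counting measurement: each contour
# fringe interval corresponds to one flux quantum h/e, and N-fold
# phase-difference amplification shrinks that to (h/e)/N per fringe.
PHI0 = 4.14e-15  # flux quantum h/e in webers (the abstract rounds to 4e-15 Wb)

def enclosed_flux(n_fringes, amplification=1):
    """Magnetic flux threaded by a contour crossed by `n_fringes` fringes,
    optionally on a phase-difference-amplified interferogram."""
    return n_fringes * PHI0 / amplification

print(enclosed_flux(5))                    # 5 plain fringes: 5 flux quanta
print(enclosed_flux(5, amplification=30))  # same count on a 30x-amplified map
```

With 30-fold amplification, the same fringe count thus resolves a field 30 times weaker, which is the sensitivity gain the abstract quotes.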


Author(s):  
U. Aebi ◽  
L.E. Buhle ◽  
W.E. Fowler

Many important supramolecular structures such as filaments, microtubules, virus capsids, and certain membrane proteins and bacterial cell walls exist as ordered polymers or two-dimensional crystalline arrays in vivo. In several instances it has been possible to induce soluble proteins to form ordered polymers or two-dimensional crystalline arrays in vitro. In both cases, a combination of electron microscopy of negatively stained specimens with analog or digital image processing techniques has proven extremely useful for elucidating the molecular and supramolecular organization of the constituent proteins. However, from the reconstructed stain-exclusion patterns it is often difficult to identify distinct stain-excluding regions with specific protein subunits. It has been demonstrated that in some cases this ambiguity can be resolved by combining stoichiometric labeling of the ordered structures with subunit-specific antibody fragments (e.g., Fab) and image processing of the electron micrographs recorded from labeled and unlabeled structures.

