The Study on the Applicability of the Aho-Corasick Algorithm in Identifying Tests' Validity

2012 ◽  
Vol 8 (1) ◽  
Author(s):  
Ed O. Omictin III ◽  
Rodrigo Gante Jr ◽  
Robby Rosa P. Villaflores ◽  
Ma. Bryne Catherine M. Marchan ◽  
Rodolfo T. Noblefranca Jr

The Aho-Corasick Algorithm (ACA) is a dictionary-matching algorithm that locates all elements of a finite set of strings within an input text. It matches all patterns "at once", so its complexity is linear in the total length of the patterns plus the length of the searched text plus the number of output matches. This paper discusses the applicability of the Aho-Corasick algorithm to identifying test validity using the standard Guidelines in Evaluating Tests. A proposed system, Quiz-Zone, was developed in order to evaluate and test the applicability of the algorithm. Quiz-Zone allows the user to create exams whose validity is then checked, choosing among five exam types: Matching Type, Multiple Choice, Essay, True or False, and Short Answer. The researchers found that there are some rules for identifying test validity to which ACA cannot be applied. Keywords: Aho-Corasick algorithm, string-matching algorithm, test validity
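The single-pass behavior described above can be illustrated with a minimal Aho-Corasick sketch (not the Quiz-Zone implementation, which is not published here): a trie over the patterns plus BFS-computed failure links lets one scan of the text report every dictionary match.

```python
from collections import deque

class AhoCorasick:
    """Minimal Aho-Corasick automaton: a trie over the patterns with
    failure links, so all patterns are matched in one pass over the text."""

    def __init__(self, patterns):
        self.goto = [{}]   # goto[state][char] -> next state
        self.fail = [0]    # failure link per state
        self.out = [[]]    # patterns ending at (or via failure, through) each state
        for p in patterns:
            self._insert(p)
        self._build_failure_links()

    def _insert(self, pattern):
        state = 0
        for ch in pattern:
            if ch not in self.goto[state]:
                self.goto.append({})
                self.fail.append(0)
                self.out.append([])
                self.goto[state][ch] = len(self.goto) - 1
            state = self.goto[state][ch]
        self.out[state].append(pattern)

    def _build_failure_links(self):
        # BFS from the root: depth-1 states keep failure link 0.
        queue = deque(self.goto[0].values())
        while queue:
            state = queue.popleft()
            for ch, nxt in self.goto[state].items():
                queue.append(nxt)
                f = self.fail[state]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[nxt] = self.goto[f].get(ch, 0)
                # Inherit outputs reachable through the failure link.
                self.out[nxt] += self.out[self.fail[nxt]]

    def search(self, text):
        """Yield (start_index, pattern) for every match in text."""
        state = 0
        for i, ch in enumerate(text):
            while state and ch not in self.goto[state]:
                state = self.fail[state]
            state = self.goto[state].get(ch, 0)
            for p in self.out[state]:
                yield (i - len(p) + 1, p)

ac = AhoCorasick(["he", "she", "his", "hers"])
matches = list(ac.search("ushers"))
# matches: [(1, 'she'), (2, 'he'), (2, 'hers')]
```

Note that "she", "he", and "hers" are all reported from the same scan, which is the "at once" property the abstract refers to.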

2015 ◽  
Vol 27 (2) ◽  
pp. 143-156 ◽  
Author(s):  
TANVER ATHAR ◽  
CARL BARTON ◽  
WIDMER BLAND ◽  
JIA GAO ◽  
COSTAS S. ILIOPOULOS ◽  
...  

Circular string matching is a problem which naturally arises in many contexts. It consists of finding all occurrences of the rotations of a pattern of length m in a text of length n. There exist optimal worst- and average-case algorithms for circular string matching. Here, we present a suboptimal average-case algorithm for circular string matching requiring time $\mathcal{O}$(n) and space $\mathcal{O}$(m). The importance of our contribution is underlined by the fact that the proposed algorithm can be easily adapted to deal with circular dictionary matching. In particular, we show how the circular dictionary-matching problem can be solved in average-case time $\mathcal{O}$(n + M) and space $\mathcal{O}$(M), where M is the total length of the dictionary patterns, assuming that the shortest pattern is sufficiently long. Moreover, the presented average-case algorithms and other worst-case approaches were also implemented. Experimental results, using real and synthetic data, demonstrate that the implementations of the presented algorithms can accelerate the computations by more than a factor of two compared to the corresponding implementations of other approaches.


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Anis Zouaghi ◽  
Mounir Zrigui ◽  
Georges Antoniadis ◽  
Laroussi Merhbene

We propose a new approach for determining the adequate sense of Arabic words. To this end, we propose an algorithm based on information-retrieval measures to identify the context of use that is closest to the sentence containing the word to be disambiguated. The contexts of use are sets of sentences, each indicating a particular sense of the ambiguous word. These contexts are generated using the words that define the senses of the ambiguous words, an exact string-matching algorithm, and the corpus. We use measures employed in the domain of information retrieval (Harman, Croft, and Okapi), combined with the Lesk algorithm, to assign the correct sense among those proposed.
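The "closest context of use" idea can be sketched with a simplified Lesk-style overlap score. This is an illustrative reduction, not the paper's actual combination of the Harman, Croft, and Okapi measures; the sense labels and contexts below are invented for the example.

```python
def lesk_closest_sense(sentence, contexts):
    """Simplified Lesk-style disambiguation: pick the sense whose context
    of use shares the most words with the sentence containing the
    ambiguous word. `contexts` maps sense label -> context-of-use text.
    (Illustrative only; the paper combines IR measures with Lesk.)"""
    sent_words = set(sentence.lower().split())
    best_sense, best_score = None, -1
    for sense, context in contexts.items():
        score = len(sent_words & set(context.lower().split()))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# Hypothetical English example for readability (the paper targets Arabic).
contexts = {
    "financial": "the bank approved the loan and deposit",
    "river": "the boat drifted toward the river bank shore",
}
lesk_closest_sense("she opened a deposit account at the bank", contexts)
# -> "financial"
```

In the paper's setting the overlap score would be replaced by the combined information-retrieval measures, but the selection step, keeping the highest-scoring context of use, is the same.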

