lexical analyzer
Recently Published Documents


TOTAL DOCUMENTS: 35 (five years: 0)
H-INDEX: 3 (five years: 0)

2020 ◽ Vol 12 (15) ◽ pp. 6153
Author(s): Juhyun Lee, Jiho Kang, Sangsung Park, Dongsik Jang, Junseok Lee

This paper proposes a multi-class classification model for technology evaluation (TE) using patent documents. TE is defined as converting technology quality into its present value; it supports efficient research and development through intellectual property rights–research & development (IP–R&D) and corporate decision-making. Through IP–R&D, companies build patent portfolios and develop technology management strategies; they protect core patents and use them to cooperate with other companies. As conversion technology has developed rapidly, previous TE methods have become difficult to apply because they relied on expert-based qualitative judgment, whose results cannot easily guarantee objectivity. Many previous studies have proposed patent-data-based evaluation models to address these limitations, but those models can lose contextual information during the preprocessing of bibliographic information and require a lexical analyzer suited to processing the terminology in patents. This study overcomes this limitation with a lexical analyzer built on a deep learning structure. Furthermore, the proposed method uses both quantitative and bibliographic patent information as explanatory variables and classifies the technology into multiple classes. The multi-class classification is conducted by sequentially evaluating the value of a technology; it returns the classes in order, enabling class comparison, and is model-agnostic, so diverse algorithms can be used. We conducted experiments on actual patent data to examine the practical applicability of the proposed methodology. The proposed method classified actual patents into ordered multiple classes, and the objectivity of the results could be guaranteed because the model used the information in the patent specification. Moreover, the model using both quantitative and bibliographic information exhibited higher classification performance than the model using only quantitative information. The proposed model can therefore contribute to the sustainable growth of companies by classifying the value of technology into more detailed categories.
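The abstract does not spell out how the sequential, order-preserving classification works; one common model-agnostic way to obtain ordered classes is to chain binary "does the value exceed class k?" decisions. The sketch below is our own illustration of that idea, with toy threshold rules (the feature name `citations` and the thresholds are invented stand-ins for trained binary classifiers):

```python
# Model-agnostic ordinal (ordered multi-class) classification sketch.
# K ordered classes are obtained from K-1 binary decisions evaluated
# sequentially, so any binary classifier can be plugged in and the
# resulting classes keep their order.

def ordinal_classify(features, binary_classifiers):
    """Return the ordered class index for one patent's feature vector.

    binary_classifiers[k](features) -> True if the technology value
    exceeds class k. Evaluated from the lowest class upward.
    """
    label = 0
    for clf in binary_classifiers:
        if clf(features):
            label += 1      # value exceeds this class boundary
        else:
            break           # ordered: once a boundary fails, stop
    return label

# Toy stand-ins for trained binary classifiers on one quantitative
# feature (hypothetical: a citation count).
classifiers = [
    lambda x: x["citations"] > 5,    # boundary between classes 0 and 1
    lambda x: x["citations"] > 20,   # boundary between classes 1 and 2
    lambda x: x["citations"] > 50,   # boundary between classes 2 and 3
]

print(ordinal_classify({"citations": 30}, classifiers))  # -> 2
```

Because each boundary is an independent binary decision, any algorithm (logistic regression, gradient boosting, a neural network) can fill the classifier slots, which is what makes the scheme model-agnostic.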


Author(s): Joyassree Sen, Bappa Sarkar, Md. Shamim Hossain, Md. Nazrul Islam

This paper deals with the design and development of an expert sentence translation system in which the source language is English and the target language is Bangla. The implemented system determines the relationships among different forms of English and Bengali sentences and makes appropriate correspondences between English and Bengali grammar. A top-down parsing program was developed, and the system works with a dictionary to give the corresponding Bengali meaning. Translation proceeds in three steps. The lexical analyzer reads the English sentence, tokenizes it into words, and stores the information in a stack, using the English-to-Bangla dictionary and word morphology to find lexical information. The parser parses the input sentence, identifies its type, and finds the tense, phrases, clauses, etc. The generator then produces a Bangla sentence equivalent to the given English input, using the output of the lexical analyzer and the parser. The system can translate all kinds of sentences, but its limitation is that it cannot handle semantic and contextual problems.
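The three-step pipeline described above (lexical analyzer, parser, generator) can be sketched as follows. The toy dictionary, the single S-V-O pattern, and the S-O-V reordering rule are our own minimal assumptions; the real system uses a full English-to-Bangla dictionary, word morphology, and many sentence types:

```python
# Minimal sketch of the described three-step translation pipeline.
# The dictionary entries and grammar pattern are hypothetical.

DICTIONARY = {          # word -> (part of speech, Bengali gloss)
    "i":    ("PRON", "আমি"),
    "eat":  ("VERB", "খাই"),
    "rice": ("NOUN", "ভাত"),
}

def lexical_analyzer(sentence):
    """Tokenize the sentence and push (word, pos, gloss) onto a stack."""
    stack = []
    for word in sentence.lower().rstrip(".").split():
        pos, gloss = DICTIONARY[word]
        stack.append((word, pos, gloss))
    return stack

def parser(tokens):
    """Identify a simple present-tense S-V-O sentence (toy grammar)."""
    pattern = tuple(pos for _, pos, _ in tokens)
    return "simple_present" if pattern == ("PRON", "VERB", "NOUN") else "unknown"

def generator(tokens, sentence_type):
    """Reorder English S-V-O into Bengali S-O-V and emit the glosses."""
    if sentence_type == "simple_present":
        subj, verb, obj = tokens
        tokens = [subj, obj, verb]
    return " ".join(gloss for _, _, gloss in tokens)

tokens = lexical_analyzer("I eat rice.")
print(generator(tokens, parser(tokens)))  # আমি ভাত খাই
```

The reordering step reflects the grammatical correspondence the paper mentions: English places the verb before the object, while Bengali places it after.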


2019 ◽ Vol 8 (2) ◽ pp. 119-128
Author(s): Takudzwa Fadziso

In cognitive science, human understanding of language starts with recognition; without this phase, understanding language becomes a very cumbersome task. The task of the lexical analyzer is to read the input characters, group them into lexemes, and produce as output a sequence of tokens. Lexical analysis is best described as tokenization: converting a sequence of characters (a program) into tokens with identifiable meanings. This study looks at the various terms related to lexical structure and purpose, and at how they are applied to get the required result. Lexical analysis offers researchers an idea of the structural aspect of a computer language and its semantic content. The work also discusses the advantages and disadvantages of lexical analysis.
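The character-to-token conversion described above can be shown concretely. This is a generic sketch, not code from the article; the token classes (NUMBER, IDENT, OP) are illustrative choices:

```python
import re

# Minimal tokenizer sketch: group input characters into lexemes and
# emit (token type, lexeme) pairs. Whitespace is matched but discarded;
# characters matching no class are simply ignored in this sketch.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Convert a character sequence into a sequence of tokens."""
    tokens = []
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("count = count + 1"))
# [('IDENT', 'count'), ('OP', '='), ('IDENT', 'count'), ('OP', '+'), ('NUMBER', '1')]
```

Each token carries an identifiable meaning (its class) plus the lexeme itself, which is exactly the output a parser consumes next.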


2019 ◽ Vol 8 (4) ◽ pp. 415
Author(s): Nisreen L. Abdulnabi, Hawar B. Ahmad

Lexical analysis supports the interactivity and visualization needed for active learning, which can make difficult concepts in automata easier to grasp. This study implements two frequently used models: an NFA for a combination of the Real and Integer data types, and a DFA for the Double data type in Java. The chosen models are implemented and tested using JFLAP, with each model accepting at least five (5) inputs and rejecting five (5) inputs. These two models are examples of the different lexical analyzer generators that have been implemented for various purposes in finite automata.
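The paper's actual JFLAP automata are not reproduced here, but the kind of model it describes can be sketched as a transition-table DFA that accepts both integer literals (e.g. 42) and real literals (e.g. 3.14). The state names and transitions below are our own illustration, including a five-accepted / five-rejected test plan mirroring the one described:

```python
# DFA sketch for integer and real numeric literals; states and
# transitions are illustrative, not the paper's JFLAP model.
TRANSITIONS = {
    ("start", "digit"): "int",
    ("int",   "digit"): "int",
    ("int",   "dot"):   "point",
    ("point", "digit"): "real",
    ("real",  "digit"): "real",
}
ACCEPTING = {"int", "real"}

def classify(ch):
    """Map a character onto the DFA's input alphabet."""
    return "digit" if ch.isdigit() else "dot" if ch == "." else "other"

def accepts(literal):
    """Run the DFA; missing transitions reject immediately."""
    state = "start"
    for ch in literal:
        state = TRANSITIONS.get((state, classify(ch)))
        if state is None:
            return False
    return state in ACCEPTING

accepted = ["0", "42", "3.14", "100.0", "7.5"]      # five accepted inputs
rejected = ["", ".", "3.", ".5", "1.2.3"]           # five rejected inputs
```

Encoding the automaton as a dictionary keyed by (state, input class) keeps the table readable and makes missing transitions (the dead state) fall out naturally as dictionary misses.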


2019 ◽ Vol 8 (2) ◽ pp. 50
Author(s): Zakiya Ali Nayef

Lexical analysis supports the interactivity and visualization needed for active learning, which can make difficult concepts in automata easier to grasp. This study surveys different lexical analyzer generators that have been implemented for different purposes in finite automata. It also gives a general idea of the lexical analysis process, covering the automata models used in the various reviewed works. The concepts described include the finite automata model, regular expressions, and other related components. The advantages and disadvantages of lexical analyzers are also discussed.


Author(s): Manish Jain, Dinesh Gopalani

Existing software testing techniques can each be used to perform only a particular type of testing, and proficiency is required to write automation test scripts with them. This paper proposes a novel software testing approach using Aspect-Oriented Programming (AOP) that alone suffices for carrying out most types of software testing, eliminating the need for distinct tools for different types of testing. Nevertheless, AOP is a new programming paradigm, and not all testers are proficient in it. Hence, a domain-specific language named Testing Aspect Generator Language (TAGL), which has a very low learning curve, was developed. Using TAGL, testers write testing code in the form of natural-language-like statements. A lexical analyzer and parser, written using lex and yacc, then convert the TAGL statements into actual testing code in the form of AOP. The proposed approach was applied to the testing of widely used open-source projects, and notable bugs were detected in them. A detailed comparison of how our approach is more effective than conventional testing techniques is provided.
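The abstract does not show TAGL's syntax, so the statement form below ("log calls to Class.method") and the generated AspectJ-style aspect are entirely hypothetical; the sketch only illustrates the kind of DSL-to-AOP translation the lexical analyzer and parser perform (the real implementation uses lex and yacc, not Python):

```python
import re

# Hypothetical TAGL-like statement recognized by this sketch:
#   "log calls to <Class>.<method>"
STMT = re.compile(r"log calls to (\w+)\.(\w+)")

def tagl_to_aspect(statement):
    """Translate one natural-language-like statement into AOP code."""
    m = STMT.fullmatch(statement.strip())
    if not m:
        raise SyntaxError("unrecognized TAGL statement")
    cls, method = m.groups()
    # Emit an AspectJ-style aspect with a before-advice on the call.
    return (
        f"aspect Log_{cls}_{method} {{\n"
        f"  before(): call(* {cls}.{method}(..)) {{\n"
        f'    System.out.println("calling {cls}.{method}");\n'
        f"  }}\n"
        f"}}"
    )

print(tagl_to_aspect("log calls to Account.withdraw"))
```

The appeal of the approach is visible even in this toy: the tester writes one readable sentence, and the generated aspect weaves the testing logic into every matching call site without touching the code under test.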


2019 ◽ Vol 8 (3) ◽ pp. 2406-2410

A compiler converts high-level code to machine code in six phases, of which syntax analysis is the second. The lexical analyzer produces tokens as output, and these tokens serve as input to the syntax analyzer, which performs parsing. Parsing derives a string from the given grammar; this process, called derivation, can be performed either top-down or bottom-up. The bottom-up parsers LR (left-to-right) and SLR (simple LR) suffer from conflicts, which the LALR (look-ahead LR) parser removes. A conflict exists when a state contains two or more competing actions: if one action in a state is a shift and another is a reduce, a shift-reduce conflict occurs, and such a state is called an inadequate state. The LALR parser resolves this inadequate-state problem. Another drawback of the other parsers is that they have more states than the LALR parser, so their cost is higher; the LALR parser uses the minimum number of states, which automatically reduces cost. LALR is also called the minimization algorithm of the CLR (canonical LR) parser.
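The shift/reduce machinery described above can be made concrete with a tiny hand-built parse table. The grammar E -> E + n | n and its action/goto tables below are our own toy example (not from the article); note how each state maps a lookahead token to exactly one action, i.e. no state here is inadequate:

```python
# Hand-built LR-style action/goto tables for the grammar
#   E -> E + n | n
# Tokens: "n", "+", and "$" (end of input).
ACTION = {
    0: {"n": ("shift", 2)},
    1: {"+": ("shift", 3), "$": ("accept",)},
    2: {"+": ("reduce", "E->n"),   "$": ("reduce", "E->n")},
    3: {"n": ("shift", 4)},
    4: {"+": ("reduce", "E->E+n"), "$": ("reduce", "E->E+n")},
}
GOTO = {0: {"E": 1}}
POP = {"E->n": 1, "E->E+n": 3}     # stack entries popped per reduction

def parse(tokens):
    """Return True if tokens (ending in '$') derive a valid E."""
    stack = [0]
    i = 0
    while True:
        action = ACTION[stack[-1]].get(tokens[i])
        if action is None:                 # no entry: syntax error
            return False
        if action[0] == "accept":
            return True
        if action[0] == "shift":           # push state, consume token
            stack.append(action[1])
            i += 1
        else:                              # reduce: pop RHS, take goto on E
            del stack[-POP[action[1]]:]
            stack.append(GOTO[stack[-1]]["E"])

print(parse(["n", "+", "n", "$"]))  # True
```

If state 2 had both a shift and a reduce entry under the same lookahead, it would be an inadequate state in the sense described above; LALR's look-ahead sets are what keep such entries apart while merging CLR states to keep the table small.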


2018 ◽ Vol 8 (1) ◽ pp. 68-82
Author(s): Swagat Kumar Jena, Satyabrata Das, Satya Prakash Sahoo

The future of computing is rapidly moving towards massively multi-core architectures because of their power and cost advantages. Multi-core processors are now used almost everywhere, and the number of cores per chip keeps increasing. To exploit the full potential offered by multi-core architectures, system software such as compilers should be designed for parallelized execution. In the past, significant work has been done on redesigning the traditional compiler to take advantage of future multi-core platforms. This paper focuses on adding parallelism to the lexical analysis phase of the compilation process. The main objective of our proposal is to perform lexical analysis, i.e., finding the tokens in an input stream, in parallel. We use the parallel constructs available in OpenMP to parallelize the lexical analysis process for multi-core machines. Our experimental results show a significant performance improvement for the parallel lexical analysis phase compared to the sequential version in terms of execution time.
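The paper's implementation uses OpenMP constructs in C; the Python sketch below only mirrors the idea with a thread pool (so it illustrates the decomposition, not OpenMP itself, and Python's GIL limits true CPU parallelism): split the input stream at safe token boundaries, tokenize the chunks concurrently, and concatenate the per-chunk token streams in their original order.

```python
from concurrent.futures import ThreadPoolExecutor

def tokenize_chunk(chunk):
    """Sequential whitespace tokenizer applied to one chunk."""
    return chunk.split()

def parallel_tokenize(source, workers=4):
    """Tokenize chunks of the input in parallel, preserving order."""
    words = source.split()                 # whitespace gives safe split points
    size = max(1, len(words) // workers)
    chunks = [" ".join(words[i:i + size])
              for i in range(0, len(words), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(tokenize_chunk, chunks)   # map preserves chunk order
    return [token for part in results for token in part]

src = "int x = 1 ; x = x + 2 ;"
assert parallel_tokenize(src) == src.split()
```

Splitting only at whitespace guarantees no lexeme straddles a chunk boundary, which is the key correctness condition for any divide-and-conquer lexer; an OpenMP version would carry the same obligation.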

