Application Note: Syntax Parsing and Feature Differences Between HSPICE and Xyce 6.11

2019 ◽  
Author(s):  
Peter Sholander
Author(s):  
Anna Maria Di Sciullo ◽  
Sandiway Fong

Author(s):  
XIAOYU GAO ◽  
HU YUE ◽  
L. LI ◽  
QINGSHI GAO

The syntaxes of natural languages differ, so the parsing of those languages differs, and so do the structures of their parse trees. The reason that sentences in different natural languages can be translated into one another is that they share the same meaning. This paper discusses a new approach to sentence parsing, called semantic parsing, based on semantic-unit theory. In this theory a sentence of a natural language is not regarded as words and phrases arranged linearly; rather, it is taken to consist of semantic units with or without type-parameters. In this approach the syntax-parsing tree and the semantic-parsing tree are isomorphic, and the structure trees of corresponding sentences in all natural languages can be put into correspondence.
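The core claim can be illustrated with a toy sketch. Everything here, including the `SemanticUnit` class, the analyses, and the example sentences, is a hypothetical illustration of the idea of language-independent semantic units with type-parameters, not the paper's actual formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticUnit:
    meaning: str        # language-independent meaning label
    params: tuple = ()  # optional type-parameters (tense, number, ...)

# Two surface sentences in different languages...
english = ["I", "eat", "apples"]
french = ["Je", "mange", "des", "pommes"]

# ...are mapped by (hypothetical) per-language analyzers to the same
# sequence of semantic units, so their structure trees correspond.
def analyze_en(words):
    return (SemanticUnit("SPEAKER"),
            SemanticUnit("EAT", ("present",)),
            SemanticUnit("APPLE", ("plural",)))

def analyze_fr(words):
    return (SemanticUnit("SPEAKER"),
            SemanticUnit("EAT", ("present",)),
            SemanticUnit("APPLE", ("plural",)))
```

Translation preservation then amounts to the two analyses being equal, which is the sense in which the parse structures of the two languages "correspond".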


Author(s):  
Mary Holstege

In the mid-eighties a group at Stanford built the MUIR language-development environment as a system for notation design with rendering and layout from the abstract syntax, parsing from concrete syntax, and semi-automated transformation between language variants. We developed models for representing documents at all levels and for understanding how the levels relate to one another. Presentation widgets have a purpose: to convey specific abstract syntax relationships. Having an account of what kinds of widgets there are, what kinds of abstract relationships there are, and how the two connect allows for an analysis of how the notation works as a whole. The concept of "notation" taken here is a broad one, encompassing programming or technical notations as well as the form of structured documents of various kinds. Notation designers can apply such an analysis to improve their designs so that the structure is more clearly conveyed by the concrete syntax, or so that humans can more readily use the notation without confusion. Software can render or parse instances of notations using declarative rules that capture the concrete syntax, the abstract syntax, and the mapping between them.
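The idea of driving rendering from declarative rules over abstract syntax can be sketched as follows. The rule table, node format, and names are invented for illustration; they are not MUIR's actual representation:

```python
# Each rule maps an abstract-syntax node kind to a concrete rendering.
# In a fuller system the same declarative table could also drive parsing.
RULES = {
    "if":  lambda n: f"if {render(n['cond'])} then {render(n['then'])}",
    "lt":  lambda n: f"({render(n['left'])} < {render(n['right'])})",
    "var": lambda n: n["name"],
}

def render(node):
    """Render an abstract-syntax tree by dispatching on node kind."""
    return RULES[node["kind"]](node)

tree = {"kind": "if",
        "cond": {"kind": "lt",
                 "left": {"kind": "var", "name": "x"},
                 "right": {"kind": "var", "name": "y"}},
        "then": {"kind": "var", "name": "x"}}
```

Here `render(tree)` produces `if (x < y) then x`; changing the concrete notation means editing the rule table, not the rendering engine.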


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Qiuping Huang ◽  
Liangye He ◽  
Derek F. Wong ◽  
Lidia S. Chao

This paper investigates the recognition of unknown words in Chinese parsing. Two methods are proposed to handle this problem. One is a modification of a character-based model: we model the emission probability of an unknown word using the first and last characters of the word, aiming to reduce the POS-tag ambiguity of unknown words and thereby improve parsing performance. In addition, a novel method using graph-based semisupervised learning (SSL) is proposed to improve the syntax parsing of unknown words. Its goal is to discover additional lexical knowledge from a large amount of unlabeled data to help the syntax parsing. The method propagates lexical emission probabilities to unknown words by building similarity graphs over the words of the labeled and unlabeled data, and the derived distributions are incorporated into the parsing process. The proposed methods are effective in dealing with unknown words and improve parsing performance. Empirical results on the Penn Chinese Treebank and the TCT Treebank confirm their effectiveness.
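The first-/last-character idea can be sketched in a few lines. The tiny word list, the multiplicative combination, and the smoothing constant below are all illustrative assumptions, not the paper's actual model or data:

```python
from collections import Counter, defaultdict

# Toy labeled data: (word, POS tag) pairs.
labeled = [("打球", "VV"), ("打架", "VV"), ("足球", "NN"), ("球队", "NN")]

# Count which tags each first and last character appears with.
first_char = defaultdict(Counter)
last_char = defaultdict(Counter)
for word, tag in labeled:
    first_char[word[0]][tag] += 1
    last_char[word[-1]][tag] += 1

def tag_scores(word, alpha=0.5):
    """Score POS tags for a (possibly unseen) word from its first and
    last characters, with add-alpha smoothing, normalized to sum to 1."""
    tags = {t for _, t in labeled}
    scores = {}
    for t in tags:
        f = first_char[word[0]][t] + alpha
        l = last_char[word[-1]][t] + alpha
        scores[t] = f * l
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}
```

For an unseen word such as "打队", the first character pulls toward VV and the last toward NN; the combined distribution narrows the tag ambiguity the parser must resolve.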


Author(s):  
Wei Wang ◽  
Degen Huang ◽  
Jingxiang Cao

2013 ◽  
Vol 64 (2-3) ◽  
Author(s):  
Alexander Meyer

In the DBpedia project, information from Wikipedia articles is converted into RDF triples, among other things. The article texts themselves, however, are not taken into account; instead, the project primarily uses the so-called infoboxes, which contain information that is already structured. As part of a master's thesis at the Institute for Library and Information Science of Humboldt-Universität zu Berlin, wiki2rdf was developed, a piece of software for the rule-based extraction of RDF triples from the unstructured full texts of Wikipedia. The extraction is performed after syntax parsing, with the help of a dependency parser. As a case study, wiki2rdf was applied to 68,820 articles from the "Wissenschaftler" (scientists) category of the German-language Wikipedia, and 244,563 triples were extracted.
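Rule-based triple extraction over a dependency parse can be sketched as follows. The toy parse format, the relation labels, and the single subject–verb–object rule are invented for illustration and are not wiki2rdf's actual rule language:

```python
# A toy dependency parse: one (head_index, relation) pair per token.
tokens = ["Einstein", "studied", "physics"]
deps = [(1, "subj"), (None, "root"), (1, "obj")]

def extract_triples(tokens, deps):
    """Apply one illustrative rule: for each subject of a verb, pair it
    with every object of that same verb to form a (subj, verb, obj)
    triple, the raw material for an RDF statement."""
    triples = []
    for i, (head, rel) in enumerate(deps):
        if rel == "subj":
            verb = head
            for j, (h2, r2) in enumerate(deps):
                if h2 == verb and r2 == "obj":
                    triples.append((tokens[i], tokens[verb], tokens[j]))
    return triples
```

A real system would add many such rules and map the extracted strings to RDF resources and properties, but the pattern of matching rules against dependency edges is the same.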

