Modeling biological problems in computer science: a case study in genome assembly

2018 ◽  
Vol 20 (4) ◽  
pp. 1376-1383 ◽  
Author(s):  
Paul Medvedev

Abstract: As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded.
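A common endpoint of the modeling process the tutorial describes is the de Bruijn graph formulation of assembly. The sketch below is illustrative only (the function name, the toy reads and the choice of k are not from the article): it builds a de Bruijn graph in which nodes are (k-1)-mers and each k-mer in a read contributes one edge.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            # The k-mer's prefix and suffix are the edge's endpoints.
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# Two overlapping toy reads; shared k-mers yield parallel edges.
reads = ["ATGGC", "TGGCA"]
graph = de_bruijn_graph(reads, k=3)
# Node "TG" gets an edge to "GG" from each read containing k-mer "TGG".
```

An assembly would then correspond to a walk through this graph, which is where the real algorithmic difficulties (repeats, errors, coverage gaps) enter.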

2020 ◽  
Vol 72 (3) ◽  
pp. 305-319
Author(s):  
Dalibor Fiala ◽  
Lutz Bornmann

Purpose: The current article presents the results of a case study dealing with the historical roots of Eastern European research in computer science.

Design/methodology/approach: The study is based on an analysis of cited references stemming from a collection of around 80,000 computer science papers by Eastern European researchers published from 1989 to 2014. Using a method called "reference publication year spectroscopy" (RPYS) for historical analyses based on bibliometric data, we analyze around 800,000 references cited in those papers. The study identifies the peak years, including the most frequently cited publications (from 1952, 1965 and 1975), and focuses on these outstanding works for the field. The research shows how these influential papers were cited in Eastern Europe and in general, and on which scientific fields they have had the most impact.

Findings: A noteworthy publication that seems to have had a tremendous effect on Eastern European computer science is Zadeh's "Fuzzy sets" article, which appeared in Information and Control in 1965. The study demonstrates that computer scientists from Eastern Europe are more conservative in their citation behaviour and tend to refer to older and more established research than their counterparts from the West.

Originality/value: What are the historical roots of researchers working in a particular field or on a specific topic? Are there certain publications – landmark papers – which are important for their research? We suspect these questions concern researchers in many fields.
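The core of RPYS is simple to state: count cited references by their publication year, then flag years whose counts stand out against their neighborhood, typically measured as the deviation from the median of a surrounding window. A minimal sketch of that computation (the toy data and the 5-year window are illustrative assumptions, not the study's actual parameters or corpus):

```python
from collections import Counter
from statistics import median

def rpys_deviation(ref_years, window=5):
    """Count cited references per publication year and compute each year's
    deviation from the median count of its surrounding window (RPYS)."""
    counts = Counter(ref_years)
    half = window // 2
    deviations = {}
    for y in range(min(counts), max(counts) + 1):
        neighborhood = [counts.get(y + d, 0) for d in range(-half, half + 1)]
        deviations[y] = counts.get(y, 0) - median(neighborhood)
    return counts, deviations

# Toy reference years; 1965 stands out as a peak (cf. Zadeh's "Fuzzy sets").
refs = [1963, 1964, 1965, 1965, 1965, 1965, 1966, 1967]
counts, dev = rpys_deviation(refs)
```

Peak years are those with a large positive deviation; the cited publications concentrated in those years are then inspected manually, as the study does for 1952, 1965 and 1975.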


Author(s):  
Zhigang Song ◽  
Jochonia Nxumalo ◽  
Manuel Villalobos ◽  
Sweta Pendyala

Abstract: Pin leakage continues to be among the top yield detractors for microelectronic devices. It manifests simply as elevated current on one or more pins during the pin continuity test. Although many techniques can globally localize a pin leakage fault, root cause analysis and identification remain very challenging even with today's advanced failure analysis tools and techniques, because pin leakage can be caused by any type of defect, at any layer in the device and at any process step. This paper presents a case study demonstrating how to combine multiple techniques to accurately identify the root cause of a pin leakage issue in a device manufactured at an advanced technology node. The root cause was identified as an under-etch issue during the P+ implantation hard mask opening for an ESD protection diode, which caused the P+ implantation to be missing and was responsible for the nearly ohmic pin leakage.


Author(s):  
Abeer A. Amer ◽  
Soha M. Ismail

The following article has been withdrawn at the request of the author from the journal Recent Advances in Computer Science and Communications (formerly Recent Patents on Computer Science): Title: Diabetes Mellitus Prognosis Using Fuzzy Logic and Neural Networks Case Study: Alexandria Vascular Center (AVC). Authors: Abeer A. Amer and Soha M. Ismail. Bentham Science apologizes to the readers of the journal for any inconvenience this may cause.


2021 ◽  
pp. 030631272110109
Author(s):  
Ole Pütz

The formulation of computer algorithms requires the elimination of vagueness. This elimination of vagueness requires exactness in programming, and this exactness can be traced to meeting talk, where it intersects with the indexicality of expressions. This article is concerned with sequences in which a team of computer scientists discuss the functionality of prototypes that are already implemented or possibly to be implemented. The analysis focuses on self-repair because this is a practice where participants can be seen to orient to meanings of different expressions as alternatives. By using self-repair, the computer scientists show a concern with exact descriptions when they talk about existing functionality of their prototypes but not when they talk about potential future functionality. Instead, when participants talk about potential future functionality and attend to meanings during self-repair, they use vague expressions to indicate possibilities. Furthermore, when the computer scientists talk to external stakeholders, they indicate through hedges whenever their descriptions approximate already implemented technical functionality but do not describe it exactly. The article considers whether the code of working prototypes can be said to fix meanings of expressions and how we may account for human agency and non-human resistances during development.


2014 ◽  
Vol 7 (3) ◽  
pp. 291-301 ◽  
Author(s):  
Maria-Blanca Ibanez ◽  
Angela Di-Serio ◽  
Carlos Delgado-Kloos

Languages ◽  
2021 ◽  
Vol 6 (3) ◽  
pp. 128
Author(s):  
Mike Turner

In this article I explore how typological approaches can be used to construct novel classification schemes for Arabic dialects, taking the example of definiteness as a case study. Definiteness in Arabic has traditionally been envisioned as an essentially binary system, wherein definite substantives are marked with a reflex of the article al- and indefinite ones are not. Recent work has complicated this model, framing definiteness instead as a continuum along which speakers can locate referents using a broader range of morphological and syntactic strategies, including not only the article al-, but also reflexes of the demonstrative series and a diverse set of ‘indefinite-specific’ articles found throughout the spoken dialects. I argue that it is possible to describe these strategies with even more precision by modeling them within cross-linguistic frameworks for semantic typology, among them a model known as the ‘Reference Hierarchy,’ which I adopt here. This modeling process allows for classification of dialects not by the presence of shared forms, but rather by parallel typological configurations, even if the forms within them are disparate.
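The classification idea, grouping dialects by which referential functions their marking strategies cover rather than by the surface forms of the markers, can be illustrated with a toy sketch. The dialect labels, marker forms and function names below are invented for illustration and are not the article's data:

```python
from collections import defaultdict

# Toy data: each dialect maps its definiteness markers (surface forms)
# to the set of referential functions each marker covers.
dialect_markers = {
    "dialect_A": {"al-": {"definite"}, "wahid-": {"indefinite-specific"}},
    "dialect_B": {"el-": {"definite"}, "fard-": {"indefinite-specific"}},
    "dialect_C": {"al-": {"definite"}},
}

def configuration(markers):
    """A dialect's typological configuration: the set of function sets it
    marks, with the surface forms of the markers deliberately discarded."""
    return frozenset(frozenset(funcs) for funcs in markers.values())

# Group dialects by configuration, not by shared forms.
groups = defaultdict(list)
for dialect, markers in dialect_markers.items():
    groups[configuration(markers)].append(dialect)
# A and B share a configuration despite different forms; C patterns apart.
```

This mirrors the article's point: dialects with entirely disparate forms can still fall into the same typological class if their markers carve up the reference space the same way.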


2021 ◽  
Vol 20 (01) ◽  
pp. 2150011
Author(s):  
Worapan Kusakunniran ◽  
Thearith Ponn ◽  
Nuttapol Boonsom ◽  
Suwimol Wahakit ◽  
Kittikhun Thongkanchorn

This paper develops Scopus H5-Index rankings, using the field of computer science as a case study. The challenge begins with the inconsistency of conference names. A rule-based approach is introduced to automatically clean up duplicate conferences and assign a unique pseudo ID to each conference. This data cleansing process is applied to conference names retrieved from both Scopus and ERA/CORE so that they share common pseudo IDs for the correlation analysis. The proposed data cleansing process is validated using ERA 2010 and CORE 2018 as references and reports very small errors of 0.6% and 0.4%, respectively. Then, the Scopus H5-Index 2006–2010 and Scopus H5-Index 2014–2018 rankings are constructed and compared with the existing ERA 2010 and CORE 2018 rankings, respectively. The results show that the correlation within the Scopus H5-Index rankings (i.e. Scopus H5-Index 2006–2010 and Scopus H5-Index 2014–2018) is at the top of the moderate correlation band, whereas the correlation within the ERA/CORE rankings (ERA 2010 and CORE 2018) is at the top of the strong correlation band, and the correlations across ranking systems (i.e. Scopus H5-Index 2006–2010 vs. ERA 2010, and Scopus H5-Index 2014–2018 vs. CORE 2018) are at the bottom and middle of the moderate correlation band, respectively. This suggests that quality assessment using the Scopus H5-Index ranking is more dynamic and more quickly updated than the ERA/CORE rankings, and that the two ranking systems are moderately correlated with each other for both the 2010 and 2018 periods.
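The H5-Index underlying these rankings is the h-index computed over a venue's papers from a 5-year window. A minimal sketch of that computation (the citation counts are toy values, not from the study):

```python
def h_index(citation_counts):
    """h-index: the largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# H5-Index of a conference: h-index over its papers from a 5-year window.
papers_2014_2018 = [25, 18, 12, 9, 7, 5, 3, 1]  # toy per-paper citation counts
h5 = h_index(papers_2014_2018)  # 5 papers have >= 5 citations each
```

The remaining work in the paper is upstream of this formula: deduplicating conference names so that the right papers are aggregated under the right venue before the index is computed.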


2021 ◽  
Vol 9 (1) ◽  
pp. 238-245
Author(s):  
Feiheng Luo ◽  
Aixin Sun ◽  
Aravind Sesagiri Raamkumar ◽  
Mojisola Erdt ◽  
Yin-Leng Theng
