Tool Support for Inspecting the Code Quality of HPC Applications

Author(s):  
Thomas Panas ◽  
Dan Quinlan ◽  
Richard Vuduc

2017 ◽ Vol 8 (4) ◽ pp. 51-71
Author(s):  
Sanjay Misra ◽  
Adewole Adewumi ◽  
Robertas Damasevicius ◽  
Rytis Maskeliunas

In order to maintain the quality of software, it is important to measure its complexity. This provides insight into the degree of comprehensibility and maintainability of the software. Measurement can be carried out using cognitive measures, which are based on cognitive informatics. A number of such measures have been proposed in the literature. The goal of this article is to identify the features and advantages of the existing measures. In addition, a comparative analysis is carried out based on selected criteria. The results show a similar trend in the output of the different measures when they are applied to different examples. This makes it easy for adopting organisations to choose among the options based on the availability of tool support.
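One frequently cited cognitive measure is Wang and Shao's Cognitive Functional Size (CFS), which combines a component's inputs and outputs with the cognitive weights of its control structures. The sketch below is a minimal illustration, assuming the commonly cited weight table (sequence = 1, branch = 2, iteration = 3) and a simplified nesting rule in which nested weights multiply:

```python
# Minimal sketch of Cognitive Functional Size: CFS = (Ni + No) * Wc,
# where Ni/No are the numbers of inputs and outputs and Wc is the total
# cognitive weight of the control structures. The weight table is
# abbreviated and the nesting rule simplified for illustration.

WEIGHTS = {"sequence": 1, "branch": 2, "iteration": 3}

def total_weight(structures):
    """Sum weights of sibling structures; multiply into nested ones."""
    w = 0
    for kind, nested in structures:
        wk = WEIGHTS[kind]
        if nested:                      # nested block: weights multiply
            wk *= total_weight(nested)
        w += wk
    return w

def cfs(n_inputs, n_outputs, structures):
    """Cognitive Functional Size of one component."""
    return (n_inputs + n_outputs) * total_weight(structures)
```

For example, a function with two inputs and one output whose body is a loop containing a single branch scores (2 + 1) × (3 × 2) = 18.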


2006 ◽ Vol 35 (3)
Author(s):  
Raimundas Matulevičius ◽  
Patrick Heymans ◽  
Guttorm Sindre

Goal modelling usually takes place during the early information systems development phase known as requirements engineering (RE). RE is a key factor for project success, and good tool support for it is necessary. Several goal-modelling tools exist, and several approaches can be used to evaluate them. In this paper, we report on an experiment evaluating two goal-modelling tools, KAOS/Objectiver and i*/OME. We use an RE-tool evaluation approach (R-TEA) to determine which of the tools better supports the creation of goal models. The results suggest that KAOS/Objectiver offers better model-creation support, but that the quality of the resulting models depends more on situational language characteristics, such as the focus on early (vs. late) requirements.


Author(s):  
Eddie A Santos ◽  
Abram Hindle

Developers summarize their changes to code in commit messages. When a message seems “unusual,” however, this casts doubt on the quality of the code contained in the commit. We trained n-gram language models on over 120,000 commits from open-source projects and used cross-entropy as an indicator of commit-message “unusualness.” Build statuses collected from Travis-CI were used as a proxy for code quality. We then compared the distributions of failed and successful commits with regard to the “unusualness” of their commit messages. Our analysis yielded significant results when correlating cross-entropy with build status.
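The cross-entropy idea can be sketched as follows: the lower a message's average per-token surprise under a language model trained on past commit messages, the less "unusual" it is. This is a minimal illustration using a bigram model with add-one smoothing, not the authors' actual implementation:

```python
import math
from collections import Counter

def train_bigram_model(corpus):
    """Count unigrams and bigrams over whitespace-tokenised messages."""
    unigrams, bigrams = Counter(), Counter()
    for msg in corpus:
        tokens = ["<s>"] + msg.split() + ["</s>"]
        unigrams.update(tokens)
        for a, b in zip(tokens, tokens[1:]):
            bigrams[(a, b)] += 1
    return unigrams, bigrams

def cross_entropy(msg, unigrams, bigrams):
    """Average negative log2 probability per token, add-one smoothed."""
    tokens = ["<s>"] + msg.split() + ["</s>"]
    vocab = len(unigrams)
    log_prob = 0.0
    for a, b in zip(tokens, tokens[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)
        log_prob += math.log2(p)
    return -log_prob / (len(tokens) - 1)
```

A message that resembles the training corpus scores a lower cross-entropy (is less "unusual") than an off-topic one.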


2014 ◽ Vol 2014 ◽ pp. 1-30
Author(s):  
Aws Magableh ◽  
Zarina Shukur ◽  
Noorazean Mohd. Ali

The Unified Modeling Language (UML) is the most popular and widely used object-oriented modelling language in the IT industry. This study investigates how far UML can be extended to model crosscutting concerns (aspects) in support of AspectJ. Through a comprehensive literature review, we identify and extensively examine the available Aspect-Oriented UML modelling approaches. We find that the existing Aspect-Oriented Design Modelling approaches using UML do not provide a framework for a comprehensive Aspectual UML modelling approach, and that adequate Aspect-Oriented tool support is lacking. This study also proposes a set of Aspectual UML semantic rules and attempts to generate AspectJ pseudocode from UML diagrams. The proposed Aspectual UML modelling approach is evaluated in three ways: formally, using a focus group to test six hypotheses regarding performance; with a “good design” criteria-based evaluation to assess the quality of the design; and with an AspectJ-based evaluation as a reference measurement. The focus group evaluation confirms all the hypotheses put forward regarding the proposed approach. The approach provides a comprehensive set of Aspectual UML structural and behavioral diagrams, designed and implemented on the basis of a comprehensive and detailed set of AspectJ programming constructs.
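To give a flavour of the kind of mapping involved in generating AspectJ pseudocode from a model, here is a hypothetical sketch; the model fields (`name`, `pointcut`, `advice`) are illustrative assumptions and not the paper's actual metamodel:

```python
# Hypothetical sketch: emitting AspectJ-style pseudocode from a minimal
# aspect model held as a plain dictionary. The field names are
# illustrative, not taken from the paper.
def emit_aspectj(aspect):
    """Render one aspect model as AspectJ-style pseudocode text."""
    pc = aspect["pointcut"]
    lines = [f"public aspect {aspect['name']} {{"]
    lines.append(f"    pointcut {pc['name']}():")
    lines.append(f"        {pc['expr']};")
    for adv in aspect["advice"]:
        lines.append(f"    {adv['kind']}(): {pc['name']}() {{")
        lines.append(f"        // {adv['body']}")
        lines.append("    }")
    lines.append("}")
    return "\n".join(lines)
```

For instance, a `Logging` aspect with a `before` advice on a `tracedCalls` pointcut renders as a small `public aspect Logging { ... }` block.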


Publications ◽ 2019 ◽ Vol 7 (1) ◽ p. 13
Author(s):  
Afshin Sadeghi ◽  
Sarven Capadisli ◽  
Johannes Wilm ◽  
Christoph Lange ◽  
Philipp Mayr

An increasing number of scientific publications are created in open and transparent peer review models: a submission is published first and reviewers are then invited; or a submission is reviewed in a closed environment but the reviews are published with the final article; or some combination of these. Reasons for open peer review include giving better credit to reviewers and enabling readers to better appraise the quality of a publication. In most cases, the full, unstructured text of an open review is published next to the full, unstructured text of the article reviewed. This approach prevents human readers from getting a quick impression of the quality of parts of an article, and it does not easily support secondary exploitation, e.g., scientometrics on reviews. While document formats have been proposed for publishing structured articles including reviews, integrated tool support for entire open peer review workflows resulting in such documents is still scarce. We present AR-Annotator, the Automatic Article and Review Annotator, which employs a semantic information model of an article and its reviews, using semantic markup and unique identifiers for all entities of interest. The fine-grained article structure is not only exposed to authors and reviewers but also preserved in the published version. We publish articles and their reviews in a Linked Data representation and thus maximise their reusability by third-party applications. We demonstrate this reusability by running quality-related queries against the structured representation of articles and their reviews.
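The Linked Data idea can be illustrated with a minimal sketch: article sections and review comments become subject-predicate-object triples with unique identifiers, and quality-related questions become queries over those triples. The predicate and identifier names below are illustrative assumptions, not AR-Annotator's actual vocabulary:

```python
# Minimal sketch: review comments linked to fine-grained article parts
# as triples, queried directly. Names in the "ex:" namespace are
# invented for illustration.
triples = [
    ("ex:article1/sec2", "ex:partOf",   "ex:article1"),
    ("ex:review7/c1",    "ex:concerns", "ex:article1/sec2"),
    ("ex:review7/c1",    "ex:severity", "major"),
    ("ex:review7/c2",    "ex:concerns", "ex:article1/sec2"),
    ("ex:review7/c2",    "ex:severity", "minor"),
]

def comments_on(section, severity="major"):
    """Review comments of a given severity that target one section."""
    hits = {s for s, p, o in triples
            if p == "ex:concerns" and o == section}
    return [s for s, p, o in triples
            if s in hits and p == "ex:severity" and o == severity]
```

A reader (or a scientometrics tool) can then ask, for example, which major review concerns touch a specific section, instead of re-reading the full review text.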


Author(s):  
Stefan Vock ◽  
Hans Martin von Staudt

Abstract Typical mixed-signal ICs are approaching 1000 or even more parametric tests. These tests are usually coded in a procedural or semi-object-oriented language. The huge code base of the test programs poses a significant challenge for maintaining code quality, which inherently translates into outgoing quality. The paper presents software metrics of typical mixed-signal power management and audio devices with regard to the number of tests conducted. It shows that classical ways of handling test programs are error-prone and tend to systematically repeat known mistakes. Adopting selected software engineering methods can avoid such mistakes and improve the productivity of mixed-signal test generation. Results of a pilot project show a significant productivity improvement. Open-source software is employed to provide the necessary tool support, establishing a potential roadmap towards independence from proprietary, tester-specific tool sets.

