Primal/Dual Mesh with Application to Triangular/Simplex Mesh and Delaunay/Voronoi

2012 ◽  
Author(s):  
Humayun Irshad ◽  
Stephane Rigaud ◽  
Alexandre Gouaillard

This document describes an extension of ITK to handle both primal and dual meshes simultaneously. In particular, this paper describes the data structure, an extension of itk::QuadEdgeMesh; a filter that computes the dual of an existing mesh and adds it to the structure; and an adaptor that lets a downward pipeline process the dual mesh as if it were a native itk::QuadEdgeMesh. The new data structure, itk::QuadEdgeMeshWithDual, extends the existing itk::QuadEdgeMesh, which already includes the dual topology by default, to handle dual geometry as well. Two types of primal meshes are specifically illustrated: triangular / simplex meshes and Delaunay / Voronoi. A functor mechanism has been implemented to allow for different kinds of computation of the dual geometry. This paper is accompanied by the source code and examples.
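For the Delaunay / Voronoi case, one standard choice of dual geometry is to place a dual point at each triangle's circumcenter. The sketch below illustrates that idea in plain Python; it is not the ITK API, and the function names are made up for illustration.

```python
# Illustration only: one common "dual geometry functor" for a triangular
# primal mesh places the dual point of each face at its circumcenter
# (this is how a Voronoi diagram arises as the dual of a Delaunay mesh).

def circumcenter(a, b, c):
    """Circumcenter of the 2D triangle (a, b, c)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def dual_points(points, triangles):
    """One dual point per primal face (triangle)."""
    return [circumcenter(points[i], points[j], points[k])
            for (i, j, k) in triangles]

# Unit square split into two right triangles: each circumcenter is the
# midpoint of the shared hypotenuse, so both dual points are (0.5, 0.5).
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(dual_points(pts, tris))
```

The paper's functor mechanism lets this circumcenter rule be swapped for other dual-point computations (e.g. barycenters for simplex meshes) without changing the surrounding data structure.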

2014 ◽  
Vol 33 ◽  
pp. 65-75
Author(s):  
HK Das ◽  
M Babul Hasan

In this paper, we study the methodology of primal-dual solutions to Linear Programming (LP) and Linear Fractional Programming (LFP) problems. A comparative study is also made of different duals of LP and LFP. We then develop an improved decomposition approach that shows the relationship between the primal and dual approaches to LP and LFP problems, and we give an algorithm for it. Numerical examples are given to demonstrate our method. A computer programming code for the primal and dual decomposition approach to LP and LFP, with proper instructions, is also developed using AMPL. Finally, we draw a conclusion highlighting the advantages of our method of computation. GANIT J. Bangladesh Math. Soc. Vol. 33 (2013) 65-75 DOI: http://dx.doi.org/10.3329/ganit.v33i0.17660
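The primal-dual relationship for LP that the paper builds on can be shown on a small example. The sketch below (plain Python, not the paper's AMPL code; the specific LP is a textbook instance, not taken from the paper) forms the dual mechanically and checks weak duality:

```python
# Primal:  max 3x1 + 5x2  s.t.  x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18,  x >= 0
# Dual:    min 4y1 + 12y2 + 18y3  s.t.  A'y >= c,  y >= 0
A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]
c = [3, 5]

def dual_of(A, b, c):
    """Dual of max c'x s.t. Ax <= b, x >= 0 is min b'y s.t. A'y >= c, y >= 0."""
    At = [list(col) for col in zip(*A)]   # transpose A
    return At, c, b                       # dual constraints A'y >= c, objective b'y

x = [2, 6]          # primal feasible (in fact optimal) point
y = [0, 1.5, 1]     # dual feasible (in fact optimal) point

primal_val = sum(ci * xi for ci, xi in zip(c, x))   # 3*2 + 5*6  = 36
dual_val   = sum(bi * yi for bi, yi in zip(b, y))   # 12*1.5 + 18*1 = 36
assert primal_val <= dual_val   # weak duality; equality certifies both optimal
print(primal_val, dual_val)
```

Strong duality gives equality of the two objective values at the optimum, which is the certificate a decomposition approach like the paper's exploits.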


Author(s):  
Heng Li ◽  
Jiazhen Rong

Summary: We present bedtk, a new toolkit for manipulating genomic intervals in the BED format. It supports sorting, merging, intersection, subtraction and the calculation of the breadth of coverage. Bedtk uses an implicit interval tree, a data structure for fast interval overlap queries. It is several to tens of times faster than existing tools and tends to use less memory. Availability and implementation: The source code is available at https://github.com/lh3/bedtk.
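The core operation, interval intersection, can be illustrated with a much simpler structure than bedtk's implicit interval tree: a sweep over position-sorted intervals. The sketch below is an illustration of the BED intersection semantics on half-open [start, end) intervals, not bedtk's actual algorithm or API:

```python
# Toy intersection of two lists of half-open (start, end) intervals,
# both sorted by start position, as in a sorted BED file. A real tool
# like bedtk uses an interval tree for fast queries; this linear sweep
# only illustrates what "intersection" computes.

def intersect(a, b):
    """Return the overlapping pieces of intervals in a against intervals in b."""
    out, j = [], 0
    for qs, qe in a:
        # drop b-intervals that end at or before the current query start;
        # they cannot overlap this or any later query (a is sorted by start)
        while j < len(b) and b[j][1] <= qs:
            j += 1
        k = j
        while k < len(b) and b[k][0] < qe:
            s, e = max(qs, b[k][0]), min(qe, b[k][1])
            if s < e:
                out.append((s, e))
            k += 1
    return out

a = [(10, 20), (30, 40)]
b = [(15, 35)]
print(intersect(a, b))   # overlapping pieces: [(15, 20), (30, 35)]
```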


2015 ◽  
Vol 45 (1) ◽  
pp. 42
Author(s):  
Dmytro S. Morozov ◽  
Vitalii Ye. Zaitsev

The research paper outlines the problem of organizing the collaboration of a group of users in creating distance learning courses. The article contains an analysis of the data structure of such courses. Based on the proposed structure, a model of developers' collaboration on creating distance learning courses, built on the basic principles of source code management, is proposed. The article also provides the results of research on the tools necessary for collaborative development of courses on distance learning platforms. Following the requirements of flexibility and simplicity of access to the system for educational institutions of any level, technological decisions are proposed for granting permissions to perform basic operations on course elements and for giving users moderation privileges.


Author(s):  
Hao Ren ◽  
Wentao Mo ◽  
Guang Zhao ◽  
Dangpei Ren ◽  
Shuo Liu

Using software automation technology can significantly improve the quality and productivity of nuclear power software development. Based on the 'tree' data structure, this paper proposes a Breadth-First Search (BFS) based algorithm for automatically generating nuclear power software source code frameworks, called CFAA (Code Framework Automation Algorithm). CFAA uses a 'tree' data structure to represent the architecture of nuclear power software, then utilizes BFS to traverse all tree nodes and generate the software's source code framework. CFAA enables programmers to focus more on nuclear power software architecture design and optimization, and then generate the skeleton source code automatically. CFAA has been applied to the development of the COSINE (Core and System Integrated Engine for design and analysis) software. Practice has proved that CFAA can improve the efficiency of building nuclear power software frameworks while reducing the defect rate of nuclear power software development.
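The BFS-over-an-architecture-tree idea can be sketched in a few lines. The module names, tree layout, and stub format below are invented for illustration; the abstract does not specify CFAA's output format or target language.

```python
from collections import deque

# Hypothetical sketch of the CFAA idea: represent the software
# architecture as a tree (parent -> children), traverse it breadth-first,
# and emit one skeleton definition per node.
tree = {
    "ReactorSim": ["Neutronics", "ThermalHydraulics"],
    "Neutronics": ["FluxSolver"],
    "ThermalHydraulics": [],
    "FluxSolver": [],
}

def generate_skeleton(tree, root):
    """BFS over the architecture tree; one stub class per visited node."""
    lines, queue = [], deque([root])
    while queue:
        node = queue.popleft()           # visit in breadth-first order
        lines.append(f"class {node}:")
        lines.append(f'    """TODO: implement {node}."""')
        queue.extend(tree[node])         # enqueue children for later levels
    return "\n".join(lines)

print(generate_skeleton(tree, "ReactorSim"))
```

The traversal visits the root, then all of its children, then their children, so higher-level modules appear in the generated skeleton before the lower-level ones they depend on.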


2016 ◽  
Vol 2 ◽  
pp. e49 ◽  
Author(s):  
Stefan Wagner ◽  
Asim Abdulkhaleq ◽  
Ivan Bogicevic ◽  
Jan-Peter Ostberg ◽  
Jasmin Ramadani

Background. Today, redundancy in source code, so-called "clones", caused by copy&paste can be found reliably using clone detection tools. Redundancy can arise also independently, however, not caused by copy&paste. At present, it is not clear how only functionally similar clones (FSC) differ from clones created by copy&paste. Our aim is to understand and categorise the syntactical differences in FSCs that distinguish them from copy&paste clones in a way that helps clone detection research. Methods. We conducted an experiment using known functionally similar programs in Java and C from coding contests. We analysed syntactic similarity with traditional detection tools and explored whether concolic clone detection can go beyond syntax. We ran all tools on 2,800 programs and manually categorised the differences in a random sample of 70 program pairs. Results. We found no FSCs where complete files were syntactically similar. We could detect a syntactic similarity in a part of the files in <16% of the program pairs. Concolic detection found 1 of the FSCs. The differences between program pairs were in the categories algorithm, data structure, OO design, I/O and libraries. We selected 58 pairs for an openly accessible benchmark representing these categories. Discussion. The majority of differences between functionally similar clones are beyond the capabilities of current clone detection approaches. Yet, our benchmark can help to drive further clone detection research.


2021 ◽  
Vol 9 (1) ◽  
pp. 105-121
Author(s):  
Richard Barnes ◽  
Kerry L. Callaghan ◽  
Andrew D. Wickert

Abstract. Depressions – inwardly draining regions – are common to many landscapes. When there is sufficient moisture, depressions take the form of lakes and wetlands; otherwise, they may be dry. Hydrological flow models used in geomorphology, hydrology, planetary science, soil and water conservation, and other fields often eliminate depressions through filling or breaching; however, this can produce unrealistic results. Models that retain depressions, on the other hand, are often undesirably expensive to run. In previous work we began to address this by developing a depression hierarchy data structure to capture the full topographic complexity of depressions in a region. Here, we extend this work by presenting the Fill–Spill–Merge algorithm that utilizes our depression hierarchy data structure to rapidly process and distribute runoff. Runoff fills depressions, which then overflow and spill into their neighbors. If both a depression and its neighbor fill, they merge. We provide a detailed explanation of the algorithm and results from two sample study areas. In these case studies, the algorithm runs 90–2600 times faster (with a reduction in compute time of 2000–63 000 times) than the commonly used Jacobi iteration and produces a more accurate output. Complete, well-commented, open-source code with 97 % test coverage is available on GitHub and Zenodo.
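The depression-filling operation that Fill-Spill-Merge generalises can be illustrated in one dimension, where a cell's water surface rises to the lower of the highest barriers on either side. This sketch is not the authors' algorithm (which handles 2D terrain, finite runoff volumes, and a depression hierarchy); it only shows the fill-and-merge behaviour on a toy profile:

```python
# Toy 1D depression filling: each cell's final water surface is
# min(highest barrier to its left, highest barrier to its right),
# never below the terrain itself.

def fill_depressions_1d(elev):
    n = len(elev)
    left = elev[:]                      # running max of elevation from the left
    for i in range(1, n):
        left[i] = max(left[i - 1], elev[i])
    right = elev[:]                     # running max of elevation from the right
    for i in range(n - 2, -1, -1):
        right[i] = max(right[i + 1], elev[i])
    # water is trapped up to the lower of the two enclosing barriers
    return [min(l, r) for l, r in zip(left, right)]

elev = [3, 1, 2, 0, 4]
print(fill_depressions_1d(elev))   # [3, 3, 3, 3, 4]
```

In this profile the two pits (elevations 1 and 0) fill past the ridge at elevation 2 and merge into a single lake at level 3, the spill elevation of the left rim; that fill-then-merge behaviour is exactly what the depression hierarchy tracks efficiently in 2D.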


2009 ◽  
Vol 19 (1) ◽  
pp. 123-132 ◽  
Author(s):  
Nikolaos Samaras ◽  
Angelo Sifaleras ◽  
Charalampos Triantafyllidis

The aim of this paper is to present a new simplex-type algorithm for the Linear Programming Problem. The Primal-Dual method is a simplex-type pivoting algorithm that generates two paths in order to converge to the optimal solution. The first path is primal feasible, while the second one is dual feasible for the original problem. Specifically, we use a three-phase implementation. The first two phases construct the required primal and dual feasible solutions, using the Primal Simplex algorithm. Finally, in the third phase the Primal-Dual algorithm is applied. Moreover, a computational study has been carried out, using randomly generated sparse optimal linear problems, to compare its computational efficiency with the Primal Simplex algorithm and also with MATLAB's Interior Point Method implementation. The algorithm appears to be very promising, since it clearly shows its superiority to the Primal Simplex algorithm as well as its robustness over the IPM algorithm.



2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
J. A. Marmolejo ◽  
R. Rodríguez ◽  
O. Cruz-Mejia ◽  
J. Saucedo

A method to solve the design of a distribution network for a bottled drinks company is introduced. The proposed distribution network includes three stages: manufacturing centers, consolidation centers using cross-docking, and distribution centers. The problem is formulated using a mixed-integer programming model in a deterministic, single-period context. Because the problem considers several elements in each stage, a direct solution is very complicated, and medium-to-large instances become large-scale problems. Based on that, a primal-dual decomposition known as cross decomposition is proposed in this paper. This approach allows exploring the primal and dual subproblems of the original problem simultaneously. A comparison of the direct solution with a mixed-integer linear programming solver versus the cross decomposition is shown for several randomly generated instances. Results show the good performance of the proposed method.

