SPL Features Quantification and Selection Based on Multiple Multi-Level Objectives

2019 ◽  
Vol 9 (11) ◽  
pp. 2212 ◽  
Author(s):  
Fazal Qudus Khan ◽  
Shahrulniza Musa ◽  
Georgios Tsaramirsis ◽  
Seyed M. Buhari

Software Product Lines (SPLs) can aid modern software ecosystems by enabling the rapid development of large-scale applications. SPLs produce new software products by combining existing components, treated as features. Feature selection is challenging because of the large number of competing candidate features, each with different properties and contributing towards different objectives. It is also a critical part of SPLs, as the selected features have a direct impact on the properties of the resulting product. There have been a number of attempts to automate feature selection; however, they offer limited flexibility in specifying objectives and in quantifying datasets against those objectives so that they can be used by various selection algorithms. In this research we introduce a novel feature selection approach that supports multiple multi-level user-defined objectives. We introduce a novel feature quantification method using twenty operators, capable of handling both text-based and numeric values, together with three selection algorithms called Falcon, Jaguar, and Snail. Falcon and Jaguar are based on a greedy algorithm, while Snail is a variation of exhaustive search. At the cost of a 4% increase in execution time, Jaguar performed 6% and 8% better than Falcon in terms of added value and the number of features selected, respectively.
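The abstract does not describe Falcon's or Jaguar's exact logic, so the following is only a minimal sketch of the general idea of greedy, objective-driven feature selection: quantify each candidate feature's value and cost against a user objective, rank by value per unit cost, and pick greedily under a budget. The feature names, values, and the budget constraint are all invented for illustration.

```python
# Hypothetical sketch of greedy feature selection for an SPL product.
# Features are pre-quantified as (name, value, cost); we rank by value/cost
# and add features greedily while a cost budget remains.

def greedy_select(features, budget):
    """features: list of (name, value, cost) tuples; returns selected names."""
    ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
    selected, spent = [], 0.0
    for name, value, cost in ranked:
        if spent + cost <= budget:
            selected.append(name)
            spent += cost
    return selected

catalog = [("logging", 8.0, 2.0), ("auth", 9.0, 5.0), ("theming", 3.0, 4.0)]
print(greedy_select(catalog, budget=7.0))  # → ['logging', 'auth']
```

An exhaustive variant (in the spirit of Snail) would instead enumerate all feasible feature subsets and keep the one with the highest total value, trading execution time for optimality.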

Author(s):  
M.C. Rajalakshmi ◽  
A.P. Gnana Prakash

The paper presents a technique called Mobility-enabled Multi-Level Optimization (MeMLO) that addresses the existing problem of clustering in wireless sensor networks (WSNs). The technique selects the aggregator node on the basis of multiple optimization attributes, giving the clustering mechanism better decision capability when choosing the best aggregator node. The outcome of the study shows that MeMLO is highly capable of minimizing the halt time of a mobile node, which significantly lowers the transmit power of the aggregator node. The simulation results show negligible computational complexity, faster response time, and high energy efficiency for large-scale WSNs over long simulation runs, compared with the conventional LEACH algorithm.
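The abstract does not list MeMLO's actual optimization attributes or weights, so the following is only an illustrative sketch of multi-attribute aggregator selection: each node is scored by a weighted sum of normalized attributes (here, residual energy, closeness to the sink, and node degree, all invented for this example) and the highest-scoring node becomes the aggregator.

```python
# Hypothetical multi-attribute aggregator-node selection for a WSN cluster.
# Each node carries normalized attributes in [0, 1]; a weighted sum combines
# them and the node with the highest score is chosen as aggregator.

def pick_aggregator(nodes, weights=(0.5, 0.3, 0.2)):
    """nodes: dict name -> (residual energy, closeness to sink, node degree)."""
    we, wc, wd = weights
    return max(nodes, key=lambda n: we * nodes[n][0] + wc * nodes[n][1] + wd * nodes[n][2])

cluster = {"A": (0.9, 0.4, 0.5), "B": (0.6, 0.9, 0.7), "C": (0.8, 0.7, 0.6)}
print(pick_aggregator(cluster))  # → C
```

A single-attribute scheme such as classic LEACH's randomized rotation ignores this kind of trade-off, which is why a weighted multi-attribute score can make a better-informed choice.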


1996 ◽  
Vol 76 (06) ◽  
pp. 0939-0943 ◽  
Author(s):  
B Boneu ◽  
G Destelle ◽  

Summary
The anti-aggregating activity of five ascending doses of clopidogrel has been compared to that of ticlopidine in atherosclerotic patients. The aim of this study was to determine the dose of clopidogrel to be tested in a large-scale clinical trial of secondary prevention of ischemic events in patients suffering from vascular manifestations of atherosclerosis [the CAPRIE (Clopidogrel vs Aspirin in Patients at Risk of Ischemic Events) trial]. A multicenter study involving 9 haematological laboratories and 29 clinical centers was set up. One hundred and fifty ambulatory patients were randomized into one of the seven following groups: clopidogrel at doses of 10, 25, 50, 75 or 100 mg OD, ticlopidine 250 mg BID, or placebo. ADP- and collagen-induced platelet aggregation tests were performed before starting treatment and after 7 and 28 days. Bleeding time was measured on days 0 and 28. Patients were seen on days 0, 7 and 28 to check the clinical and biological tolerability of the treatment. Clopidogrel exerted a dose-related inhibition of ADP-induced platelet aggregation and a dose-related prolongation of bleeding time. In the presence of ADP (5 µM) this inhibition ranged between 29% and 44% in comparison to pretreatment values. Bleeding times were prolonged by 1.5 to 1.7 times. These effects were not significantly different from those produced by ticlopidine. Clinical tolerability was good or fair in 97.5% of the patients. No haematological adverse events were recorded. These results supported the selection of 75 mg once daily to evaluate and compare the antithrombotic activity of clopidogrel with that of aspirin in the CAPRIE trial.


2019 ◽  
Vol 5 (2) ◽  
pp. 83-99
Author(s):  
Francisco Jesús Ferreiro Seoane ◽  
Manuel Octavio Del Campo Villares

Background: The objective of this article is to analyse whether there are significant relationships between the most valuable companies operating in Spain regarding professional performance, according to nationality and location within their Autonomous Communities or any broader grouping. To that end, a sample of 100 companies was selected. Methods: The methodology is based on the selection of the 100 highest-valued companies from the point of view of Human Resources policy for the period 2013-2016, measured through six factors: Talent Management, Remuneration, Work Environment, CSR, Training, and Employees' Perception, and classified by nationality and location. The study tested 12 hypotheses using one-way analysis of variance (ANOVA), Pearson correlations, and regressions. One limitation is that the study refers to a particular period, focuses on Spain and the variables mentioned, and is based on questionnaires. The added value of this work lies in its novelty, given its quantitative character, and in the fact that most of the hypotheses are not confirmed. Results and Conclusion: The results allow us to reject the belief that European and American companies operating in Spain are more attractive than Spanish or Mediterranean ones.


Author(s):  
Juan de Lara ◽  
Esther Guerra

Abstract
Modelling is an essential activity in software engineering. It typically involves two meta-levels: one includes meta-models that describe modelling languages, and the other contains models built by instantiating those meta-models. Multi-level modelling generalizes this approach by allowing models to span an arbitrary number of meta-levels. A scenario that profits from multi-level modelling is the definition of language families that can be specialized (e.g., for different domains) by successive refinements at subsequent meta-levels, hence promoting language reuse. This enables an open set of variability options given by all possible specializations of the language family. However, multi-level modelling lacks the ability to express closed variability regarding the availability of language primitives or the possibility to opt between alternative primitive realizations. This limits the reuse opportunities of a language family. To improve this situation, we propose a novel combination of product lines with multi-level modelling to cover both open and closed variability. Our proposal is backed by a formal theory that guarantees correctness, enables top-down and bottom-up language variability design, and is implemented atop the MetaDepth multi-level modelling tool.
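The idea of models spanning several meta-levels can be sketched with potency-based deep instantiation, a mechanism used by tools such as MetaDepth. This is only a loose illustration, not the paper's formal theory: the `Clabject` class, the example names, and the simple potency rule are all assumptions made for this sketch.

```python
# Minimal sketch of deep (multi-level) instantiation. A "clabject" is both
# a class and an object; its potency counts how many further meta-levels it
# can still be instantiated across, decreasing by one per instantiation.

class Clabject:
    def __init__(self, name, potency, meta=None):
        self.name, self.potency, self.meta = name, potency, meta

    def instantiate(self, name):
        if self.potency == 0:
            raise TypeError(f"{self.name} has potency 0 and cannot be instantiated")
        return Clabject(name, self.potency - 1, meta=self)

product_type = Clabject("ProductType", potency=2)  # language-family level
book = product_type.instantiate("Book")            # domain specialization
moby_dick = book.instantiate("MobyDick")           # concrete model element
print(moby_dick.potency)  # → 0
```

Each `instantiate` call corresponds to a refinement at the next meta-level down; the product-line extension proposed in the paper would additionally constrain, at each level, which primitives are available to instantiate.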


2021 ◽  
Vol 13 (6) ◽  
pp. 3571
Author(s):  
Bogusz Wiśnicki ◽  
Dorota Dybkowska-Stefek ◽  
Justyna Relisko-Rybak ◽  
Łukasz Kolanda

The paper responds to research problems related to the implementation of large-scale investment projects in waterways in Europe. As part of design and construction works, it is necessary to indicate river ports that play a major role within the European transport network as intermodal nodes. This entails a number of challenges, the cardinal one being the optimal selection of port locations, taking into account the new transport, economic, and geopolitical situation that will be brought about by modernized waterways. The aim of the paper was to present an original methodology for determining port locations for modernized waterways based on non-cost criteria, as an extended multicriteria decision-making method (MCDM) and employing GIS (Geographic Information System)-based tools for spatial analysis. The methodology was designed to be applicable to the varying conditions of a river’s hydroengineering structures (free-flowing river, canalized river, and canals) and adjustable to the requirements posed by intermodal supply chains. The method was applied to study the Odra River Waterway, which allowed the formulation of recommendations regarding the application of the method in the case of different river sections at every stage of the research process.
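The paper's method is an extended MCDM procedure combined with GIS-based spatial analysis; the sketch below shows only the simplest MCDM building block, a weighted-sum ranking of candidate locations. The criteria, weights, and port names are invented for illustration and do not come from the study.

```python
# Hypothetical weighted-sum MCDM ranking of candidate river-port locations.
# Each candidate carries normalized criterion scores in [0, 1]; candidates
# are ordered by their weighted total, best first.

def rank_locations(candidates, weights):
    """candidates: name -> criterion scores (same order as weights)."""
    def score(vals):
        return sum(w * v for w, v in zip(weights, vals))
    return sorted(candidates, key=lambda n: score(candidates[n]), reverse=True)

# invented criteria: intermodal connectivity, hinterland demand, navigability
sites = {
    "Port X": (0.8, 0.6, 0.7),
    "Port Y": (0.5, 0.9, 0.6),
    "Port Z": (0.7, 0.7, 0.9),
}
print(rank_locations(sites, weights=(0.4, 0.35, 0.25)))  # → ['Port Z', 'Port X', 'Port Y']
```

In the actual methodology the criterion scores would come from GIS spatial analysis and would be adjusted to the river section type (free-flowing, canalized, or canal).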


2021 ◽  
Vol 22 (15) ◽  
pp. 7773
Author(s):  
Neann Mathai ◽  
Conrad Stork ◽  
Johannes Kirchmair

Experimental screening of large sets of compounds against macromolecular targets is a key strategy to identify novel bioactivities. However, large-scale screening requires substantial experimental resources and is time-consuming and challenging. Therefore, small to medium-sized compound libraries with a high chance of producing genuine hits on an arbitrary protein of interest would be of great value to fields related to early drug discovery, in particular biochemical and cell research. Here, we present a computational approach that incorporates drug-likeness, predicted bioactivities, biological space coverage, and target novelty, to generate optimized compound libraries with maximized chances of producing genuine hits for a wide range of proteins. The computational approach evaluates drug-likeness with a set of established rules, predicts bioactivities with a validated, similarity-based approach, and optimizes the composition of small sets of compounds towards maximum target coverage and novelty. We found that, in comparison to the random selection of compounds for a library, our approach generates substantially improved compound sets. Quantified as the “fitness” of compound libraries, the calculated improvements ranged from +60% (for a library of 15,000 compounds) to +184% (for a library of 1000 compounds). The best of the optimized compound libraries prepared in this work are available for download as a dataset bundle (“BonMOLière”).
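The optimization towards maximum target coverage can be illustrated with a greedy set-cover-style selection: repeatedly add the compound whose predicted targets cover the most proteins not yet hit. This is only a sketch of that one component; the paper's fitness function also accounts for drug-likeness and target novelty, which are omitted here, and the compound and target names are invented.

```python
# Hypothetical greedy optimization of a compound library for target coverage.
# predicted_targets maps each compound to its set of predicted protein
# targets; each round picks the compound adding the most uncovered targets.

def build_library(predicted_targets, size):
    covered, library = set(), []
    for _ in range(size):
        best = max(predicted_targets,
                   key=lambda c: -1 if c in library
                   else len(predicted_targets[c] - covered))
        library.append(best)
        covered |= predicted_targets[best]
    return library, covered

preds = {"c1": {"EGFR", "HER2"}, "c2": {"EGFR"}, "c3": {"ABL1", "SRC"}}
lib, cov = build_library(preds, size=2)
print(lib, sorted(cov))  # → ['c1', 'c3'] ['ABL1', 'EGFR', 'HER2', 'SRC']
```

A random selection of two compounds could easily pick c1 and c2, covering only two targets; the greedy choice covers all four, which mirrors the fitness gains the authors report over random library composition.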


Risks ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 55
Author(s):  
Halina Sobocka-Szczapa

The aim of this article is to present the premises of a risk model related to worker recruitment. Recruitment affects the final selection of workers, whose activities contribute to corporate competitive advantage. Hiring unsuitable workers can adversely affect the results produced by an organization. This risk mostly arises when searching for workers on the external labor market, although it can also affect internal recruitment. Therefore, it is necessary to identify recruitment risk determinants and classify their meaning in such processes. Model formation has both theoretical and intuitive characteristics. Model dependencies and their characteristics are identified in this paper. We attempted to assess the usability of the risk model for economic practice. The analyses and results provide a model identification of dependencies between the factors determining a worker recruitment process and the risk caused by this process (employing unsuitable workers who do not meet the employer's expectations). The identification of worker recruitment process determinants should make it possible to practically reduce the risk of employing an unsuitable worker and contribute to the reduction of unfavorable recruitment processes. The added value of this publication is the comprehensive identification of recruitment process risk determinants and the formulation of dependencies in model form.


2021 ◽  
Vol 288 ◽  
pp. 125519
Author(s):  
Carole Brunet ◽  
Oumarou Savadogo ◽  
Pierre Baptiste ◽  
Michel A. Bouchard ◽  
Céline Cholez ◽  
...  
