Utilizing Node Interference Method and Complex Network Centrality Metrics to Explore Requirement Change Propagation

Author(s):  
Phyo Htet Hein ◽  
Beshoy Morkos ◽  
Chiradeep Sen

Requirements play a very important role in the design process, as they specify how stakeholder expectations will be satisfied. Requirements are frequently revised due to the iterative nature of the design process. If not properly managed, these changes may result in financial and time losses, and even project failure, through undesired propagation effects. Current modeling methods for managing requirements do not offer the formal reasoning necessary to manage requirement change and its propagation, and predictive models that assist designers in making well-informed decisions prior to change implementation do not exist. Based on the premise that requirement networks can be utilized to study change propagation, this research investigates complex network metrics for predicting change throughout the design process. The ability to predict requirement change during the design process may yield valuable knowledge for designing artifacts more efficiently by minimizing unanticipated changes due to mismanaged requirements. Two research questions (RQs) are addressed in this paper. RQ 1: Can complex network centrality metrics of a requirement network be utilized to predict requirement change propagation? RQ 2: How does the complex network centrality metrics approach perform in comparison to the previously developed Automated Requirement Change Propagation Prediction (ARCPP) tool? Applying the notion of interference, requirement nodes in which a change occurs are virtually removed from the network to simulate a change scenario, and the resulting changes in the values of selected metrics of all other nodes are observed. Based on the magnitude of the metric value changes the remaining nodes experience, the requirement nodes to which the change propagates are predicted. Counting betweenness centrality, left eigenvector centrality, and authority centrality are the top-performing metrics, and their performance is comparable to that of the ARCPP tool.
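
The interference procedure described above can be sketched in a few lines: compute a centrality metric over the requirement network, virtually remove the changed node, recompute, and rank the remaining nodes by how much their metric values shift. The sketch below uses plain degree centrality on a toy adjacency dict for brevity (the abstract's top performers are counting betweenness, left eigenvector, and authority centrality); all node names and data are hypothetical.

```python
def degree_centrality(adj):
    # Fraction of other nodes each node is connected to.
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def remove_node(adj, node):
    # Virtually remove a node to simulate a change scenario.
    return {v: {u for u in nbrs if u != node}
            for v, nbrs in adj.items() if v != node}

def interference_ranking(adj, changed):
    # Rank remaining nodes by the absolute change in their metric value.
    before = degree_centrality(adj)
    after = degree_centrality(remove_node(adj, changed))
    deltas = {v: abs(after[v] - before[v]) for v in after}
    return sorted(deltas, key=deltas.get, reverse=True)

# Toy requirement network (hypothetical relationships).
reqs = {"R1": {"R2", "R3"}, "R2": {"R1", "R3"},
        "R3": {"R1", "R2", "R4"}, "R4": {"R3"}}
print(interference_ranking(reqs, "R1"))  # nodes most affected first
```

Nodes whose metric values shift most under the simulated removal are the predicted propagation targets.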

Author(s):  
Phyo Htet Hein ◽  
Elisabeth Kames ◽  
Cheng Chen ◽  
Beshoy Morkos

Abstract: Lack of planning when changing requirements to reflect stakeholders' expectations can lead to propagated changes that cause project failures. Existing tools cannot provide the formal reasoning required to manage requirement change and minimize unanticipated change propagation. This research explores machine learning techniques to predict requirement change volatility (RCV) using complex network metrics, based on the premise that requirement networks can be utilized to study change propagation. Three research questions (RQs) are addressed: (1) Can RCV be measured through four classes, namely multiplier, absorber, transmitter, and robust, during every instance of change? (2) Can complex network metrics be explored and computed for each requirement during every instance of change? (3) Can machine learning techniques, specifically multilabel learning (MLL) methods, be employed to predict RCV using complex network metrics? RCV in this paper quantifies volatility for change propagation, that is, how requirements behave in response to an initial change. A multiplier is a requirement that is changed by an initial change and propagates change to other requirements. An absorber is a requirement that is changed by an initial change but does not propagate change to other requirements. A transmitter is a requirement that is not changed by an initial change but propagates change to other requirements. A robust requirement is one that is neither changed by an initial change nor propagates change to other requirements. RCV is determined using industrial data and requirement network relationships obtained from the previously developed Refined Automated Requirement Change Propagation Prediction (R-ARCPP) tool. Useful complex network metrics in the highest-performing machine learning models are discussed, along with the limitations and future directions of this research.
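
The four RCV classes reduce to two booleans per requirement and change instance: whether the requirement was changed by the initial change, and whether it propagated change onward. A minimal sketch of just this labeling step (the network-metric features and multilabel learning built on top of it are not shown):

```python
def rcv_class(changed_by_initial, propagates_change):
    """Map one requirement's behavior in a change instance to its RCV class."""
    if changed_by_initial and propagates_change:
        return "multiplier"   # changed, and passes change onward
    if changed_by_initial:
        return "absorber"     # changed, but stops propagation
    if propagates_change:
        return "transmitter"  # unchanged itself, yet passes change onward
    return "robust"           # unchanged and propagates nothing
```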


Author(s):  
Natarajan Meghanathan

The author proposes the use of centrality metrics to determine connected dominating sets (CDS) for complex network graphs. The hypothesis is that nodes ranked highly by any of four well-known centrality metrics (degree centrality, eigenvector centrality, betweenness centrality, and closeness centrality) are likely to be located in the core of the network and could be good candidates for the CDS of the network. Moreover, the author aims for a minimum-sized CDS (fewer nodes forming the CDS and fewer core edges connecting the CDS nodes) while using these centrality metrics. The author presents an approach/algorithm to determine each of these four centrality metrics and runs it on six real-world network graphs (ranging from 34 to 332 nodes) representing various domains. The betweenness centrality-based CDS is observed to be the smallest in five of the six networks; the closeness centrality-based CDS is the smallest in the smallest of the six networks but the largest for the remaining networks.
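
A common greedy heuristic consistent with this idea: seed the CDS with the most central node, then repeatedly add the neighbor of the current CDS that covers the most uncovered nodes (breaking ties by centrality), which keeps the CDS connected by construction. This is a sketch of the general approach, not necessarily the author's exact algorithm; the graph and ranks below are hypothetical, and the graph is assumed connected.

```python
def centrality_cds(adj, rank):
    # adj: node -> set of neighbors; rank: node -> centrality (higher = better).
    start = max(adj, key=rank.get)
    cds, covered = {start}, {start} | adj[start]
    while covered != set(adj):
        # Only neighbors of the current CDS are candidates, so the CDS
        # stays connected by construction (assumes a connected graph).
        frontier = {v for c in cds for v in adj[c]} - cds
        best = max(frontier, key=lambda v: (len(adj[v] - covered), rank[v]))
        cds.add(best)
        covered |= {best} | adj[best]
    return cds

# Toy 5-node path graph, ranked by degree centrality.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"},
        "d": {"c", "e"}, "e": {"d"}}
degree_rank = {v: len(nbrs) for v, nbrs in path.items()}
print(centrality_cds(path, degree_rank))
```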


In this chapter, the author analyzes the assortativity of real-world networks based on centrality metrics other than degree centrality, such as eigenvector centrality (EVC), betweenness centrality (BWC), and closeness centrality (CLC). The goal is to evaluate the levels of assortativity (assortative, dissortative, neutral) observed for real-world networks with respect to the different centrality metrics and to assess the similarity in these levels. The author observes that real-world networks are more likely to be neutral (neither assortative nor dissortative) with respect to both R-DEG and BWC, and more likely to be assortative with respect to EVC and CLC. The chances of a real-world network being dissortative with respect to these centrality metrics are observed to be very minimal. The author also assesses the extent to which the assortativity index (A.Index) values obtained with a computationally light centrality metric can be used to rank the networks in lieu of the A.Index values obtained with a computationally heavy centrality metric.
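
The A.Index with respect to a centrality metric generalizes degree assortativity: it is the Pearson correlation of the metric values at the two endpoints of every edge. A minimal pure-Python sketch (the edge list and metric values below are hypothetical):

```python
from statistics import mean, pstdev

def assortativity_index(edges, metric):
    # Pearson correlation of metric values across edge endpoints,
    # counting each undirected edge in both orientations.
    xs, ys = [], []
    for u, v in edges:
        xs += [metric[u], metric[v]]
        ys += [metric[v], metric[u]]
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# A star graph pairs a high-centrality hub with low-centrality leaves,
# so it is maximally dissortative with respect to this metric.
star_edges = [("hub", "a"), ("hub", "b"), ("hub", "c")]
evc_like = {"hub": 3.0, "a": 1.0, "b": 1.0, "c": 1.0}
print(assortativity_index(star_edges, evc_like))
```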


2021 ◽  
pp. 100893
Author(s):  
Isela-Elizabeth Tellez-Leon ◽  
Serafín Martínez-Jaramillo ◽  
Luis Escobar-Farfán ◽  
Ronald Hochreiter

Author(s):  
Qi D. Van Eikema Hommes

As the content and variety of technology in automobiles increase, the complexity of the system increases as well. Decomposing systems into modules is one way to manage and reduce system complexity. This paper surveys and compares a number of state-of-the-art component modularity metrics using eight sample test systems. The metrics include the Whitney Index (WI), Change Cost (CC), Singular Value Modularity Index (SMI), the Visibility-Dependency (VD) plot, and social network centrality measures (degree, distance, bridging). The investigation reveals that WI and CC form a good pair of metrics for assessing the component modularity of a system, while the social network centrality metrics are useful for identifying areas of architectural improvement. These metrics were further applied to two actual vehicle embedded software systems. The first system is undergoing an architecture transformation, and the metrics from the old system revealed the need for the improvements. The second system was recently architected, and the metric values showed the quality of the architecture as well as areas for further improvement.
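
Of the surveyed metrics, Change Cost is straightforward to sketch from a design structure matrix (DSM): take the transitive closure of the dependency matrix and report the fraction of component pairs connected directly or indirectly. This is one common formulation (propagation cost in the MacCormack style), offered here as an illustration rather than the paper's exact definition; the three-component chain DSM is hypothetical.

```python
def change_cost(dsm):
    # dsm[i][j] == 1 if component i depends on component j.
    n = len(dsm)
    reach = [row[:] for row in dsm]
    for k in range(n):            # Warshall transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    # Fraction of ordered pairs where a change could propagate.
    return sum(map(sum, reach)) / (n * n)

# Hypothetical chain: component 0 depends on 1, which depends on 2.
chain = [[0, 1, 0],
         [0, 0, 1],
         [0, 0, 0]]
print(change_cost(chain))
```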


Author(s):  
Natarajan Meghanathan

The authors present a correlation analysis between the centrality values observed for nodes (a computationally lightweight metric) and the maximal clique size (a computationally hard metric) that each node is part of in complex real-world network graphs. They consider the four common centrality metrics: degree centrality (DegC), eigenvector centrality (EVC), closeness centrality (ClC), and betweenness centrality (BWC). They define the maximal clique size for a node as the size of the largest clique (in terms of the number of constituent nodes) the node is part of. The real-world network graphs studied range from regular random network graphs to scale-free network graphs. The authors observe that the correlation between the centrality value and the maximal clique size for a node increases with the spectral radius ratio for node degree, which is a measure of the variation of node degree in the network. They observe the degree-based centrality metrics (DegC and EVC) to be better correlated with the maximal clique size than the shortest path-based centrality metrics (ClC and BWC).
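
The computationally hard side of this comparison, the maximal clique size per node, can be sketched with a small Bron–Kerbosch enumeration (fine for toy graphs; large scale-free networks need the pivoting variant). The graph below is hypothetical.

```python
def max_clique_size_per_node(adj):
    # adj: node -> set of neighbors. Enumerate maximal cliques and record,
    # for each node, the size of the largest clique containing it.
    best = {v: 1 for v in adj}
    def bk(r, p, x):  # basic Bron-Kerbosch, no pivoting
        if not p and not x:
            for v in r:
                best[v] = max(best[v], len(r))
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    bk(set(), set(adj), set())
    return best

# Hypothetical graph: triangle a-b-c with a pendant node d attached to a.
tri_plus_tail = {"a": {"b", "c", "d"}, "b": {"a", "c"},
                 "c": {"a", "b"}, "d": {"a"}}
print(max_clique_size_per_node(tri_plus_tail))
```

These per-node sizes are then correlated against the per-node centrality values.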



The author proposes a centrality- and topological sort-based formulation to quantify the relative contribution of courses in a curriculum network graph (CNG): a directed acyclic graph comprising the courses (as vertices) and their prerequisites (captured as directed edges). The centrality metrics considered are out-degree centrality and in-degree centrality, along with betweenness centrality and eigenvector centrality. The author normalizes the values obtained for each centrality metric as well as the level numbers of the vertices in a topological sort of the CNG. The contribution score for a vertex is the weighted sum of the normalized values for the vertex. The betweenness centrality of the vertices (courses) is observed to have the largest influence on the relative contribution scores. These scores could be used as weights for the courses in curriculum assessment and student ranking, as well as to cluster courses with similar contributions.
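
A sketch of this scoring scheme: compute topological level numbers for the DAG, normalize each metric to [0, 1], and take a weighted sum per course. The courses, prerequisite structure, metric choice (level and in-degree only), and weights below are all hypothetical; the chapter's formulation also includes betweenness and eigenvector centrality.

```python
def topo_levels(pre):
    # pre[c] = set of prerequisite courses; level = longest prerequisite chain.
    levels = {}
    def level(c):
        if c not in levels:
            levels[c] = 1 + max((level(p) for p in pre[c]), default=0)
        return levels[c]
    for c in pre:
        level(c)
    return levels

def contribution_scores(raw_metrics, weights):
    # raw_metrics: {metric_name: {course: value}}; each metric is
    # normalized by its maximum (assumes nonnegative values).
    scores = {}
    for name, vals in raw_metrics.items():
        hi = max(vals.values()) or 1  # avoid division by zero
        for c, v in vals.items():
            scores[c] = scores.get(c, 0.0) + weights[name] * v / hi
    return scores

prereqs = {"A": set(), "B": {"A"}, "C": {"A", "B"}}
levels = topo_levels(prereqs)
in_deg = {c: len(p) for c, p in prereqs.items()}
scores = contribution_scores({"level": levels, "in_deg": in_deg},
                             {"level": 0.5, "in_deg": 0.5})
print(scores)
```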


In this chapter, the authors analyze the correlation between the computationally light degree centrality (DEG) and local clustering coefficient complement-based degree centrality (LCC'DC) metrics vs. the computationally heavy betweenness centrality (BWC), eigenvector centrality (EVC), and closeness centrality (CLC) metrics. Likewise, they analyze the correlation between the computationally light complement of neighborhood overlap (NOVER') and the computationally heavy edge betweenness centrality (EBWC) metric. The authors analyze the correlations at three different levels: pair-wise (Kendall's correlation measure), network-wide (Spearman's correlation measure), and linear regression-based prediction (Pearson's correlation measure). With regard to the node centrality metrics, they observe LCC'DC-BWC to be the most strongly correlated at all three levels of correlation. For the edge centrality metrics, the authors observe EBWC-NOVER' to be strongly correlated with respect to Spearman's correlation measure, but not with respect to the other two measures.
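
The three correlation levels can be reproduced with small pure-Python implementations (no tie handling in the rank step, which library versions such as scipy's do provide; vectors x and y stand for any light/heavy metric pair):

```python
from statistics import mean, pstdev

def pearson(x, y):
    # Linear regression-based prediction level.
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def ranks(x):
    # 1-based ranks; ties are not averaged in this sketch.
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = rank + 1.0
    return r

def spearman(x, y):
    # Network-wide level: Pearson correlation of the ranks.
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    # Pair-wise level, Kendall's tau-a: (concordant - discordant) / pairs.
    n, s = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            s += (prod > 0) - (prod < 0)
    return s / (n * (n - 1) / 2)
```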


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-22
Author(s):  
Natarajan Meghanathan

We seek to quantify the extent of similarity among nodes in a complex network with respect to two or more node-level metrics (like centrality metrics). In this pursuit, we propose the following unit disk graph-based approach: we first normalize the values for the node-level metrics (using the sum-of-squares approach) and construct a unit disk graph of the network in a coordinate system based on the normalized values of the node-level metrics. There exists an edge between two vertices in the unit disk graph if the Euclidean distance between the two vertices in the normalized coordinate system is within a threshold value (ranging from 0 to √k, where k is the number of node-level metrics considered). We run a binary search algorithm to determine the minimum value for the threshold distance that would yield a connected unit disk graph of the vertices. We refer to “1 − (minimum threshold distance/√k)” as the node similarity index (NSI; ranging from 0 to 1) for the complex network with respect to the k node-level metrics considered. We evaluate the NSI values for a suite of 60 real-world networks with respect to both neighborhood-based centrality metrics (degree centrality and eigenvector centrality) and shortest path-based centrality metrics (betweenness centrality and closeness centrality).
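
A compact sketch of the NSI computation, assuming the sum-of-squares normalization places each node at a point in [0, 1]^k and using a simple reachability check for unit disk graph connectivity (the metric values below are hypothetical):

```python
from math import dist, sqrt

def normalize(vals):
    # Sum-of-squares normalization of one metric's values.
    norm = sqrt(sum(v * v for v in vals.values())) or 1.0
    return {k: v / norm for k, v in vals.items()}

def connected(points, d):
    # Is the unit disk graph with threshold d connected?
    nodes = list(points)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        u = stack.pop()
        for v in nodes:
            if v not in seen and dist(points[u], points[v]) <= d:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(nodes)

def nsi(metrics):
    # metrics: {metric_name: {node: value}}; k = number of metrics.
    k = len(metrics)
    norm = {name: normalize(vals) for name, vals in metrics.items()}
    nodes = next(iter(norm.values())).keys()
    pts = {v: tuple(norm[name][v] for name in norm) for v in nodes}
    lo, hi = 0.0, sqrt(k)
    for _ in range(50):  # binary search on the minimum threshold distance
        mid = (lo + hi) / 2
        if connected(pts, mid):
            hi = mid
        else:
            lo = mid
    return 1 - hi / sqrt(k)
```

Identical nodes collapse to one point (NSI near 1); maximally spread nodes push the threshold to its upper bound (NSI near 0).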

