Design Variety Measurement Using Sharma–Mittal Entropy

2020 ◽  
Vol 143 (6) ◽  
Author(s):  
Faez Ahmed ◽  
Sharath Kumar Ramachandran ◽  
Mark Fuge ◽  
Sam Hunter ◽  
Scarlett Miller

Abstract Design variety metrics measure how much a design space is explored. This article proposes that a generalized class of entropy metrics based on Sharma–Mittal entropy offers advantages over existing methods to measure design variety. We show that an exemplar metric from Sharma–Mittal entropy, namely, the Herfindahl–Hirschman index for design (HHID), has the following desirable advantages over existing metrics: (a) more accuracy: it better aligns with human ratings compared to existing and commonly used tree-based metrics for two new datasets; (b) higher sensitivity: it has higher sensitivity compared to existing methods when distinguishing between the variety of sets; (c) allows efficient optimization: it is a submodular function, which enables one to optimize design variety using a polynomial time greedy algorithm; and (d) generalizes to multiple metrics: many existing metrics can be derived by changing the parameters of this metric, which allows a researcher to fit the metric to better represent variety for new domains. This article also contributes a procedure for comparing metrics used to measure variety via constructing ground truth datasets from pairwise comparisons. Overall, our results shed light on some qualities that good design variety metrics should possess and the nontrivial challenges associated with collecting the data needed to measure those qualities.
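The Herfindahl–Hirschman index underlying HHID is the sum of squared category shares. Below is a minimal sketch of a Herfindahl-style variety score, assuming designs have already been assigned to discrete bins; the article's exact normalization and binning procedure may differ.

```python
from collections import Counter

def hhi(labels):
    """Herfindahl-Hirschman index of a set of category labels:
    the sum of squared category shares. Ranges from 1/k (uniform
    over k bins) up to 1.0 (all items in one bin)."""
    counts = Counter(labels)
    n = sum(counts.values())
    return sum((c / n) ** 2 for c in counts.values())

def variety(labels):
    """One plausible variety score: 1 - HHI, so a set spread evenly
    across many bins scores higher than a concentrated one."""
    return 1.0 - hhi(labels)

# A set concentrated in one bin vs. one spread over four bins:
print(variety(["a", "a", "a", "a"]))  # 0.0
print(variety(["a", "b", "c", "d"]))  # 0.75
```

Because adding a design to an under-represented bin yields a diminishing marginal gain, scores of this form lend themselves to the greedy optimization the abstract mentions.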

Author(s):  
Faez Ahmed ◽  
Sharath Kumar Ramachandran ◽  
Mark Fuge ◽  
Sam Hunter ◽  
Scarlett Miller

Abstract In this paper, we propose a new design variety metric based on the Herfindahl index. We also propose a practical procedure for comparing variety metrics via the construction of ground truth datasets from pairwise comparisons by experts. Using two new datasets, we show that this new variety measure aligns with human ratings more than some existing and commonly used tree-based metrics. This metric also has three main advantages over existing metrics: a) It is a submodular function, which enables us to optimize design variety using a polynomial time greedy algorithm. b) The parametric nature of this metric allows us to fit the metric to better represent variety for new domains. c) It has higher sensitivity in distinguishing between the variety of sets of randomly selected designs than existing methods. Overall, our results shed light on some qualities that good design variety metrics should possess and the non-trivial challenges associated with collecting the data needed to measure those qualities.


Author(s):  
S. G. Wyse ◽  
G. T. Parks ◽  
R. S. Cant

Gas turbine combustor design entails multiple, and often contradictory, requirements for the designer to consider. Multiobjective optimisation on a low-fidelity linear-network-based code is suggested as a way of investigating the design space. The ability of the Tabu Search optimiser to minimise NOx and CO, as well as several acoustic objective functions, is investigated, and the resulting “good” design vectors are presented. An analysis of the importance of the flame transfer function in the model is also given. The mass flow and the combustion chamber width and area are shown to be very important. The length of the plenum and the widths of the plenum exit and combustor exit also influence the design space.
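The Tabu Search pattern referred to above can be sketched generically: move to the best non-tabu neighbour at each step, keeping a short-term memory of recent points to escape local optima. The objective, neighbourhood, and parameters below are toy stand-ins, not the combustor code or its acoustic objectives.

```python
import random

def tabu_search(objective, start, neighbours, n_iter=200, tabu_len=10, seed=0):
    """Minimal single-objective tabu search sketch over discrete
    design vectors: step to the best non-tabu neighbour, remember
    the last few visited points to avoid cycling."""
    rng = random.Random(seed)  # for stochastic neighbourhoods, if used
    current = best = start
    tabu = [start]
    for _ in range(n_iter):
        cands = [x for x in neighbours(current, rng) if x not in tabu]
        if not cands:
            break
        current = min(cands, key=objective)
        tabu.append(current)
        tabu = tabu[-tabu_len:]   # short-term memory only
        if objective(current) < objective(best):
            best = current
    return best

# Toy 2-variable "design vector" with a known minimum at (3, -2):
obj = lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2
nbrs = lambda x, rng: [(x[0] + dx, x[1] + dy)
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
print(tabu_search(obj, (0, 0), nbrs))  # (3, -2)
```

A multiobjective variant would replace the single `objective` with a dominance check or a scalarised combination of NOx, CO and the acoustic objectives.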


2019 ◽  
Vol 3 (4) ◽  
pp. 72
Author(s):  
Bernhard Maurer ◽  
Verena Fuchsberger

Conventional digital and remote forms of play lack the physicality associated with analog play. Research on the materiality of boardgames has highlighted the inherent material aspects to this analog form of play and how these are relevant for the design of digital play. In this work, we analyze the inherent material qualities and related experiences of boardgames, and speculate how these might shift in remote manifestations. Based on that, we depict three lenses of designing for remote tangible play: physicality, agency, and time. These lenses present leverage points for future designs and illustrate how the digital and the physical can complement each other following alternative notions of hybrid digital–physical play. Based on that, we illustrate the related design space and discuss how boardgame qualities can be translated to the remote space, as well as how their characteristics might change. Thereby, we shed light on related design challenges and reflect on how designing for shared physicality can enrich dislocated play by applying these lenses.


2020 ◽  
Author(s):  
Elisabeth D. Hafner ◽  
Frank Techel ◽  
Silvan Leinss ◽  
Yves Bühler

Abstract. The spatial distribution and size of avalanches are essential parameters for avalanche warning, avalanche documentation, mitigation measure design and hazard zonation. Despite its importance, this information is incomplete today and only available for limited areas and limited time periods. Manual avalanche mapping from satellite imagery has recently been applied to reduce this gap, achieving promising results. However, the reliability and completeness of these mappings have not yet been verified satisfactorily. In our study we attempt a full validation of the completeness of visually detected and mapped avalanches from optical SPOT-6, Sentinel-2 and radar Sentinel-1 imagery. We examine manually mapped avalanches from two avalanche periods in 2018 and 2019 for an area of approximately 180 km² around Davos, Switzerland, relying on ground- and helicopter-based photographs as ground truth. For the quality assessment, we investigate the Probability of Detection (POD) and the Positive Predictive Value (PPV). Additionally, we relate our results to conditions which potentially influence avalanche detection in the satellite imagery. We statistically confirm the high potential of SPOT for comprehensive avalanche mapping for selected periods (POD = 0.74, PPV = 0.88) as well as the reliability of Sentinel-1 for the mapping of larger avalanches (POD = 0.27, PPV = 0.87). Furthermore, we show that Sentinel-2 is unsuitable for the mapping of most avalanches due to its spatial resolution (POD = 0.06, PPV = 0.81). Because we could apply the same reference avalanche events for all three satellite mappings, our validation results are robust and comparable. We demonstrate that satellite-based avalanche mapping has the potential to fill the existing avalanche documentation gap over large areas, making alpine regions safer.
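POD and PPV follow directly from standard detection counts: POD = TP / (TP + FN) and PPV = TP / (TP + FP). The counts below are purely illustrative (chosen to mirror the SPOT values quoted above); they are not the study's actual data.

```python
def pod_ppv(tp, fn, fp):
    """Probability of Detection (hit rate) and Positive Predictive
    Value (precision) from counts of true positives (mapped avalanches
    confirmed on the ground), false negatives (avalanches missed by
    the mapping) and false positives (mapped features that were not
    real avalanches)."""
    pod = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return pod, ppv

# Illustrative counts only, not the study's data:
pod, ppv = pod_ppv(tp=74, fn=26, fp=10)
print(round(pod, 2), round(ppv, 2))  # 0.74 0.88
```

Note that POD penalises missed avalanches while PPV penalises spurious mappings, which is why both are needed to characterise a mapping method.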


2018 ◽  
Vol 141 (2) ◽  
Author(s):  
Faez Ahmed ◽  
Sharath Kumar Ramachandran ◽  
Mark Fuge ◽  
Samuel Hunter ◽  
Scarlett Miller

Assessing similarity between design ideas is an inherent part of many design evaluations to measure novelty. In such evaluation tasks, humans excel at making mental connections among diverse knowledge sets to score ideas on their uniqueness. However, their decisions about novelty are often subjective and difficult to explain. In this paper, we demonstrate a way to uncover human judgment of design idea similarity using two-dimensional (2D) idea maps. We derive these maps by asking participants for simple similarity comparisons of the form “Is idea A more similar to idea B or to idea C?” We show that these maps give insight into the relationships between ideas and help understand the design domain. We also propose that novel ideas can be identified by finding outliers on these idea maps. To demonstrate our method, we conduct experimental evaluations on two datasets—colored polygons (known answer) and milk frother sketches (unknown answer). We show that idea maps shed light on factors considered by participants in judging idea similarity and that the maps are robust to noisy ratings. We also compare physical maps made by participants on a whiteboard to their computationally generated idea maps to compare how people think about the spatial arrangement of design items. This method provides a new direction of research into deriving ground truth novelty metrics by combining human judgments and computational methods.
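Triplet comparisons of the form "Is idea A more similar to idea B or to idea C?" can be turned into a 2D map with a simple hinge-loss embedding, in the spirit of triplet-embedding methods. This is a generic sketch under that assumption, not the authors' exact procedure.

```python
import numpy as np

def embed_triplets(n_items, triplets, dim=2, lr=0.1, margin=1.0,
                   n_epochs=500, seed=0):
    """Gradient-descent sketch of triplet embedding: place items in
    `dim` dimensions so that for each triplet (a, b, c), meaning
    "a is more similar to b than to c", a ends up closer to b."""
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n_items, dim))
    for _ in range(n_epochs):
        for a, b, c in triplets:
            d_ab = X[a] - X[b]
            d_ac = X[a] - X[c]
            # hinge loss: penalise triplets where a is not closer to b
            if margin + d_ab @ d_ab - d_ac @ d_ac > 0:
                X[a] -= lr * 2 * (d_ab - d_ac)
                X[b] -= lr * 2 * (-d_ab)
                X[c] -= lr * 2 * d_ac
    return X

# Three items where item 0 is more similar to 1 than to 2:
X = embed_triplets(3, [(0, 1, 2)])
d01 = np.linalg.norm(X[0] - X[1])
d02 = np.linalg.norm(X[0] - X[2])
print(d01 < d02)  # True
```

On such a map, outliers (items far from all others) are candidate novel ideas, as the abstract proposes.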


2016 ◽  
Vol 25 (05) ◽  
pp. 1640003 ◽  
Author(s):  
Yoav Liberman ◽  
Adi Perry

Visual tracking in low frame rate (LFR) videos has many inherent difficulties for achieving accurate target recovery, such as occlusions, abrupt motions and rapid pose changes. Thus, conventional tracking methods cannot be applied reliably. In this paper, we offer a new scheme for tracking objects in low frame rate videos. We present a method of integrating multiple metrics for template matching, as an extension of the particle filter. By inspecting a large data set of videos for tracking, we show that our method not only outperforms other related benchmarks in the field, but also achieves better results, both visually and quantitatively, when compared to actual ground truth data.
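One simple way to fuse several template-matching metrics into a single particle weight is a convex blend of their similarity scores. The metrics and blend below are hypothetical stand-ins to illustrate the idea, not the paper's exact scheme.

```python
import numpy as np

def combined_weight(patch, template, alpha=0.5):
    """Hypothetical fusion of two template-matching metrics into one
    particle weight: normalised cross-correlation (NCC) and an
    SSD-based similarity, blended by alpha."""
    p = patch.ravel().astype(float)
    t = template.ravel().astype(float)
    p_c, t_c = p - p.mean(), t - t.mean()
    ncc = (p_c @ t_c) / (np.linalg.norm(p_c) * np.linalg.norm(t_c) + 1e-12)
    ssd_sim = np.exp(-np.mean((p - t) ** 2))  # 1.0 for a perfect match
    return alpha * ncc + (1 - alpha) * ssd_sim

# An exact copy of the template outscores a noisy patch:
rng = np.random.default_rng(0)
template = rng.random((8, 8))
noisy = template + rng.normal(scale=0.5, size=(8, 8))
print(combined_weight(template, template) > combined_weight(noisy, template))  # True
```

In a particle filter, each particle's candidate patch would be scored this way and the weights renormalised before resampling.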


Author(s):  
Faez Ahmed ◽  
Mark Fuge ◽  
Sam Hunter ◽  
Scarlett Miller

Assessing similarity between design ideas is an inherent part of many design evaluations to measure novelty. In such evaluation tasks, humans excel at making mental connections among diverse knowledge sets and scoring ideas on their uniqueness. However, their decisions on novelty are often subjective and difficult to explain. In this paper, we demonstrate a way to uncover human judgment of design idea similarity using two dimensional idea maps. We derive these maps by asking humans for simple similarity comparisons of the form “Is idea A more similar to idea B or to idea C?” We show that these maps give insight into the relationships between ideas and help understand the domain. We also propose that the novelty of ideas can be estimated by measuring how far apart items lie on these maps. We demonstrate our methodology through experimental evaluations on two datasets: colored polygons (known answer) and milk frother sketches (unknown answer). We show that these maps shed light on factors considered by raters in judging idea similarity. We also show how the maps change when less data is available or when false/noisy ratings are provided. This method provides a new direction of research into deriving ground truth novelty metrics by combining human judgments and computational methods.


2019 ◽  
Vol 11 (06) ◽  
pp. 1950075
Author(s):  
Lei Lai ◽  
Qiufen Ni ◽  
Changhong Lu ◽  
Chuanhe Huang ◽  
Weili Wu

We consider the problem of maximizing a monotone submodular function over the bounded integer lattice with a cardinality constraint. A function f is submodular over the integer lattice if f(x) + f(y) ≥ f(x ∨ y) + f(x ∧ y) for all x, y, where ∨ and ∧ represent the elementwise maximum and minimum, respectively. Let B ∈ Z+^n and k ∈ Z+; we study the problem of maximizing a submodular function f subject to the constraints 0 ≤ x ≤ B and ‖x‖₁ ≤ k. A random greedy [Formula: see text]-approximation algorithm and a deterministic [Formula: see text]-approximation algorithm are proposed in this paper. Both algorithms work in the value oracle model. In the random greedy algorithm, we assume that the monotone submodular function satisfies the diminishing-return property, which is not an equivalent definition of submodularity on the integer lattice. Additionally, our random greedy algorithm makes [Formula: see text] value oracle queries and the deterministic algorithm makes [Formula: see text] value oracle queries.
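The plain deterministic greedy baseline for this setting can be sketched as follows: starting from the all-zero vector, repeatedly add one unit to the coordinate with the largest marginal gain, respecting the box bound and the cardinality budget. The paper's randomized algorithm and its approximation guarantees are more involved; the separable concave objective below is only a simple example of a function with the diminishing-return property.

```python
import math

def lattice_greedy(f, B, k):
    """Greedy sketch for maximising a monotone function with
    diminishing returns over the integer lattice, subject to the
    box bound x <= B and the cardinality budget ||x||_1 <= k."""
    x = [0] * len(B)
    for _ in range(k):
        best_i, best_gain = None, 0.0
        for i in range(len(B)):
            if x[i] < B[i]:
                x[i] += 1
                gain = f(x)          # f(x + e_i)
                x[i] -= 1
                gain -= f(x)         # marginal gain of one unit of i
                if best_i is None or gain > best_gain:
                    best_i, best_gain = i, gain
        if best_i is None:
            break                    # box bound exhausted
        x[best_i] += 1
    return x

# Separable concave objective (a simple diminishing-return example):
w = [3.0, 1.0]
f = lambda x: sum(wi * math.sqrt(xi) for wi, xi in zip(w, x))
print(lattice_greedy(f, B=[5, 5], k=4))  # [3, 1]
```

Each of the k rounds makes at most n oracle queries, so this baseline uses O(nk) value oracle queries.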


2021 ◽  
Vol 7 (2) ◽  
pp. 247-250
Author(s):  
Amr Abuzer ◽  
Ady Naber ◽  
Simon Hoffmann ◽  
Lucy Kessler ◽  
Ramin Khoramnia ◽  
...  

Abstract Optical Coherence Tomography Angiography (OCTA) is an imaging modality that provides three-dimensional information on the retinal microvasculature and therefore promises early diagnosis and sufficient monitoring in ophthalmology. However, there is considerable variability between experts analysing this data. Measures for quantitative assessment of the vasculature need to be developed and established, such as fractal dimension. Fractal dimension can be used to assess the complexity of vessels and has been shown to be independently associated with neovascularization, a symptom of diseases such as diabetic retinopathy. This investigation assessed the performance of three fractal dimension algorithms: Box Counting Dimension (BCD), Information Dimension (ID), and Differential Box Counting (DBC). Two of those, BCD and ID, rely on prior vessel segmentation. Assessment of the added value or disturbance introduced by the segmentation step is a second aim of this study. The investigation was performed on a data set composed of 9 in vivo human eyes. Since there is no ground truth available, we tested the performance of the methods in telling the Superficial Vascular Complex (SVC) and Deep Vascular Complex (DVC) layers apart and the consistency of measurements of the same layer at different time-points. The performance parameters were the ICC and the Mann–Whitney U test. The three applied methods were suitable to tell the different layers apart and showed consistent values when applied to the same slab. Within the consistency test, the non-segmentation-based method, DBC, was found to be less accurate, expressed in a lower ICC value, compared to its segmentation-based counterparts. This result is thought to be due to the DBC's higher sensitivity compared to the other methods. This higher sensitivity might help detect changes in the microvasculature, like neovascularization, but is also more prone to noise and artefacts.
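The Box Counting Dimension, the first of the three algorithms, can be sketched in a few lines for a binary (segmented) image: count the boxes of side s that contain foreground pixels at several scales, then fit the slope of log N(s) against log(1/s). This is a minimal version; production implementations handle grayscale input (as DBC does) and scale selection more carefully.

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Box Counting Dimension sketch for a binary 2D image: count
    boxes of side s containing at least one foreground pixel, then
    fit the slope of log N(s) vs. log(1/s)."""
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return slope

# A straight line has dimension ~1 (a filled square would give ~2):
img = np.zeros((64, 64), dtype=bool)
img[32, :] = True  # horizontal line
print(round(box_counting_dimension(img), 1))  # 1.0
```

The Information Dimension weights each box by the fraction of foreground mass it holds instead of a binary occupied/empty count, which is why both BCD and ID require a segmentation first.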

