COMPACT: Concurrent or Ordered Matrix-Based Packing Arrangement Computation Technique

2021 ◽  
Vol 11 (11) ◽  
pp. 5217
Author(s):  
Gokhan Serhat

Despite their versatility in treating irregular geometries, raster methods have received limited attention in solving packing problems involving rotatable objects. In addition, raster approximation allows the use of unique performance metrics and indirect consideration of constraints, which have not been exploited in the literature. This study presents the Concurrent or Ordered Matrix-based Packing Arrangement Computation Technique (COMPACT). The method allows the objects to be rotated by arbitrary angles, unlike the right-angled rotation restrictions imposed in many existing packing optimization studies based on raster methods. The raster approximations are obtained through loop-free operations that improve efficiency. Additionally, a novel performance metric is introduced, which favors efficient filling of the available space by maximizing the overall contact within the domain. Moreover, the objective functions are exploited to discard the overlap and overflow constraints and enable the use of unconstrained optimization methods. The results of the case studies demonstrate the effectiveness of the proposed technique.
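The loop-free raster idea can be illustrated with a short sketch. This is not the paper's COMPACT implementation — just a vectorized NumPy rasterization of a rectangle rotated by an arbitrary angle, where the function name, grid size, and shape parameters are all hypothetical:

```python
import numpy as np

def raster_rotated_rect(width, height, cx, cy, theta, grid_n=100):
    """Loop-free raster approximation of a rectangle rotated by angle theta."""
    ys, xs = np.meshgrid(np.arange(grid_n) + 0.5,
                         np.arange(grid_n) + 0.5, indexing="ij")
    # Express each cell center in the rectangle's local (rotated) frame
    dx, dy = xs - cx, ys - cy
    u = dx * np.cos(theta) + dy * np.sin(theta)
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    # A cell belongs to the object if its center falls inside the rectangle
    return (np.abs(u) <= width / 2) & (np.abs(v) <= height / 2)

mask = raster_rotated_rect(40, 20, cx=50.0, cy=50.0, theta=np.pi / 6)
area = int(mask.sum())  # raster estimate of the true area (40 * 20 = 800)
```

Because the whole grid is transformed at once, no explicit loop over cells is needed, and overlap between two objects reduces to a logical AND of their masks.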

2014 ◽  
Vol 20 (2) ◽  
pp. 122-134 ◽  
Author(s):  
Kevin M. Taaffe ◽  
Robert William Allen ◽  
Lindsey Grigg

Purpose – Performance measurements, or metrics, measure a company's performance and behavior, and are used to help an organization achieve and maintain success. Without the use of performance metrics, it is difficult to know whether or not the firm is meeting requirements or making desired improvements. During the course of this study with Lockheed Martin, the research team was tasked with determining the effectiveness of the site's existing performance metrics. The paper aims to discuss these issues. Design/methodology/approach – Research indicates that there are five key elements that influence the success of a performance metric. A standardized method of determining whether or not a metric has the right mix of these elements was created in the form of a metrics scorecard. Findings – The scorecard survey was successful in revealing good metric use, as well as problematic metrics. In the quality department, the Document Rejects metric has been reworked and is no longer within the executive's metric deck. It was also recommended to add root cause analysis, and to quantify and track the cost of non-conformance and the overall cost of quality. In total, the number of site-wide metrics has decreased from 75 to 50. The 50 remaining metrics are undergoing a continuous improvement process in conjunction with the use of the metric scorecard tool developed in this research. Research limitations/implications – The metrics scorecard should be used site-wide for an assessment of all metrics. The focus of this paper is on the metrics within the quality department. Practical implications – Putting a quick and efficient metrics assessment technique in place was critical. 
With the leadership and participation of Lockheed Martin, this goal was accomplished. Originality/value – This paper presents the process of metrics evaluation and the issues that were encountered during the process, including insights that would not have been easily documented without this mechanism. Lockheed Martin Company has used results from this research. Other industries could also apply the methods proposed here.


Author(s):  
Lauren-Brooke Eisen ◽  
Miriam Aroni Krinsky

Local prosecutors are responsible for 95 percent of criminal cases in the United States—their charging decisions holding enormous influence over the number of people incarcerated and the length of sentences served. Performance metrics are a tool that can align the vision of elected prosecutors with the tangible actions of their offices’ line attorneys. The right metrics can provide clarity to individual line attorneys around the mission of the office and the goals of their job. Historically, however, prosecutor offices have relied on evaluation metrics that incentivize individual attorneys to prioritize more punitive responses and volume-driven activity—such as tracking the number of cases processed, indictments, guilty pleas, convictions, and sentence lengths. Under these past approaches, funding, budgeting, and promotional decisions are frequently linked to regressive measures that fail to account for just results. As more Americans have embraced the need to end mass incarceration, a new wave of reform-minded district attorneys have won elections. To ensure they are accountable to the voters who elected them into office and achieve the changes they championed, they must align measures of success with new priorities for their offices. New performance metrics predicated on the goals of reducing incarceration and enhancing fairness can shrink prison and jail populations, while improving public trust and promoting healthier and safer communities. The authors propose a new set of metrics for elected prosecutors to consider in designing performance evaluations, both for their offices and for individual attorneys. The authors also suggest that for these new performance measures to effectively drive decarceration practices, they must be coupled with careful, thoughtful implementation and critical data-management infrastructure.


2014 ◽  
Vol 984-985 ◽  
pp. 419-424
Author(s):  
P. Sabarinath ◽  
M.R. Thansekhar ◽  
R. Saravanan

Arriving at optimal solutions is one of the important tasks in engineering design. Many real-world design optimization problems involve multiple conflicting objectives, and the design variables are continuous or discrete in nature. In general, the weight method is preferred for solving multi-objective optimization problems. In this method, all the objective functions are converted into a single objective function by assigning suitable weights to each objective function. The main drawback lies in the selection of proper weights. Recently, evolutionary algorithms have been used to find the nondominated optimal solutions, called the Pareto optimal front, in a single run. In recent years, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) has found increasing application in solving multi-objective problems comprising conflicting objectives because of its low computational requirements, elitism, and parameter-less sharing approach. In this work, we propose a methodology that integrates NSGA-II and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) for solving a two-bar truss problem. NSGA-II searches for the Pareto set, where the two-bar truss is evaluated in terms of minimizing the weight of the truss and minimizing the total displacement of the joint under the given load. Subsequently, TOPSIS selects the best compromise solution.
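The TOPSIS selection step can be sketched as follows. This is a generic TOPSIS implementation over a hypothetical two-objective Pareto front (truss weight, joint displacement), not the authors' code; the sample points and the equal objective weights are assumptions:

```python
import numpy as np

def topsis(F, weights):
    """Rank Pareto solutions with TOPSIS; every objective is minimized.
    F: (n_solutions, n_objectives) matrix; weights sum to 1."""
    norm = F / np.linalg.norm(F, axis=0)          # vector-normalize each objective
    V = norm * weights                            # apply objective weights
    ideal, nadir = V.min(axis=0), V.max(axis=0)   # best / worst per objective
    d_best = np.linalg.norm(V - ideal, axis=1)    # distance to ideal point
    d_worst = np.linalg.norm(V - nadir, axis=1)   # distance to anti-ideal point
    closeness = d_worst / (d_best + d_worst)      # higher = closer to ideal
    return int(np.argmax(closeness))              # index of best compromise

# Hypothetical Pareto front: (truss weight, joint displacement)
front = np.array([[10.0, 5.0], [12.0, 3.0], [20.0, 1.0]])
best = topsis(front, weights=np.array([0.5, 0.5]))
```

With these particular sample points and equal weights, the relative closeness favors the low-displacement end of the front; changing the weights shifts the compromise.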


2020 ◽  
Author(s):  
Jason G. Kralj ◽  
Stephanie L. Servetas ◽  
Samuel P. Forry ◽  
Scott A. Jackson

Evaluating the performance of metagenomics analyses has proven a challenge, due in part to limited ground-truth standards, broad application space, and numerous evaluation methods and metrics. Application of traditional clinical performance metrics (i.e. sensitivity, specificity, etc.) using taxonomic classifiers does not fit the “one-bug-one-test” paradigm. Ultimately, users need methods that evaluate fitness-for-purpose and identify their analyses’ strengths and weaknesses. Within a defined cohort, reporting performance metrics by taxon, rather than by sample, will clarify this evaluation. An estimated limit of detection, positive and negative control samples, and true positive and true negative results are necessary criteria for all investigated taxa. Use of summary metrics should be restricted to comparing results of similar cohorts and data, and should employ harmonic means and continuous products for each performance metric rather than arithmetic means. Such consideration will ensure meaningful comparisons and evaluation of fitness-for-purpose.
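The suggested preference for harmonic over arithmetic means when summarizing per-taxon metrics can be illustrated with a small sketch (the per-taxon sensitivities below are hypothetical):

```python
import numpy as np

def harmonic_mean(values):
    """Harmonic mean penalizes a single poor-performing taxon
    far more than the arithmetic mean does."""
    values = np.asarray(values, dtype=float)
    return len(values) / np.sum(1.0 / values)

# Hypothetical per-taxon sensitivities from a classifier evaluation
sens = [0.95, 0.90, 0.40]
print(np.mean(sens))        # arithmetic mean = 0.75, masking the weak taxon
print(harmonic_mean(sens))  # harmonic mean ~ 0.64, exposing it
```

A classifier that fails badly on even one taxon thus cannot hide behind strong results elsewhere, which is exactly the fitness-for-purpose signal the authors argue for.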


2018 ◽  
Vol 10 (3) ◽  
Author(s):  
Nathan M. Cahill ◽  
Thomas Sugar ◽  
Yi Ren ◽  
Kyle Schroeder

Comparatively slow growth in energy density of both power storage and generation technologies has placed added emphasis on the need for energy-efficient designs in legged robots. This paper explores the potential of parallel springs in robot limb design. We start by adding what we call the exhaustive parallel compliance matrix (EPCM) to the design. The EPCM is a set of parallel springs, which includes a parallel spring for each joint and a multijoint parallel spring for all possible combinations of the robot's joints. Then, we carefully formulate and compare two performance metrics, which improve various aspects of the system performance. Each performance metric is analyzed and compared, with its strengths and weaknesses rigorously presented. The performance benefits associated with this approach are dramatic. Implementing the spring matrix reduces the sum of square power (SSP) exerted by the actuators by up to 47%, the peak power requirement by almost 40%, the sum of squared current by 55%, and the peak current by 55%. These results were generated using a planar robot limb and a gait trajectory borrowed from biology. We use a fully dynamic model of the robotic system including inertial effects. We also test the design robustness using a perturbation study, which shows that the parallel springs are effective even in the presence of trajectory perturbation.
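A minimal single-joint sketch shows how a parallel spring can cut the sum-of-squared-power (SSP) metric. This is not the paper's EPCM model; the sinusoidal trajectory, the linear load torque, and the stiffness values below are all hypothetical:

```python
import numpy as np

# Hypothetical single-joint sinusoidal gait trajectory
t = np.linspace(0.0, 1.0, 200)
theta = 0.5 * np.sin(2 * np.pi * t)   # joint angle [rad]
omega = np.gradient(theta, t)         # joint angular velocity [rad/s]
tau_load = 8.0 * theta                # load torque the actuator must balance

def ssp(k):
    """Sum of squared actuator power with a parallel spring of stiffness k.
    The spring contributes torque k * theta, offloading the actuator."""
    tau_actuator = tau_load - k * theta
    power = tau_actuator * omega
    return float(np.sum(power ** 2))

print(ssp(0.0))   # no spring: actuator supplies all the torque
print(ssp(8.0))   # stiffness matched to the load: actuator power vanishes
```

In this toy case a perfectly matched spring drives the actuator power to zero; in the paper's multijoint setting the EPCM plays the analogous role, with the reported 47% SSP reduction reflecting a realistic, imperfect match.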


2021 ◽  
Vol 25 (5) ◽  
pp. 1073-1098
Author(s):  
Nor Hamizah Miswan ◽  
Chee Seng Chan ◽  
Chong Guan Ng

Hospital readmission is a major cost for healthcare systems worldwide. If patients with a higher potential of readmission could be identified at the start, existing resources could be used more efficiently, and appropriate plans could be implemented to reduce the risk of readmission. Therefore, it is important to predict the right target patients. Medical data is usually noisy, incomplete, and inconsistent. Hence, before developing a prediction model, it is crucial to efficiently set up the predictive model so that improved predictive performance is achieved. The current study aims to analyse the impact of different preprocessing methods on the performance of different machine learning classifiers. The preprocessing methods applied in previous hospital readmission studies were compared, and the most common approaches were highlighted, such as missing value imputation, feature selection, data balancing, and feature scaling. The hyperparameters were selected using Bayesian optimisation. The different preprocessing pipelines were assessed using various performance metrics and computational costs. The results indicated that the preprocessing approaches helped improve the model’s prediction of hospital readmission.
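Two of the common preprocessing steps named above can be sketched in a few lines. This is an illustrative NumPy version (median imputation followed by min-max feature scaling) with made-up patient data, not the pipelines evaluated in the study:

```python
import numpy as np

def preprocess(X):
    """Median-impute missing values, then min-max scale each feature to [0, 1]."""
    X = np.array(X, dtype=float)
    # Impute each column's NaNs with that column's median of observed values
    for j in range(X.shape[1]):
        col = X[:, j]                       # view into X: edits apply in place
        col[np.isnan(col)] = np.nanmedian(col)
    # Min-max scale per feature, guarding against constant columns
    mins, maxs = X.min(axis=0), X.max(axis=0)
    return (X - mins) / np.where(maxs > mins, maxs - mins, 1.0)

# Hypothetical rows: (age, systolic blood pressure), with missing entries
X = [[65.0, np.nan], [72.0, 140.0], [np.nan, 120.0]]
Xp = preprocess(X)   # no NaNs remain; every value lies in [0, 1]
```

In practice the imputation statistics and scaling bounds must be fitted on the training split only and reused on the test split, or the evaluation leaks information.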


Author(s):  
Jessica Taylor ◽  
Eliezer Yudkowsky ◽  
Patrick LaVictoire ◽  
Andrew Critch

This chapter surveys eight research areas organized around one question: As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? The chapter focuses on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective functions and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even in cases where the objective function does not line up perfectly with the intentions of the designers. The questions surveyed include the following: How can we train reinforcement learners to take actions that are more amenable to meaningful assessment by intelligent overseers? What kinds of objective functions incentivize a system to “not have an overly large impact” or “not have many side effects”? The chapter discusses these questions, related work, and potential directions for future research, with the goal of highlighting relevant research topics in machine learning that appear tractable today.


2019 ◽  
Vol 26 (9) ◽  
pp. 2023-2039
Author(s):  
Karim A. Iskandar ◽  
Awad S. Hanna ◽  
Wafik Lotfallah

Purpose Healthcare-sector projects are some of the most complex in modern practice due to their reliance on high-tech components and the level of precision they must maintain. Existing literature on healthcare performance specifically is scarce, but there is a recent increasing trend in healthcare construction and a corresponding trend in related literature. No previously existing study has derived the weights (relative importance) of performance metrics in an objective, data-based manner. The purpose of this paper is to present a newly developed mathematical model that derives these weights, free of the subjectivity that is common in other literature. Design/methodology/approach This paper’s model considers 17 exceptional projects and 19 average projects, and reveals the weights (or relative importance) of ten performance metrics by comparing how projects relate to one another in terms of each metric individually. It solves an eigenvalue problem that maximizes the difference between average and exceptional project performances. Findings The most significant weight, i.e. the performance metric with the greatest impact on healthcare project performance, was requests for information per million dollars, with a weight of 16.07 percent. Other highly weighted metrics included construction speed and schedule growth at 13.08 and 12.23 percent, respectively. Rework was the least significant metric at 3.61 percent, but not all metrics of quality had low ratings. Deficiency issues per million dollars was weighted at 11.61 percent, for example. All weights derived by the model in this paper were validated statistically to ensure their applicability as comparison and assessment tools. Originality/value There is no widely accepted measure of project performance specific to healthcare construction. 
This study’s contribution to the body of knowledge is its mathematical model which is a landmark effort to develop a single, objective, unified project performance index for healthcare construction. Furthermore, this unified score presents a user-friendly avenue for contractors to standardize their productivity tracking – a missing piece in the practices of many contractors.
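In the spirit of the paper's approach, though not its actual model, metric weights can be derived objectively from the separation between project groups via an eigenvalue problem. Everything below, including the synthetic project data and the three-metric setup, is a hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: rows = projects, columns = normalized performance metrics
exceptional = rng.normal(loc=[0.90, 0.80, 0.70], scale=0.05, size=(17, 3))
average     = rng.normal(loc=[0.50, 0.60, 0.65], scale=0.05, size=(19, 3))

# Choose w (|w| = 1) to maximize the squared group-mean difference (w . d)^2,
# i.e. take the dominant eigenvector of the rank-one matrix d d^T
d = exceptional.mean(axis=0) - average.mean(axis=0)
M = np.outer(d, d)
eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
w = np.abs(eigvecs[:, -1])             # dominant eigenvector
weights = w / w.sum()                  # normalize so weights read as percentages
```

Here the first metric separates the groups most strongly, so it receives the largest weight; the paper's ten-metric model follows the same logic of letting the data, not expert opinion, set the relative importance.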


Author(s):  
John G. Michopoulos ◽  
Tomonari Furukawa ◽  
John C. Hermanson ◽  
Samuel G. Lambrakos

A hierarchical algorithmic and computational scheme based on a staggered design optimization approach is presented. This scheme is structured for unique characterization of many continuum systems and their associated datasets of experimental measurements related to their response characteristics. This methodology achieves both online (real-time) and offline design of optimum experiments required for characterization of the material system under consideration, while also achieving a constitutive characterization of the system. The approach assumes that mechatronic systems are available for exposing specimens to multidimensional loading paths and for the acquisition of data associated with stimulus and response behavior. Material characterization is achieved by minimizing the difference between system responses that are measured experimentally and predicted based on model representation. The performance metrics of the material characterization process are used to construct objective functions for the design of experiments at a higher-level optimization. The distinguishability and uniqueness of solutions that characterize the system are used as two of many possible measures adopted for construction of objective functions required for design of experiments. Finally, a demonstration of the methodology is presented that considers the best loading path of a two degree-of-freedom loading machine for characterization of the linear elastic constitutive response of anisotropic materials.
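The inner characterization step, minimizing the difference between measured and model-predicted responses, can be sketched for a one-parameter linear-elastic case. All values are hypothetical and the setting is far simpler than the paper's anisotropic, multi-axis one:

```python
import numpy as np

# Synthetic "measurements": linear-elastic response with sensor noise
strain = np.linspace(0.0, 0.01, 50)
C_true = 200e9                                     # modulus to be identified [Pa]
noise = np.random.default_rng(1).normal(0.0, 1e6, 50)
stress_meas = C_true * strain + noise

# Characterize the material by minimizing ||stress_meas - C * strain||^2;
# for one parameter the normal equations give a closed-form least-squares fit
C_hat = float(strain @ stress_meas) / float(strain @ strain)
```

In the full methodology this residual minimization sits at the lower level, while the upper-level design-of-experiments loop picks loading paths that make the recovered parameters distinguishable and unique.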

