Variability and Component Criticality in Component Reuse and Remanufacturing Systems

Author(s):  
Vijitashwa Pandey
Deborah Thurston

Different operations, such as take-back, cleaning, and repair, lead to high system variability, rendering remanufacturing systems difficult to manage. Even when a product is successfully remanufactured, there remains the problem of customer perception that remanufactured products cannot perform as well as new ones. The availability of several different options (reusing, remanufacturing, and recycling) further compounds the complexity of the information set that must be considered for effective remanufacturing. This paper develops a method for making component-level decisions that accounts for these issues. A metric is proposed that measures the randomness or variability imposed by a reuse alternative. A measure of effective age is also proposed, extending previous research. A washing machine example illustrates the method and shows how the two measures can be incorporated into a design decision model.
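To make the proposed variability metric concrete, below is a minimal sketch in Python, under the assumption (ours, not necessarily the paper's) that each reuse alternative induces a discrete probability distribution over processing outcomes and that its variability is scored as the Shannon entropy of that distribution. The outcome categories and probabilities are illustrative.

```python
# Minimal sketch of an entropy-based variability metric for a reuse
# alternative. Assumption: each alternative is characterized by a discrete
# distribution over processing outcomes (reuse as-is, repair first, scrap).
import math

def variability(outcome_probs):
    """Shannon entropy (bits) of an alternative's outcome distribution.

    Higher entropy = more randomness imposed on the remanufacturing system.
    """
    return -sum(p * math.log2(p) for p in outcome_probs if p > 0)

# Illustrative example: remanufacturing a motor vs. recycling it outright.
remanufacture = [0.6, 0.3, 0.1]   # reuse / repair-first / scrap
recycle       = [1.0]             # deterministic outcome

print(f"remanufacture: {variability(remanufacture):.3f} bits")
print(f"recycle:       {variability(recycle):.3f} bits")
```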

Author(s):  
Vijitashwa Pandey
Deborah Thurston

Product take-back and remanufacturing systems are difficult to implement cost-effectively. Two contributing factors to this problem are the complex nature of the interrelationships among components and their high degree of variability. Legislated take-back mandates have made it imperative for manufacturers to recognize when there is value to be recovered in components and when there is not. This paper proposes a component criticality method to help ascertain this remaining value. We also develop a metric that measures the randomness or variability that a reuse alternative imposes on the remanufacturing system. A case study on washing machines illustrates how the two measures can be incorporated into a design decision model, help reduce the complexity of reuse operations, and yield a superior design solution.
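As a rough, hypothetical illustration of how a criticality ranking might combine remaining value with the variability metric, the sketch below scores a few washing machine components. The scoring rule, weight, and component data are invented for illustration and are not the paper's formulation.

```python
# Hypothetical component criticality ranking. Assumption: criticality trades
# off the value recoverable from a component against the variability its
# reuse imposes on the system. All numbers are illustrative.
components = {
    # name: (recoverable_value_usd, variability_bits)
    "drum":        (45.0, 0.8),
    "motor":       (60.0, 1.4),
    "control_pcb": (25.0, 2.0),
}

def criticality(value, var_bits, alpha=0.7):
    """Weighted score: high value raises criticality, variability lowers it."""
    return alpha * value - (1.0 - alpha) * 10.0 * var_bits

ranked = sorted(components.items(),
                key=lambda kv: criticality(*kv[1]), reverse=True)
for name, (value, var_bits) in ranked:
    print(f"{name:12s} score = {criticality(value, var_bits):.1f}")
```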


Author(s):  
Vijitashwa Pandey
Deborah Thurston

Design for multiple product lifecycles with component reuse potentially improves profitability, customer satisfaction, and environmental impact. However, deciding on the scope and the level of detail (granularity) to be considered in the design process can be challenging. Although a comprehensive model that takes into account all important issues would be immensely useful, modeling difficulties and computational intractability prevent its successful implementation. This paper extends the scope of a previously developed design decision tool for determining optimal end-of-lifecycle decisions. The single-product case is extended to a product portfolio, which has been shown to capture more demand. Demand is explicitly considered and modeled using copulas. An important result from statistics, Sklar's theorem, provides a way to use data from existing product sales to estimate demand for currently nonexistent reused products. In addition, effective age calculations are updated. On the computational front, time continuation and seeding are used with NSGA-II to converge to optima more quickly in the resulting larger problem. A personal computer case study illustrates the effect of different parameters such as portfolio size, the possibility of recycling, and limits on environmental impact (as opposed to mandated take-back).
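The copula step can be sketched directly from Sklar's theorem: a joint demand distribution is assembled from known marginals plus a dependence structure. The snippet below uses a Gaussian copula with assumed marginals and an assumed negative correlation (reused sales cannibalizing new sales); all numbers are illustrative, not the case study's estimates.

```python
# Minimal Gaussian-copula sketch of Sklar's theorem: couple two marginal
# demand distributions (new vs. reused product) through an assumed
# dependence structure. Marginals, correlation, and sample size are
# illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = -0.4                          # assumed: reused sales cannibalize new
cov = [[1.0, rho], [rho, 1.0]]

# 1. Sample the copula: correlated normals -> uniforms via the normal CDF.
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)

# 2. Apply inverse marginal CDFs (Sklar: joint = copula of the marginals).
demand_new    = stats.lognorm.ppf(u[:, 0], s=0.5, scale=1000)
demand_reused = stats.gamma.ppf(u[:, 1], a=2.0, scale=150)

print(f"sample rank correlation: "
      f"{stats.spearmanr(demand_new, demand_reused).correlation:.3f}")
```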


2009
Vol. 131 (3)
Author(s):  
Vijitashwa Pandey
Deborah Thurston

Product take-back and remanufacturing systems are difficult to implement cost-effectively. One contributing factor is the complex nature of the interrelationships among the components of a product. Modeling these relationships helps determine the product's overall performance as a function of the performances of individual components. Reliability, a commonly used measure of performance, is a good measure of the physical failure rate, but it does not always reflect value degradation as experienced by customers or experts. As a result, it is difficult to define the effective performance of remanufactured products when some components are reused while others are not. Legislated take-back mandates across the world increasingly make it necessary to understand this perceived performance. In this paper, we propose a method for combining customers'/experts' assessments of value degradation using the maximum entropy principle. This value degradation information is then coupled with the components' failure rate information. A method for modeling the performance of a product that comprises components of different ages is presented. Overall performance is measured in units of time (effective age) by aligning it with that of a product that has never been disassembled. We demonstrate the approach using a personal computer as an example.
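A minimal sketch of the maximum entropy step, assuming experts supply only a mean value-degradation level on a discrete scale; the maximum entropy distribution consistent with a mean constraint is exponential-family, and its Lagrange multiplier can be found by root finding. The grid, target mean, and single-moment constraint are our simplifications, not the paper's elicitation scheme.

```python
# Maximum-entropy reconstruction of a value-degradation distribution from a
# single pooled mean constraint. Assumption: degradation lives on a discrete
# 0..1 grid and experts agree only on the mean; both are illustrative.
import numpy as np
from scipy.optimize import brentq

x = np.linspace(0.0, 1.0, 11)     # degradation levels (0 = like new)
target_mean = 0.35                # assumed pooled expert assessment

def maxent_pmf(lam):
    """Exponential-family pmf p_i proportional to exp(lam * x_i)."""
    w = np.exp(lam * x)
    return w / w.sum()

# Solve for the Lagrange multiplier that matches the mean constraint.
lam = brentq(lambda t: maxent_pmf(t) @ x - target_mean, -50.0, 50.0)
p = maxent_pmf(lam)
print("max-entropy pmf:", np.round(p, 3))
print("achieved mean:  ", round(float(p @ x), 3))
```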


Author(s):  
Vijitashwa Pandey
Deborah Thurston

Component-level reuse enables retention of value from products recovered at the end of their first lifecycle. Reuse strategies determined at the beginning of the lifecycle are aimed at maximizing this recovered value. Decision-based design can be employed, but there are several difficulties in large-scale implementation. First, computational complexities arise: even for a product with a relatively small number of components, it becomes difficult to find the optimal component-level decisions. Second, if more than one stakeholder is involved, each interested in different attributes, the problem becomes even more difficult, due both to computational complexity and to Arrow's Impossibility Theorem. However, while the preferences of the stakeholders may not be known precisely, and aggregating those preferences poses difficulties, what is usually known is a partial ordering of alternatives. This paper presents a method for exploiting the features of a solution algorithm to address these difficulties in implementing decision-based design. Heuristic methods, including non-dominated sorting genetic algorithms (NSGA), can exploit this partial ordering and reject dominated alternatives, simplifying the problem. Including attributes of interest to the various stakeholders ensures that the solutions found are practicable. One reason product reuse has not achieved widespread acceptance is that the three entities involved (the customers, the manufacturer, and the government) lack common ground, which results in inaccurate aggregation of attributes; the proposed method avoids this. We illustrate our approach with a case study of component reuse in personal computers.
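The partial-ordering idea is easy to show in code: even without an agreed aggregate utility, dominated alternatives can be rejected outright. Below is a minimal non-dominated filter of the kind NSGA-style algorithms apply; the two objectives (cost and environmental impact, both minimized) and the candidate plans are illustrative.

```python
# Minimal non-dominated (Pareto) filter over reuse plans. Both objectives
# are minimized; candidate data are illustrative.
def dominates(a, b):
    """True if plan a is no worse than b everywhere and better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(plans):
    """Keep only plans not dominated by any other plan."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

# (cost, impact) for four hypothetical component-reuse plans
plans = [(100, 5.0), (90, 6.0), (120, 4.0), (110, 6.5)]
print(pareto_front(plans))   # (110, 6.5) is dominated and dropped
```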


Author(s):  
Natesh Chandrashekar
Sundar Krishnamurty

This paper deals with the development of simulation-based design models under uncertainty and presents an approach for building surrogate models and validating them for their efficacy and relevance from a design decision perspective. Specifically, this work addresses the fundamental research issue of how to build surrogate models that are computationally efficient, sufficiently accurate, and meaningful from the viewpoint of their subsequent use in design. Toward this goal, this work presents a Bayesian-analysis-based iterative model building and validation process leading to reliable and accurate surrogate models, which can then be invoked in the final design optimization phase. The resulting surrogate models can be expected to act as abstractions or idealizations of the engineering analysis models and to mimic system performance in a computationally efficient manner to facilitate design decisions under uncertainty. This is accomplished by first building initial models, and then refining and validating them over many stages, in line with the iterative nature of the engineering design process. Salient features of this work include the introduction of a novel preference-based design screening strategy nested in an optimally selected prior information set for validation purposes, and the use of a Bayesian-evaluation-based model-updating technique to capture new information and enhance the model's value and effectiveness. A case study of the design of a windshield wiper arm demonstrates the overall methodology, and the results are discussed.
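A minimal sketch of the iterative build-refine-validate loop, using a Gaussian process as the Bayesian surrogate and a simple "sample where the posterior is widest" refinement rule. The paper's preference-based screening and validation criteria are richer than this; the 1-D test function and refinement rule are stand-ins.

```python
# Iterative Bayesian surrogate refinement. Assumptions: a GP surrogate of an
# expensive simulation, refined where predictive uncertainty is largest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulation(x):                       # stand-in for the expensive model
    return np.sin(3 * x) + 0.3 * x ** 2

X_pool = np.linspace(0, 3, 200).reshape(-1, 1)
X = X_pool[[0, 100, 199]]                # small initial design
y = simulation(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
for stage in range(5):                   # iterative build-validate loop
    gp.fit(X, y)
    mu, sigma = gp.predict(X_pool, return_std=True)
    nxt = X_pool[[np.argmax(sigma)]]     # refine where the posterior is widest
    X = np.vstack([X, nxt])
    y = np.append(y, simulation(nxt).ravel())
    print(f"stage {stage}: max posterior std = {sigma.max():.4f}")
```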


Author(s):  
Kevin Weinert
Vijitashwa Pandey
Sara Naranjo Corona
Aleksander Danielewski

Product take-back and reuse is an effective way to reduce the environmental footprint of products. Millions of tons of waste are disposed of in landfills in the United States, with electronic products being of particular concern: while constituting a small fraction of landfilled waste, electronic components account for a majority of the environmental impact. The major challenge in addressing this issue is that the components are functionally obsolete and in a state where their numbers and types are not known. Even with concerted efforts to solve this problem through better design or collection practices, a major unknown is how much actually falls through the cracks and makes it to landfills. Human sorting and identification are impractical, while automating the process has been difficult because of the limitations of algorithms in matching the human ability to discern objects. Deep learning promises to change this. This paper discusses the use of autonomous systems that can scan unorganized heaps of products to identify and catalog components, particularly electronics. This approach can fill an important gap in our knowledge. The paper presents the testbeds created by the authors, which show promise in accomplishing this task, and discusses the implications of such identification and cataloging for design decision-making as well as environmental legislation.
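A sketch of the identification step only, assuming a convolutional classifier fine-tuned on e-waste component imagery; here a stock ImageNet ResNet-18 and a hypothetical image path stand in, since the authors' testbed models are not public. Only the inference pattern is the point.

```python
# Deep-learning identification step (inference pattern only). A pretrained
# ImageNet ResNet-18 stands in for an e-waste-specific model; the image
# path is hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("heap_frame_001.jpg").convert("RGB")   # hypothetical frame
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
    probs = torch.softmax(logits, dim=1)

top = probs.topk(3)
print("top-3 class indices:", top.indices.squeeze().tolist(),
      "probs:", [round(p, 3) for p in top.values.squeeze().tolist()])
```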


Author(s):  
Andrew G. Dempster

This chapter examines sources of global navigation satellite system (GNSS) vulnerability, identifying the broad range of topics that come under this heading and citing key references in each category. GNSS vulnerability has been a very productive area for GNSS researchers in recent years, and this chapter sets out to be a comprehensive review of the different ways that the operation of GNSS can be degraded by outside influences, from the high (system) level to the low (receiver component) level.


Author(s):  
M. Unser
B.L. Trus
A.C. Steven

Since the resolution-limiting factor in electron microscopy of biological macromolecules is not instrumental but rather the preservation of structure, operational definitions of resolution have to be based on the mutual consistency of a set of like images. The traditional measure of resolution for crystalline specimens, in terms of the extent of periodic reflections in their diffraction patterns, is such a criterion. With the advent of correlation averaging techniques for lattice rectification and the analysis of non-crystalline specimens, a more general (and, desirably, closely compatible) resolution criterion is needed. Two measures of resolution for correlation-averaged images have been described, namely the differential phase residual (DPR) and the Fourier ring correlation (FRC). However, the values they give for resolution often differ substantially. Furthermore, neither method relates in a straightforward way to the long-standing resolution criterion for crystalline specimens.
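The FRC admits a compact implementation: for each ring of spatial frequencies, correlate the two images' Fourier transforms and normalize by their power. The sketch below uses synthetic images sharing low-frequency structure; DPR would instead accumulate phase differences per ring.

```python
# Fourier ring correlation between two independently averaged images.
# The test images are synthetic and illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def frc(img1, img2, n_rings=16):
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    h, w = img1.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    edges = np.linspace(0.0, min(h, w) / 2, n_rings + 1)  # up to Nyquist
    curve = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ring = (r >= lo) & (r < hi)
        num = np.abs((f1[ring] * np.conj(f2[ring])).sum())
        den = np.sqrt((np.abs(f1[ring]) ** 2).sum()
                      * (np.abs(f2[ring]) ** 2).sum())
        curve.append(num / den if den > 0 else 0.0)
    return np.array(curve)

rng = np.random.default_rng(1)
signal = 5 * gaussian_filter(rng.normal(size=(64, 64)), sigma=3)
img_a = signal + rng.normal(size=(64, 64))   # same signal, independent noise
img_b = signal + rng.normal(size=(64, 64))
print(np.round(frc(img_a, img_b), 2))        # high at low frequency, decaying
```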


1983
Vol. 26 (1)
pp. 2-9
Author(s):  
Vincent J. Samar
Donald G. Sims

The relationship between the latency of the negative peak occurring at approximately 130 msec in the visual evoked-response (VER) and speechreading scores was investigated. A significant product-moment correlation of -.58 was obtained between the two measures, which confirmed the fundamental effect but was significantly weaker than that previously reported in the literature (-.90). Principal components analysis of the visual evoked-response waveforms revealed a previously undiscovered early VER component, statistically independent of the latency measure, which in combination with two other components predicted speechreading with a multiple correlation coefficient of S4. The potential significance of this new component for the study of individual differences in speechreading ability is discussed.
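As a purely illustrative sketch of the analysis pattern (the 1983 waveform and speechreading data are not available), the snippet below extracts principal components from synthetic evoked-response epochs and regresses speechreading scores on three component scores, reporting the multiple correlation R.

```python
# PCA of evoked-response waveforms plus multiple regression on component
# scores. All data here are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_subj, n_time = 40, 120
waveforms = rng.normal(size=(n_subj, n_time))   # stand-in VER epochs
speechread = rng.normal(size=n_subj)            # stand-in scores

scores = PCA(n_components=3).fit_transform(waveforms)
model = LinearRegression().fit(scores, speechread)
r_multiple = np.sqrt(model.score(scores, speechread))  # R from R^2
print(f"multiple correlation R = {r_multiple:.2f}")
```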

