Fuzzy RAM Analysis of the Screening Unit in a Paper Industry by Utilizing Uncertain Data

2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Harish Garg ◽  
Monica Rani ◽  
S. P. Sharma

Reliability, availability, and maintainability (RAM) analysis helps to identify the critical and sensitive subsystems in production systems that have a major effect on system performance. However, the collected or available data reflecting the system's failure and repair patterns are vague, uncertain, and imprecise due to various practical constraints. Under these circumstances it is difficult, if not impossible, to analyze the system performance to the desired degree of accuracy. To address this, an Artificial Bee Colony based Lambda-Tau (ABCBLT) technique is used for computing the RAM parameters from uncertain data up to a desired degree of accuracy. The results obtained are compared with existing Fuzzy Lambda-Tau results, and we conclude that the proposed approach yields a narrower range of uncertainty. The subcomponents are also ranked using a RAM-Index so that system performance can be improved in preferential order. The approach is illustrated by analyzing the performance of the screening unit of a paper industry.
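The Lambda-Tau part of the ABCBLT technique combines component failure rates (λ) and repair times (τ) through fault-tree gates before any fuzzification. A minimal sketch of the standard two-input gate expressions, with illustrative numbers rather than the plant's actual data:

```python
# Crisp Lambda-Tau gate expressions that fuzzy/ABC variants build on;
# the numeric inputs below are illustrative, not the paper's plant data.

def or_gate(l1, t1, l2, t2):
    """OR gate: the output fails if either input fails (series behaviour)."""
    lam = l1 + l2
    tau = (l1 * t1 + l2 * t2) / (l1 + l2)
    return lam, tau

def and_gate(l1, t1, l2, t2):
    """AND gate: the output fails only when both inputs are down (redundancy)."""
    lam = l1 * l2 * (t1 + t2)
    tau = (t1 * t2) / (t1 + t2)
    return lam, tau

def steady_availability(lam, tau):
    """Long-run availability: A = MTBF / (MTBF + MTTR) = 1 / (1 + lam * tau)."""
    return 1.0 / (1.0 + lam * tau)

# two components feeding an OR gate
lam, tau = or_gate(0.01, 2.0, 0.02, 3.0)
A = steady_availability(lam, tau)
```

In the fuzzy setting these same expressions are evaluated over alpha-cuts of the uncertain λ and τ values, which is where the bee-colony optimization enters.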



Fuzzy Systems ◽  
2017 ◽  
pp. 1070-1109
Author(s):  
Harish Garg ◽  
Monica Rani ◽  
S.P. Sharma

The main objective of the present study is to permit the reliability analyst or system manager to analyze the failure behavior of the system in a more consistent and logical manner. The collected or available data from various resources are uncertain and imprecise due to various practical constraints, and hence the performance of the system cannot be raised to the desired level. To cope with such situations and with subjective information in a consistent and logical manner, fuzzy methodology is one of the most vital and effective tools. To this end, a structural framework has been developed by the authors for analyzing and predicting the system behavior. The pulping unit of a paper industry is taken as an illustration. The failure rates and repair times of all the constituent components are obtained by solving an availability-cost optimization model using particle swarm optimization and a genetic algorithm. To increase the performance of the system, various reliability parameters are computed from the obtained results using a confidence-interval-based fuzzy lambda-tau methodology. Sensitivity and performance analyses of the system are carried out to rank the critical components in preferential order. The computed results are compared with those of the existing fuzzy lambda-tau and traditional (crisp) methodologies.
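The availability-cost optimization step can be illustrated with a toy model: the cost function and the constants below are assumptions made for illustration, and a plain grid search stands in for the PSO/GA solvers the abstract describes.

```python
# Toy availability-cost trade-off. The cost model and constants are assumed
# for illustration; the paper solves a real model with PSO and GA, whereas a
# brute-force grid search suffices for this sketch.

def availability(lam, tau):
    return 1.0 / (1.0 + lam * tau)          # steady-state availability

def cost(lam, tau):
    # lower failure rate and faster repair both cost more (hypothetical model)
    return 1.0 / lam + 5.0 / tau

def optimise(target=0.95, n=100):
    best = None
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            lam = 0.001 * i                  # failure rate grid: 0.001 .. 0.1
            tau = 0.1 * j                    # repair time grid:  0.1 .. 10.0
            if availability(lam, tau) >= target:
                c = cost(lam, tau)
                if best is None or c < best[0]:
                    best = (c, lam, tau)
    return best

best_cost, lam_opt, tau_opt = optimise()
```

The optimizer picks the cheapest (λ, τ) pair that still meets the availability target; swapping the grid search for PSO or GA changes only how the search space is explored, not the formulation.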


2014 ◽  
Vol 23 (05) ◽  
pp. 1450008
Author(s):  
Harish Garg ◽  
Monica Rani ◽  
S. P. Sharma

For an industrial system, reliability, availability and maintainability (RAM) analysis plays an important role in any design modification aimed at achieving optimum performance. However, it is difficult to predict these parameters from available, imprecise data up to a desired degree of accuracy. To this end, a novel technique, an artificial bee colony based Lambda-Tau, is presented for computing these parameters from available or collected data up to a desired degree of accuracy. In this technique, expressions for the RAM parameters are obtained by the Lambda-Tau methodology, and their corresponding membership functions are computed by formulating a nonlinear programming problem. A generalized RAM-Index is used to rank the components of the system by their performance impact so that system productivity can be improved. The presented approach is investigated through a case study of the washing unit of a paper industry, and the computed results are compared with those of existing Lambda-Tau and evolutionary algorithm techniques.
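The membership-function step described here amounts to bounding a RAM parameter over each alpha-cut of the fuzzy inputs. A minimal sketch with triangular fuzzy numbers and brute-force bounding (standing in for the ABC-solved nonlinear programme; the numbers are illustrative):

```python
# At each alpha-cut, the lower/upper bounds of the availability are found by
# optimising over the cut. Brute force stands in for the paper's ABC-solved
# nonlinear programme; the fuzzy numbers below are illustrative.

def alpha_cut(tfn, alpha):
    a, b, c = tfn                            # triangular fuzzy number (a, b, c)
    return a + alpha * (b - a), c - alpha * (c - b)

def availability(lam, tau):
    return 1.0 / (1.0 + lam * tau)

def availability_bounds(lam_tfn, tau_tfn, alpha, n=40):
    llo, lhi = alpha_cut(lam_tfn, alpha)
    tlo, thi = alpha_cut(tau_tfn, alpha)
    vals = [availability(llo + i * (lhi - llo) / n, tlo + j * (thi - tlo) / n)
            for i in range(n + 1) for j in range(n + 1)]
    return min(vals), max(vals)

lo, hi = availability_bounds((0.02, 0.03, 0.05), (1.0, 2.0, 4.0), alpha=0.5)
```

Sweeping alpha from 0 to 1 and stacking the resulting intervals reconstructs the membership function of the availability.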


2021 ◽  
Vol 11 (19) ◽  
pp. 9013
Author(s):  
Douha Macherki ◽  
Thierno M. L. Diallo ◽  
Jean-Yves Choley ◽  
Amir Guizani ◽  
Maher Barkallah ◽  
...  

Production systems must be able to adapt to increasingly frequent internal and external changes. Cyber-Physical Production Systems (CPPS), thanks to their potential capacity for self-reconfiguration, can cope with this need for adaptation. To implement the self-reconfiguration functionality under economical and safe conditions, CPPS must have appropriate tools and contextualized information. This information can be organized in the form of an architecture. In this paper, after analyzing several holonic and non-holonic architectures, we propose a holonic architecture that allows for reliable and efficient reconfiguration. We call this architecture QHAR (Q-Holonic-based ARchitecture). QHAR is built around the notion of a Q-holon, which has four dimensions (physical, cyber, human, and energy) and can exchange three flows (energy, data, and materials). It is a generic holon that can represent any entity or actor of the supply chain. QHAR is structured in three levels: a centralized control level, a decentralized control level, and an execution level. QHAR implements the principle of an oligarchic control architecture by deploying both hierarchical and heterarchical control approaches. This ensures overall system performance and reactivity to hazards. The proposed architecture is tested and validated on a case study.
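The Q-holon's four dimensions and three flows can be sketched as a small data model. All field and method names here are assumptions for illustration; the paper does not publish an API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Q-holon record as described in the abstract:
# four dimensions and three admissible flows. Names are assumptions.

FLOWS = ("energy", "data", "material")
LEVELS = ("centralized_control", "decentralized_control", "execution")

@dataclass
class QHolon:
    name: str
    level: str                                    # one of LEVELS
    physical: dict = field(default_factory=dict)  # equipment state
    cyber: dict = field(default_factory=dict)     # models, decision logic
    human: dict = field(default_factory=dict)     # operator roles
    energy: dict = field(default_factory=dict)    # consumption profile
    inbox: list = field(default_factory=list)

    def exchange(self, other, flow, payload):
        # only the three QHAR flow types may travel between holons
        if flow not in FLOWS:
            raise ValueError(f"unknown flow: {flow}")
        other.inbox.append((self.name, flow, payload))

cell = QHolon("machining_cell", "execution")
supervisor = QHolon("line_supervisor", "decentralized_control")
cell.exchange(supervisor, "data", {"status": "reconfiguring"})
```

The three-level structure maps naturally onto which holons are allowed to exchange with which, with the hierarchical and heterarchical paths both expressed as flows.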


2020 ◽  
Vol 142 (4) ◽  
Author(s):  
Abdelhamid Mraoui ◽  
Abdallah Khellaf

Abstract In this work, the design of a hydrogen production system is optimized for Algiers, Algeria. The system produces hydrogen by electrolysis using a photovoltaic (PV) generator as the source of electricity. All elements of the system have been modeled to take practical constraints into account. The cost of producing hydrogen is minimized by varying the total power of the photovoltaic generator, and an optimal ratio between the peak power of the PV array and the nominal power of the electrolyzer is determined. Photovoltaic module technology was varied using a large database of electrical characteristics; it was noted that PV technology does not have a very significant impact on cost. The minimum cost is around $0.44/Nm³, with a power ratio of 1.45 in this case. This represents a cost reduction of around 12% compared to a unit ratio. The power ratio and cost are linearly dependent. Only a small number of technologies give a relatively low cost of about $0.35/Nm³. These generators are interesting; however, we assumed an initial cost of $2.00/Wp for all technologies. In addition, it was noted that hydrogen production can be increased by 10% by increasing the power of the photovoltaic generator, with an extra cost of only 0.1%.
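Why oversizing the PV array can lower the cost per unit of hydrogen can be seen with a toy model: the daily profile, cost constants, and linear scaling below are assumptions, not the paper's validated model (which found an optimal ratio of about 1.45).

```python
# Toy PV-to-electrolyzer power-ratio optimisation. Profile and cost constants
# are assumed for illustration only.

PROFILE = [0.0, 0.2, 0.5, 0.8, 1.0, 0.8, 0.5, 0.2, 0.0]  # normalised PV output
C_PV, C_ELZ = 2.0, 1.0                                    # relative capital costs

def unit_cost(ratio):
    # electrolyzer nominal power fixed at 1.0; PV peak power = ratio;
    # PV output above the stack's rating is clipped (curtailed)
    energy = sum(min(p * ratio, 1.0) for p in PROFILE)
    return (C_PV * ratio + C_ELZ) / energy                # cost per unit of H2

ratios = [1.0 + 0.05 * k for k in range(21)]              # scan 1.0 .. 2.0
best_ratio = min(ratios, key=unit_cost)
```

Oversizing the array fills the electrolyzer's shoulder hours and spreads the fixed stack cost over more hydrogen, until curtailment losses outweigh the gain; the same trade-off drives the paper's 1.45 optimum.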


2007 ◽  
Vol 41 ◽  
pp. 101-107 ◽  
Author(s):  
D. Pilling

Summary: Some countries have introduced a requirement for genetic impact assessments prior to granting permission for the import of new exotic livestock breeds. However, the merits of such a system are not universally accepted. During February 2007 a discussion on the subject took place on FAO's Domestic Animal Diversity Network (DAD-Net) electronic forum. This paper presents a description of how the discussion developed, and a summary of the issues raised. Arguments both for and against requiring impact assessments were put forward. Those opposing such measures focused on the risks of limiting access to animal genetic resources (AnGR), and questioned the benefits of government interference. Practical constraints to implementation and enforcement were also noted. Counter-arguments pointed to the potential for avoiding the loss of valuable AnGR, and stressed governments' responsibilities to intervene where necessary to promote sustainable development, defend the interests of the poor, or protect national heritage. The debate also ranged more widely, encompassing the respective roles of local and exotic AnGR in different regions of the world and in different production systems.


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Milan Mirkovic

The aim of this paper is to analyze the impact of failure type on economic risks in the bidding phase, the most important part of construction project management. The survey covered the impact of risk on the process of determining unit prices from the perspective of a potential contractor. The failure rates and repair rates of 34 machines from the machine park of a road construction company were also investigated. On the basis of the obtained parameters and depreciation periods, the operational availability of the components of the construction production systems was determined. The proposed methodology for estimating the impact of the availability function is a modified frequency-balancing method. It was tested on a real project, in the process of harmonizing construction time norms that preceded the final adoption of unit prices. The differences in prices result from failures of the construction machinery and plants, and they support the hypothesis that more realistic project costs can be obtained.
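The step from machine failure/repair data to adjusted unit prices can be sketched as follows; the price-adjustment rule here is a deliberate simplification (an assumption), not the paper's exact modified frequency-balancing method, and the rates are illustrative.

```python
# Operational availability from field failure/repair data, and a toy (assumed)
# way downtime inflates a unit price; constants are illustrative only.

def operational_availability(failure_rate, repair_rate):
    # steady state: A = mu / (lambda + mu)
    return repair_rate / (failure_rate + repair_rate)

def adjusted_unit_price(base_price, availability):
    # inflate the machine time norm by expected downtime, so the contractor's
    # unit price reflects hours the machine cannot work
    return base_price / availability

A = operational_availability(0.01, 0.19)   # lambda = 0.01/h, mu = 0.19/h
price = adjusted_unit_price(100.0, A)
```

Repeating this per machine and aggregating over the machine park is what lets the bid reflect realistic, failure-adjusted costs.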


2010 ◽  
Vol 10 (4) ◽  
pp. 1208-1221 ◽  
Author(s):  
Komal ◽  
S.P. Sharma ◽  
Dinesh Kumar

2000 ◽  
Vol 6 (4) ◽  
pp. 321-357 ◽  
Author(s):  
S.-Y. Chiang ◽  
C.-T. Kuo ◽  
J.-T. Lim ◽  
S. M. Meerkov

This work develops improvability theory for assembly systems. It consists of two parts: Part I includes the problem formulation and the analysis technique; Part II presents the so-called improvability indicators and a case study. Improvability theory addresses the question of improving performance in production systems with unreliable machines. We consider both constrained and unconstrained improvability. In the constrained case, the problem consists of determining whether there exists a redistribution of resources (inventory and workforce) that leads to an increase in the system's production rate. In the unconstrained case, the problem consists of identifying a machine and a buffer that impede the system performance most strongly. The investigation of the improvability properties requires an expression for the system performance measures as functions of the machine and buffer parameters. This paper presents a method for evaluating these functions and illustrates their practical utility using a case study at an automotive components plant. Part II uses the method developed here to establish conditions of improvability and to describe additional results of the case study.
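The unconstrained-improvability question, which machine impedes performance most, can be illustrated with a toy simulation of a two-machine, one-buffer line with Bernoulli machines; the parameters and the slot-timing convention are assumptions, not the paper's analytical method.

```python
import random

# Toy unconstrained improvability check: simulate a two-machine, one-buffer
# Bernoulli line and see whose improvement raises the production rate more.

def production_rate(p1, p2, buffer_size, steps=100000, seed=42):
    rng = random.Random(seed)
    buf, produced = 0, 0
    for _ in range(steps):
        # decide both machines from the state at the start of the slot
        m1_makes = rng.random() < p1 and buf < buffer_size   # blocked if full
        m2_makes = rng.random() < p2 and buf > 0             # starved if empty
        if m2_makes:
            buf -= 1
            produced += 1
        if m1_makes:
            buf += 1
    return produced / steps

base = production_rate(0.8, 0.9, buffer_size=2)
up1 = production_rate(0.9, 0.9, buffer_size=2)   # improve machine 1
up2 = production_rate(0.8, 1.0, buffer_size=2)   # improve machine 2
bottleneck = "machine 1" if up1 - base > up2 - base else "machine 2"
```

The paper replaces such brute-force perturbation with closed-form performance expressions, but the question answered is the same: where does a unit of improvement effort pay off most.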


In recent years the use of virtualization technology has increased rapidly, which drives the need to improve the performance efficiency of virtual machines. This study aims to enhance the performance of Docker containers in cloud computing. The work presented in the paper considers major parameters of virtual machines, such as availability, load, location, and energy, to increase system performance. The main objective of the work is to analyze the load of the machines and distribute it equally. For this purpose the Artificial Bee Colony (ABC) algorithm is used, replacing the traditional ACO approach because of features such as simplicity, flexibility, and robustness. The output of the proposed work is evaluated in terms of energy consumption and job completion. The observed values for these factors demonstrate the advantage of the proposed ABC-based technique over the traditional ACO-based technique.
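A minimal generic ABC sketch for balancing job loads across virtual machines follows; the objective (variance of VM loads), the parameters, and the job data are assumptions for illustration, not the paper's experimental setup.

```python
import random

# Generic Artificial Bee Colony (ABC) sketch: food sources are candidate
# job-to-VM assignments, fitness rewards evenly balanced VM loads.

def imbalance(assign, job_sizes, n_vms):
    loads = [0.0] * n_vms
    for j, v in enumerate(assign):
        loads[v] += job_sizes[j]
    mean = sum(loads) / n_vms
    return sum((l - mean) ** 2 for l in loads) / n_vms   # variance of loads

def abc_balance(job_sizes, n_vms, n_bees=10, limit=5, iters=100, seed=0):
    rng = random.Random(seed)
    n = len(job_sizes)
    def fit(a):
        return 1.0 / (1.0 + imbalance(a, job_sizes, n_vms))
    foods = [[rng.randrange(n_vms) for _ in range(n)] for _ in range(n_bees)]
    trials = [0] * n_bees
    best = max(foods, key=fit)[:]
    for _ in range(iters):
        # employed phase: one random reassignment per source, greedy accept
        for i in range(n_bees):
            cand = foods[i][:]
            cand[rng.randrange(n)] = rng.randrange(n_vms)
            if fit(cand) > fit(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # onlooker phase: extra searches biased toward fitter sources
        total = sum(fit(a) for a in foods)
        for _ in range(n_bees):
            r, acc, pick = rng.uniform(0, total), 0.0, 0
            for i, a in enumerate(foods):
                acc += fit(a)
                if acc >= r:
                    pick = i
                    break
            cand = foods[pick][:]
            cand[rng.randrange(n)] = rng.randrange(n_vms)
            if fit(cand) > fit(foods[pick]):
                foods[pick], trials[pick] = cand, 0
        # scout phase: abandon exhausted sources and start fresh
        for i in range(n_bees):
            if trials[i] > limit:
                foods[i] = [rng.randrange(n_vms) for _ in range(n)]
                trials[i] = 0
        cur = max(foods, key=fit)
        if fit(cur) > fit(best):
            best = cur[:]
    return best

jobs = [5.0, 3.0, 8.0, 2.0, 7.0, 4.0, 6.0, 1.0]
best = abc_balance(jobs, n_vms=4)
```

The employed/onlooker/scout division is what distinguishes ABC from a plain hill climber: onlookers concentrate effort on promising assignments while scouts keep the colony from stagnating.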

