How Do We (Actually) Know Our Quality Is Improving?

Author(s): Justin R. Papp, William H. Forbes, Matthew A. R. Yarmuch

We have all dealt with performance metrics in the pipeline industry. How do we measure operational excellence? Are we prioritizing the right corrective actions? Are our existing metrics fair, and do they drive the right behaviors? Will they recognize success and actually show us and our clients that we are improving? This paper describes how Enbridge Major Projects measures and knows that our Quality is improving, and how we prioritize, focus, and monitor Quality improvement. Using our roadmap, your organization can transform existing data streams from anecdote into well-established facts that produce actionable results and drive business objectives. To reach this outcome, Enbridge Major Projects quickly matured our Quality Culture by leveraging our strong Safety Culture and habits. On our journey to meaningful overall Quality metrics, Enbridge built a foundation of non-punitive incident reporting using incident resolution tools and a Cost of Quality model. Cost of Quality models can be designed and executed in a variety of ways; this paper focuses on applying a model specifically suited to pipeline construction and operational activities. Key topics addressed include:
• basic common principles of an overall Cost of Quality model,
• various data collection methods to suit the model's design, and
• how a Documented Defects Quality cost model allows Enbridge to identify, prioritize, and monitor Quality improvements focused on preventing recurrence and occurrence of Quality issues.
Examples are provided for common pipeline applications, including valves, pipe, and other commodities and services. This approach has enabled Enbridge Major Projects to prioritize improvement actions and meet business objectives. Applying a Cost of Quality model will enhance your operational excellence, and greater adoption would provide the foundation for industry-wide Quality performance metrics that recognize success and validate that Quality is improving in the pipeline industry.
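The roll-up behind a Cost of Quality model of this kind can be sketched in a few lines. The category names below follow the common prevention-appraisal-failure breakdown; the function name and all dollar figures are illustrative assumptions, not Enbridge's actual model or data:

```python
# Minimal sketch of a prevention-appraisal-failure (PAF) Cost of Quality
# roll-up. Category names and figures are illustrative only.

def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Return total CoQ and the conformance / non-conformance split."""
    conformance = prevention + appraisal                    # cost of achieving quality
    non_conformance = internal_failure + external_failure   # cost of failures
    return {
        "conformance": conformance,
        "non_conformance": non_conformance,
        "total": conformance + non_conformance,
    }

# Hypothetical project figures (in $k): training/audits, inspection/NDE,
# rework caught before delivery, and defect claims after delivery.
coq = cost_of_quality(prevention=120, appraisal=300,
                      internal_failure=450, external_failure=200)
print(coq["total"])  # 1070
```

A "Documented Defects" model of the kind described would feed the two failure inputs from incident reports, which is what makes non-punitive reporting a prerequisite for trustworthy totals.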

2017, Vol 21 (4), pp. 351-376
Author(s): Marcin Czajkowski

Purpose The purpose of this paper is to critically examine existing cost of quality models and, having identified the issues and limitations of historic models, to develop and implement a novel, structured hybrid cost of quality model to identify and effectively manage the cost of a company's product. Design/methodology/approach A theoretical framework is proposed based on an integration of three existing, historical cost of quality models into a structured hybrid model. Subsequently, an exploratory pilot case study in a manufacturing environment is described that illustrates the value of the model. Findings The paper shows how a hybrid model can identify cost of quality more accurately than the traditional models. Using the new model, the author shows how gaps between a product's theoretical and actual costs can be highlighted, allowing management to drive down cost of quality and improve business performance. Research limitations/implications The model would benefit from a company-wide implementation. The present study provides a starting point for further research in the international manufacturing sector. Practical implications The framework improves the knowledge of cost of quality by providing a new case study with full results and analysis from a UK-based manufacturing company. It provides a critical re-evaluation of the available literature, including the most recent publications as far as practically possible within the timescale available. The study shows the importance of comprehensive cost collection if companies are to have the data needed to manage business excellence. Originality/value The paper presents the first structured hybrid model for measuring cost of quality, using the strongest points of the three main approaches while addressing their limitations. It gives new arguments against the allocation of some cost elements within BS 6143-2:1990, resulting in recommendations for further weighing of the pros and cons of the suggestion.


2021, Vol 924 (1), pp. 012067
Author(s): N F Rayesa, D Y Ali

Abstract The paper compares cost of quality models across different business units of selected apple juice producers. Data and information were gathered through observation and interviews with key informants and related parties. The data were then analysed using the Activity Based Costing (ABC) approach to derive a quality cost model for each business unit. The primary cost analysis shows a similar proportionality between prevention, appraisal, and failure costs among the three business units. Results illustrate that the higher the production capacity, the higher the quality costs incurred or budgeted. The calculation of quality costs shows that most quality costs come from appraisal costs. The studied business units exhibited high appraisal costs, which correspond to a low number of defective or failed products.
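The cross-unit comparison described above reduces to computing each category's share of total quality cost per business unit. A minimal sketch, with invented unit names and figures (the paper's actual ABC data is not reproduced here):

```python
# Sketch of comparing quality-cost structure across business units.
# All unit names and cost figures (in arbitrary currency units) are invented;
# they are shaped to echo the abstract's finding that appraisal dominates.

units = {
    "unit_A": {"prevention": 10, "appraisal": 55, "failure": 5},
    "unit_B": {"prevention": 20, "appraisal": 110, "failure": 12},
    "unit_C": {"prevention": 40, "appraisal": 230, "failure": 25},
}

def cost_shares(costs):
    """Fraction of total quality cost contributed by each category."""
    total = sum(costs.values())
    return {category: round(value / total, 2) for category, value in costs.items()}

for name, costs in units.items():
    print(name, cost_shares(costs))
```

Printed this way, the proportions stay nearly constant while absolute costs scale with capacity, which is the pattern the abstract reports.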


2016, Vol 33 (9), pp. 1270-1285
Author(s): Markus Plewa, Gernot Kaiser, Evi Hartmann

Purpose The purpose of this paper is to provide empirical evidence for competing representations of the prevention-appraisal-failure model of quality cost. Design/methodology/approach The authors conduct regression analysis on a secondary data set to reveal relationships among total cost of quality, its components and overall quality performance. Findings Total cost of quality and its failure cost component are significantly lower at higher levels of quality, while the prevention and appraisal cost components are not observed to be significantly higher at higher levels of quality. The authors propose a modification to the modern representation of the prevention-appraisal-failure model. Practical implications In manufacturing, ever higher levels of quality are associated with significantly lower quality cost. Originality/value Using a large, unique data set for secondary analysis, combined with employing a high-level measure for overall quality performance, the authors provide evidence for the aggregate explanatory power of prevalent representations of the prevention-appraisal-failure cost of quality model.
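The regression the authors describe amounts to fitting cost-of-quality components against a quality measure and inspecting the signs. A self-contained sketch with invented data points (the paper's secondary data set is not reproduced here), using a plain ordinary-least-squares fit:

```python
# Illustrative regression of total Cost of Quality against a quality level,
# shaped to echo the finding that total CoQ falls as quality rises.
# The data points are invented, not the paper's data.

def ols(x, y):
    """Ordinary least squares slope and intercept for one predictor."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return slope, mean_y - slope * mean_x

quality = [90.0, 95.0, 99.0, 99.9]   # e.g. first-pass yield, %
total_coq = [14.0, 9.0, 5.0, 3.5]    # CoQ as % of sales, hypothetical

slope, intercept = ols(quality, total_coq)
print(slope < 0)  # True: in this sample, higher quality means lower total CoQ
```

The paper's modification to the modern PAF representation rests on the same kind of sign test applied separately to the prevention, appraisal, and failure components.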


Author(s): Robert B. Handfield, Anand Nair

Counterfeiting is a problem many companies do not want to acknowledge or talk about. However, ignoring the problem effectively allows counterfeiters to operate unchecked. A multilayered strategy that adopts several approaches and engages the entire organization is needed to address the counterfeiting problem. This chapter provides a set of recommendations to address the issue of counterfeiting. Initially, the size of the problem must be estimated and the return on investment approximated. This can help define the need for a team of experts to work in this area, leading to a set of performance metrics that are aligned with business objectives and outcomes. Next, the key focal product segments should be targeted and a system for identifying products through product trademark registration with customs authorities should be completed. In the end, combating counterfeiting is not a supply chain problem, it is not a legal problem, nor is it a packaging and covert marking problem. It is a global problem, one that impacts all organizations, large and small. All business functions need to be part of the discussion, not just a single brand security function. Failure to approach counterfeiting in this manner will simply allow the problem to continue to grow.


2021, Vol 29 (3)
Author(s): Péter Orosz, Tamás Tóthfalusi

Abstract The increasing number of Voice over LTE deployments and IP-based voice services raises the demand for user-centric service quality monitoring. This domain's leading challenge is measuring user experience quality reliably without performing subjective assessments or applying the standard full-reference objective models. While the former is time- and resource-consuming and primarily executed ad hoc, the latter depends upon a reference source and processes the voice payload, which may offend user privacy. This paper presents a packet-level measurement method (introducing a novel metric set) to objectively assess network and service quality online. It is accomplished without inspecting the voice payload or requiring a reference voice sample. The proposal has three contributions: (i) our method focuses on the timeliness of the media traffic, introducing new performance metrics that describe and measure the service's time-domain behavior from the voice application viewpoint. (ii) Based on the proposed metrics, we also present a no-reference Quality of Experience (QoE) estimation model. (iii) Additionally, we propose a new method to identify the pace of the speech (slow or dynamic) as long as voice activity detection (VAD) is present between the endpoints. This identification supports the introduced quality model in estimating the perceived quality with higher accuracy. The performance of the proposed model is validated against a full-reference voice quality estimation model called AQuA, using real VoIP traffic (originating from assorted voice samples) in controlled transmission scenarios.
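The paper's own metric set is not reproduced in the abstract, but the kind of payload-free, packet-level timeliness measurement it builds on can be illustrated with the classic RTP interarrival-jitter estimator from RFC 3550, which also needs only arrival times and timestamps:

```python
# Illustration of a payload-free timeliness metric: the RTP interarrival
# jitter estimator from RFC 3550, section 6.4.1. The paper's metric set is
# richer; this only shows that timing quality is measurable without
# inspecting the voice payload or holding a reference sample.

def interarrival_jitter(arrivals, timestamps):
    """arrivals: receive times (s); timestamps: RTP timestamps (s).
    Returns the running jitter estimate after the last packet."""
    j = 0.0
    for i in range(1, len(arrivals)):
        # Difference in transit time between consecutive packets.
        d = (arrivals[i] - arrivals[i - 1]) - (timestamps[i] - timestamps[i - 1])
        j += (abs(d) - j) / 16.0  # exponential smoothing per RFC 3550
    return j

# Packets stamped every 20 ms; the third one arrives 5 ms late.
ts = [0.00, 0.02, 0.04, 0.06]
rx = [0.10, 0.12, 0.145, 0.16]
print(interarrival_jitter(rx, ts))  # nonzero: the late packet raises jitter
```

For perfectly periodic delivery the estimate stays at zero, so deviations of this metric directly flag the time-domain impairments the paper's QoE model works from.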


Author(s): Lauren-Brooke Eisen, Miriam Aroni Krinsky

Local prosecutors are responsible for 95 percent of criminal cases in the United States—their charging decisions holding enormous influence over the number of people incarcerated and the length of sentences served. Performance metrics are a tool that can align the vision of elected prosecutors with the tangible actions of their offices' line attorneys. The right metrics can provide clarity to individual line attorneys around the mission of the office and the goals of their job. Historically, however, prosecutor offices have relied on evaluation metrics that incentivize individual attorneys to prioritize more punitive responses and volume-driven activity—such as tracking the number of cases processed, indictments, guilty pleas, convictions, and sentence lengths. Under these past approaches, funding, budgeting, and promotional decisions are frequently linked to regressive measures that fail to account for just results. As more Americans have embraced the need to end mass incarceration, a new wave of reform-minded district attorneys has won elections. To ensure they are accountable to the voters who elected them into office and achieve the changes they championed, they must align measures of success with new priorities for their offices. New performance metrics predicated on the goals of reducing incarceration and enhancing fairness can shrink prison and jail populations, while improving public trust and promoting healthier and safer communities. The authors propose a new set of metrics for elected prosecutors to consider in designing performance evaluations, both for their offices and for individual attorneys. The authors also suggest that for these new performance measures to effectively drive decarceration practices, they must be coupled with careful, thoughtful implementation and critical data-management infrastructure.


2021, Vol 1 (63), pp. 38-43
Author(s): S. Koshel, G. Koshel

To create reliable, highly efficient, energy-saving machines for light industry, it is necessary to study the dynamic processes of movement of the links of the mechanisms of which they are composed. Such studies deserve particular attention for machines whose mechanisms operate cyclically. During the execution of a technological operation in light industry machines with periodic cyclic movement of the working bodies, the main shaft moves unevenly. This is caused by the movement of the links of the mechanism with certain accelerations, combined with the periodic nature of the technological loads, which vary in magnitude and direction. The uneven movement leads to additional loads in the kinematic pairs of the mechanisms, causes mechanical vibrations in the motion transmission systems and errors in the positioning of the working bodies, and affects the technological process of the equipment. Uniform and stable thread tension is the key to high-quality loop formation in knitwear. Additional dynamic loads affect the technological tension of textile threads during equipment operation. These loads are caused by the accelerated movement of the links of the mechanism, which is especially significant for technological equipment whose links have a reverse working stroke. In such mechanisms, the angular accelerations of the links and the linear accelerations of their individual points can reach critically permissible values. It is possible to ensure that the working bodies of the machine move according to a law for which the thread tension remains optimal; to do this, the right type of driving mechanism must be chosen.
The aim of the work is to conduct a structural-kinematic study of the mechanism of the reversible movement of the needle drum of a knitting machine, which justifies the selection of the required type of mechanism for such equipment. Confirmation has been obtained that the conditions for loop formation improve when knitting on a knitting machine with a reversible needle drum movement based on a rocker mechanism.

