Efficient Latch Optimization Using Exclusive Sets

Author(s):  
E.M. Sentovich ◽  
H. Toma ◽  
G. Berry

1999 ◽  
Vol 121 (2) ◽  
pp. 295-299 ◽  
Author(s):  
C. K. H. Koh ◽  
J. Shi ◽  
W. J. Williams ◽  
J. Ni

The sheet metal drawing operation is a complex manufacturing process involving more than forty process variables. The intricate interaction among these variables affects the forming tonnage, which is measured by strain gages mounted on the press. A fault is said to occur when any of these process variables deviates beyond its specified limits. Current detection schemes based on thresholding do not fully exploit the information in the tonnage signals for the detection and isolation of multiple fault conditions. It is thus an excellent case study for demonstrating the implementation of the detection methodology presented in Part 1. By partitioning the tonnage signature into disjoint segments, mutually exclusive sets of Haar coefficients can be used to isolate faults in each stage of the process.


FEBS Letters ◽  
1988 ◽  
Vol 242 (1) ◽  
pp. 144-148 ◽  
Author(s):  
Evita Mohr ◽  
Ulrich Bahnsen ◽  
Christiane Kiessling ◽  
Dietmar Richter

2017 ◽  
Vol 34 (8) ◽  
pp. 1152-1166
Author(s):  
Carla A. Vivacqua ◽  
Linda Lee Ho ◽  
André L.S. Pinho

Purpose – The purpose of this paper is to show how to properly use the method of replacement to construct mixed two- and four-level minimum setup split-plot type designs that accommodate the presence of hard-to-assemble parts.

Design/methodology/approach – Split-plot type designs are economical approaches in industrial experimentation. These types of designs are particularly useful for situations involving interchangeable parts with different degrees of assembly difficulty. Methodologies for designing and analyzing such experiments have advanced lately, especially for two-level designs, but practical needs may require the inclusion of factors with more than two levels. Here, the authors consider an experiment to improve the performance of a Baja car involving two- and four-level factors.

Findings – The authors find that the direct use of the existing minimum setup maximum aberration (MSMA) catalogs for two-level split-plot type designs may lead to inappropriate designs (e.g. low resolution). The existing method of replacement for searching exclusive sets of the form (α, β, αβ) available in the literature is suitable for completely randomized designs, but it may not provide efficient plans for designs with restricted randomization.

Originality/value – The authors provide a general framework for practitioners and extend the algorithm to determine the number of generators and the number of base factors at each stratum, which guide the selection of mixed two- and four-level MSMA split-plot type designs.
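For readers unfamiliar with the method of replacement mentioned above, the following Python sketch (our own illustration, not the authors' code; the function name and level coding are ours) shows the basic construction: a four-level factor replaces an exclusive set of three two-level columns of the form (α, β, αβ), where the third column is the product of the first two.

```python
import itertools

# Method of replacement (sketch): the four combinations of two two-level
# base columns (a, b) map to the four levels of a new factor; the product
# column ab completes the exclusive set (a, b, ab) and must not be reused
# for any other effect in the design.

def four_level_from_two(a, b):
    """Map paired two-level (+1/-1) settings to one four-level factor (0..3)."""
    level_map = {(-1, -1): 0, (-1, 1): 1, (1, -1): 2, (1, 1): 3}
    return [level_map[(ai, bi)] for ai, bi in zip(a, b)]

# A full 2^2 factorial in the base columns visits each four-level setting once.
runs = list(itertools.product([-1, 1], repeat=2))
a = [r[0] for r in runs]
b = [r[1] for r in runs]
ab = [x * y for x, y in zip(a, b)]  # third column of the exclusive set
print(four_level_from_two(a, b))   # -> [0, 1, 2, 3]
```

The paper's point is that under restricted randomization the choice of which exclusive set to replace matters; this sketch only shows the replacement step itself.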


2017 ◽  
Vol 8 (3) ◽  
pp. 27
Author(s):  
Frank Heilig ◽  
Edward J. Lusk

The best-practices execution of the audit is conditioned by the facility with which Decision Support Systems [DSS] can be created using simple Excel™ programming tools and functionalities. Such a DSS can aid in the exclusive binary triage of the many client accounts, each of which typically contains tens of thousands of items, into: {Accounts that may warrant Extended Procedures Testing [EPT]} or {Accounts that may not warrant EPT}. We use the Newcomb-Benford first-digit profile as a triage platform to screen client accounts into the above-mentioned exclusive sets. We call this DSS the Newcomb-Benford Robust Screening:DSS [NBRS:DSS]. We report on the details of its development and vetting, and illustrate its functionalities using one of the historical Benford datasets. The NBRS:DSS employs four account screening platforms, each of which has been reported in the literature. The NBRS:DSS is available from the authors as a free download without restrictions on its use.
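The first-digit triage idea can be sketched in a few lines of Python (our own minimal illustration of a Benford-based screen, not the authors' Excel DSS; the chi-square statistic and the fixed threshold are our assumptions — the actual NBRS:DSS combines four screening platforms):

```python
import math
from collections import Counter

# Expected Benford first-digit probabilities: P(d) = log10(1 + 1/d).
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leading significant digit of a positive amount, or None for zero."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0]) if s else None

def benford_chi2(amounts):
    """Chi-square distance between an account's first-digit profile and Benford."""
    digits = [first_digit(a) for a in amounts if first_digit(a)]
    n = len(digits)
    counts = Counter(digits)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD.items())

def triage(amounts, threshold=15.51):  # chi-square critical value, 8 df, alpha = 0.05
    """Binary triage into the two exclusive sets named in the abstract."""
    return "Extended Procedures Testing" if benford_chi2(amounts) > threshold else "No EPT"
```

An account whose line-item amounts track the Benford profile falls in the "No EPT" set; a profile that deviates strongly (e.g. a digit used far too often) is flagged for extended procedures.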


1999 ◽  
Vol 121 (2) ◽  
pp. 290-294 ◽  
Author(s):  
C. K. H. Koh ◽  
J. Shi ◽  
W. J. Williams ◽  
J. Ni

Most manufacturing processes involve several process variables which interact with one another to produce a resultant action on the part. A fault is said to occur when any of these process variables deviates beyond its specified limits. An alarm is triggered when this happens. Low-cost and less sophisticated detection schemes based on threshold bounds on the original measurements (without feature extraction) often suffer from high false alarm and missed detection rates when the process measurements are not properly conditioned. They are unable to detect frequency- or phase-shifted fault signals whose amplitudes remain within specifications. They also provide little or no information about the multiplicity (number of faults in the same process cycle) or location (the portion of the cycle where the fault was detected) of the fault condition. A method of overcoming these limitations is proposed in this paper. The Haar transform is used to generate sets of detection signals from the original measurements of process monitoring signals. By partitioning these signals into disjoint segments, mutually exclusive sets of Haar coefficients can be used to locate faults at different phases of the process. The lack of a priori information on the fault condition is overcome by using the Neyman-Pearson criterion for the uniformly most powerful (UMP) form of the likelihood ratio test (LRT).
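The segment-wise Haar idea can be illustrated with a short Python sketch (our reading of the abstract, not the authors' implementation; the function names and the simple fixed-limit rule are ours — the paper uses a Neyman-Pearson UMP likelihood ratio test rather than a fixed threshold):

```python
import numpy as np

# Because Haar basis functions have compact support, computing the transform
# separately within disjoint segments yields mutually exclusive coefficient
# sets: an out-of-limit coefficient points to the segment (process phase)
# where the fault occurred.

def haar_coeffs(seg):
    """One level of the Haar transform: pairwise sums and differences / sqrt(2)."""
    seg = np.asarray(seg, dtype=float)
    pairs = seg.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

def segment_coeffs(signal, n_segments):
    """Exclusive Haar coefficient set for each disjoint segment."""
    segs = np.split(np.asarray(signal, dtype=float), n_segments)
    return [haar_coeffs(s) for s in segs]

def locate_fault(signal, baseline, n_segments, limit):
    """Flag the segments whose detail coefficients deviate beyond `limit`."""
    flagged = []
    for k, ((_, d), (_, d0)) in enumerate(
            zip(segment_coeffs(signal, n_segments),
                segment_coeffs(baseline, n_segments))):
        if np.max(np.abs(d - d0)) > limit:
            flagged.append(k)
    return flagged
```

A spike injected into the third quarter of a cycle, for example, perturbs only the third segment's coefficients, so `locate_fault` reports segment index 2 and nothing else.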


2016 ◽  
Author(s):  
G. Sampath

Peptide sequences from a proteome can be partitioned into N mutually exclusive sets and used to identify their parent proteins in a sequence database. This is illustrated with the human proteome (http://www.uniprot.org; id UP000005640), which is partitioned into eight subsets KZ*R, KZ*D, KZ*E, KZ*, Z*R, Z*D, Z*E, and Z*, where Z ∈ {A, N, C, Q, G, H, I, L, M, F, P, S, T, W, Y, V} and Z* ≡ 0 or more occurrences of Z. If the full peptide sequence is known then over 98% of the proteins in the proteome can be identified from such sequences. The rate exceeds 78% if the positions of four internal residue types are known. When the standard set of 20 amino acids is replaced with an alphabet of size four based on residue volume the identification rate exceeds 96%. In an information-theoretic sense this last result suggests that protein sequences effectively carry nearly the same amount of information as the exon sequences in the genome that code for them using an alphabet of size four. An appendix discusses possible in vitro methods to create peptide partitions and potential ways to sequence partitioned peptides.
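The eight subsets translate directly into regular expressions. The following Python sketch (our own illustration based on the abstract's notation; `classify` is a hypothetical helper, not from the paper) checks membership and confirms that the sets are mutually exclusive:

```python
import re

# Z is any residue other than K, R, D, E; Z* means zero or more such residues.
Z = "ANCQGHILMFPSTWYV"

PATTERNS = [(name, re.compile(pat)) for name, pat in [
    ("KZ*R", f"^K[{Z}]*R$"),
    ("KZ*D", f"^K[{Z}]*D$"),
    ("KZ*E", f"^K[{Z}]*E$"),
    ("KZ*",  f"^K[{Z}]*$"),
    ("Z*R",  f"^[{Z}]*R$"),
    ("Z*D",  f"^[{Z}]*D$"),
    ("Z*E",  f"^[{Z}]*E$"),
    ("Z*",   f"^[{Z}]*$"),
]]

def classify(peptide):
    """Return the unique subset a peptide belongs to, or None if no subset fits."""
    matches = [name for name, rx in PATTERNS if rx.match(peptide)]
    assert len(matches) <= 1  # the eight sets are mutually exclusive
    return matches[0] if matches else None

print(classify("KGLSR"))  # -> KZ*R
print(classify("GAST"))   # -> Z*
```

Exclusivity follows from the construction: a peptide either starts with K or not, and either ends in R, D, E, or in a Z residue, so exactly one pattern can match any peptide generated by the partition.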


2020 ◽  
Vol 19 (2) ◽  
pp. 84-93
Author(s):  
Joseph Bell ◽  
Anna den Boer ◽  
Kimela Shah

Many legal regimes allow for the award of aggregate damages in collective or class claims. That is to say, an award may be made reflecting the losses of the class as a whole, with little or no information as to the losses suffered by individual class members. Economists are able to calculate damages at a class level without complete individual data by applying two different but not mutually exclusive sets of methodologies, which we refer to as ‘top-down’ and ‘sample-based’ approaches. This article discusses some of the advantages and pitfalls that may arise in estimating aggregate damages under each approach, and illustrates some circumstances in which the process of aggregation may lead to upward or downward bias in the estimate of total loss. We also compare the relative merits of each approach, and consider some of the practical steps by which such biases may be avoided.
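The sample-based approach can be made concrete with a toy Python sketch (our own illustration with invented numbers, not from the article): the mean loss observed in a random sample of class members is scaled up to the full class size, with a rough confidence interval around the total.

```python
import random
import statistics

# 'Sample-based' aggregate damages (toy example): estimate total class loss
# from a random sample of individual losses. Note that modelling choices,
# such as truncating simulated losses at zero below, are exactly the kind of
# step that can push the aggregate estimate up or down.

def sample_based_aggregate(sample_losses, class_size):
    """Point estimate and a rough 95% interval for aggregate damages."""
    mean = statistics.mean(sample_losses)
    se = statistics.stdev(sample_losses) / len(sample_losses) ** 0.5
    return class_size * mean, (class_size * (mean - 1.96 * se),
                               class_size * (mean + 1.96 * se))

random.seed(0)
# Hypothetical per-member losses around 100 with spread 30, floored at zero.
sample = [max(0.0, random.gauss(100, 30)) for _ in range(200)]
total, ci = sample_based_aggregate(sample, class_size=10_000)
```

The 'top-down' approach would instead start from an aggregate quantity (e.g. total overcharge on all sales) and apportion it, so the two methods can cross-check one another.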

