Parametric Variation or Defects?: Statistical Post-Processing Analysis of Wafer-Sort Data

Author(s):  
W. Robert Daasch

Abstract: The subject of this paper is statistical post-processing of wafer-sort test data. Statistical post-processing (SPP) has successfully separated many of the effects of defects from normal wafer-to-wafer variation. The data-driven method is used with parametric data such as IDDQ, minVDD, and others. The neighboring die are used to form an estimate of a die's expected value. The resulting SPP residual has smaller variance than the original measurement and filters out most of the spatial patterns that obscure data outliers from normal variation. The method is applicable to a wide variety of process parameter variation issues of concern to both the test and FA communities.
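The neighborhood-residual idea can be sketched in a few lines. The mean-of-neighbors estimator and the function name below are illustrative assumptions, not the paper's exact SPP estimator:

```python
import numpy as np

def spp_residual(wafer, r=1):
    """Residual of each die's measurement (e.g. IDDQ) against the mean
    of its spatial neighbors within radius r. The neighborhood mean is
    a simple stand-in for a neighborhood estimate; subtracting it
    filters smooth wafer-level spatial patterns so that defect-driven
    outliers stand out."""
    rows, cols = wafer.shape
    resid = np.empty_like(wafer, dtype=float)
    for i in range(rows):
        for j in range(cols):
            # neighborhood block, clipped at the wafer edge
            block = wafer[max(i - r, 0):i + r + 1,
                          max(j - r, 0):j + r + 1]
            # exclude the die itself from its own estimate
            est = (block.sum() - wafer[i, j]) / (block.size - 1)
            resid[i, j] = wafer[i, j] - est
    return resid
```

On a wafer with a smooth spatial gradient plus a single defective die, the residual map is near zero everywhere except at the defect, and its variance is well below that of the raw measurements.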

Author(s):  
Solange Oliveira Rezende ◽  
Edson Augusto Melanda ◽  
Magaly Lika Fujimoto ◽  
Roberta Akemi Sinoara ◽  
Veronica Oliveira de Carvalho

Association rule mining is a data mining task that is applied to many real-world problems. However, because of the huge number of association rules that can be generated, the knowledge post-processing phase becomes very complex and challenging. Several evaluation measures can be used in this phase to assist users in finding interesting rules. These measures, which can be divided into data-driven (objective) and user-driven (subjective) measures, are first discussed and then analyzed for their pros and cons. A new methodology that combines them, aiming to exploit the advantages of each kind of measure and to make the user's participation easier, is presented. In this way, data-driven measures can be used to select some potentially interesting rules for the user's evaluation. These rules and the knowledge obtained during the evaluation can be used to calculate user-driven measures, which in turn aid the user in identifying interesting rules. An approach that applies this methodology to identify interesting rules is described, along with an exploratory environment and a case study showing that the proposed methodology is feasible. Interesting results were obtained. At the end of the chapter, trends related to the subject are discussed.
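Three of the standard data-driven (objective) measures mentioned in the literature are support, confidence, and lift; a minimal sketch of their computation for a single rule (function and variable names are illustrative):

```python
def rule_measures(transactions, antecedent, consequent):
    """Objective measures for the rule antecedent -> consequent.
    transactions: list of frozensets of items.
    support:    fraction of transactions containing both sides
    confidence: P(consequent | antecedent)
    lift:       confidence relative to the consequent's base rate
                (lift > 1 suggests a positive association)"""
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)
    c = sum(1 for t in transactions if consequent <= t)
    both = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = both / n
    confidence = both / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0
    return support, confidence, lift
```

Measures such as these would do the first, data-driven filtering pass; the surviving rules then go to the user for subjective evaluation.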


2020 ◽  
Author(s):  
Thomas H Costello ◽  
Shauna Bowes ◽  
Sean T. Stevens ◽  
Irwin Waldman ◽  
Scott O. Lilienfeld

Authoritarianism has been the subject of scientific inquiry for nearly a century, yet the vast majority of authoritarianism research has focused on right-wing authoritarianism. In the present studies, we investigate the nature, structure, and nomological network of left-wing authoritarianism (LWA), a construct famously known as “the Loch Ness Monster” of political psychology. We iteratively construct a measure and data-driven conceptualization of LWA across six samples (N = 7,258) and conduct quantitative tests of LWA’s relations with over 50 authoritarianism-related variables. We find that left- and right-wing authoritarianism reflect a shared constellation of personality traits, cognitive features, beliefs, and values that might be considered the “heart” of authoritarianism. Our results also indicate that LWA powerfully predicts several critical, real-world outcomes, including participation in political violence. We conclude that a movement away from exclusively right-wing conceptualizations of authoritarianism may be required to illuminate authoritarianism’s central features, conceptual breadth, and psychological appeal.


2021 ◽  
Author(s):  
Ris Tanti

The aim of this study was to improve students' comprehension of the concept of area unit conversion using a cooperative learning model of the Student Teams Achievement Divisions (STAD) type. The study was classroom action research (CAR), with the teacher as the learning executor and the student as the learning observer. It consisted of two action cycles, each comprising four stages: planning, action implementation, observation, and evaluation. The study was conducted in the even semester, in February. The subjects were the 35 students of the VI Mina class at SD Islam Al Azhar 17 Bintaro. Data were obtained from observation, interviews, and written tests, analyzed descriptively, and presented in tables. The study indicated an increase in the students' comprehension of the concept of area unit conversion: in the first cycle, 63% of the students scored above the Minimum Completeness Criteria (MCC), while in the second cycle 100% scored above the MCC. Accordingly, applying the STAD cooperative learning model can improve comprehension of the concept of area units in mathematical story problems.


Author(s):  
Julia Chen ◽  
Dennis Foung

This chapter explores the possibility of adopting a data-driven approach to connecting teacher-made assessments with course learning outcomes. The authors begin by describing several key concepts, such as outcome-based education, curriculum alignment, and teacher-made assessments. Then, the context of the research site and the subject in question are described and the use of structural equation modeling (SEM) in this curriculum alignment study is explained. After that, the results of these SEM analyses are presented, and the various models derived from the analyses are discussed. In particular, the authors highlight how a data-driven curriculum model can benefit from input by curriculum leaders and how SEM provides insights into course development and enhancement. The chapter concludes with recommendations for curriculum leaders and front-line teachers to improve the quality of teacher-made assessments.


2019 ◽  
Vol 11 (9) ◽  
pp. 2717
Author(s):  
Fátima L. Vieira ◽  
Paulo A. Vieira ◽  
Denis A. Coelho

This paper proposes a data-driven approach to developing a taxonomy, held in a list-based data structure, for triple bottom line (TBL) metrics. The approach is built from the authors' reflection on the subject and a review of the literature on the TBL. The envisaged taxonomy framework grid to be developed through this approach will enable existing metrics to be classified, grouped, and standardized, as well as detect the need for further metrics development in uncovered domains and applications. The approach aims at developing a taxonomy structure that can be seen as a bi-dimensional table of feature interrogations and characterizing answers, which will be the basis on which the taxonomy can then be developed. The interrogations column is designed as the stack of the TBL metrics features: What type of metric is it (qualitative, quantitative, or hybrid)? What is the level of complexity of the problems where it is used? What standards does it follow? How is the measurement made, and what techniques does it use? In what kinds of problems, subjects, and domains is the metric used? How is the metric validated? What method is used in its calculation? The column of characterizing answers results from a categorization of the range of types of answers to the feature interrogations. The approach reported in this paper is based on a screening tool that searches and analyzes information both within abstracts and in full-text journal papers. The vision for this future taxonomy is that, for any specific context, it will make it possible to discern what TBL metrics are used in that context or similar contexts, or whether developed metrics are lacking. This meta-knowledge will enable a conscious decision between creating a new metric and using one that already exists; in the latter case, it would also make it possible to choose, among several metrics, the one most appropriate to the context at hand. In addition, this future framework will ease future literature reviews, when these are viewed as updates of the envisaged taxonomy, allowing the creation of a dynamic taxonomy for TBL metrics. The paper presents a computational approach to developing such a taxonomy and reports on the initial steps taken in that direction, namely creating a taxonomy framework grid with a computational approach.
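The bi-dimensional grid of feature interrogations and characterizing answers could be represented very simply; the interrogation labels below follow the abstract, while the answer categories and function name are illustrative assumptions:

```python
# Sketch of the taxonomy framework grid: each key is a feature
# interrogation from the paper; the answer categories listed here
# are illustrative placeholders, not the paper's actual categories.
taxonomy_grid = {
    "metric type": ["qualitative", "quantitative", "hybrid"],
    "problem complexity": ["low", "medium", "high"],
    "standards followed": ["GRI", "ISO", "none"],
    "measurement technique": ["survey", "sensor", "accounting"],
    "application domain": ["environmental", "social", "economic"],
    "validation": ["peer-reviewed", "case study", "unvalidated"],
    "calculation method": ["index", "ratio", "aggregate score"],
}

def classify_metric(metric):
    """Place a metric (a dict of interrogation -> answer) in the grid.
    Cells with no recognized answer are flagged UNCOVERED, signaling
    the need for further metric development in that dimension."""
    row = {}
    for interrogation, answers in taxonomy_grid.items():
        answer = metric.get(interrogation)
        row[interrogation] = answer if answer in answers else "UNCOVERED"
    return row
```

Filling one such row per metric found by the screening tool would yield the envisaged classification table, with UNCOVERED cells marking gaps.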


Author(s):  
Lorenzo Magnani

This paper introduces an epistemological model of scientific reasoning which can be described in terms of abduction, deduction, and induction. The aim is to emphasize the significance of abduction in order to illustrate the problem-solving process and to propose a unified epistemological model of scientific discovery. The model first describes the different meanings of the word abduction (creative, selective, to the best explanation, visual) in order to clarify their significance for epistemology and artificial intelligence. In different changes of theoretical systems we witness different kinds of discovery processes at work. Discovery methods are "data-driven," "explanation-driven" (abductive), and "coherence-driven" (formed to overcome contradictions). Sometimes there is a mixture of such methods: for example, a hypothesis devoted to overcoming a contradiction is found by abduction. Contradictions, far from damaging a system, help to indicate the regions in which it can be changed and improved. I will also consider a kind of "weak" hypothesis that is hard to negate, and the ways of making such negation easier. In these cases the subject can "rationally" decide to withdraw his or her hypotheses even in contexts where it is "impossible" to find "explicit" contradictions and anomalies. Here, the use of negation as failure (an interesting technique for negating hypotheses and accessing new ones, suggested by artificial intelligence and cognitive science) is illuminating.
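Negation as failure, as used in logic programming, can be illustrated with a toy prover: "not G" is accepted exactly when every attempt to prove G fails under the closed-world assumption. This is a generic sketch of the technique, not the paper's formal machinery:

```python
def holds(goal, facts, rules, depth=8):
    """Tiny backward-chaining check: a goal holds if it is a known
    fact, or it is the head of a rule whose body goals all hold.
    The depth bound keeps recursive rule sets from looping."""
    if depth == 0:
        return False
    if goal in facts:
        return True
    return any(head == goal and
               all(holds(g, facts, rules, depth - 1) for g in body)
               for head, body in rules)

def naf(goal, facts, rules):
    """Negation as failure: 'not goal' succeeds exactly when the
    attempt to prove goal fails (closed-world assumption)."""
    return not holds(goal, facts, rules)
```

A hypothesis that cannot be derived from what is known is thus "negated by failure", mirroring the rational withdrawal of a weak hypothesis even without an explicit contradiction.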


Solar Energy ◽  
2020 ◽  
Vol 208 ◽  
pp. 612-622
Author(s):  
Gokhan Mert Yagli ◽  
Dazhi Yang ◽  
Dipti Srinivasan

ReCALL ◽  
1997 ◽  
Vol 9 (2) ◽  
pp. 8-16 ◽  
Author(s):  
Tony McEnery ◽  
Andrew Wilson ◽  
Paul Barker

In this paper we consider how corpora may be of use in the teaching of grammar at the pre-tertiary level. Corpora are becoming well established in university teaching. They also have a role to play in secondary education, in that they can help decide how and what to teach, as well as changing the way in which pupils learn and providing the possibility of open-ended machine-aided tuition. Corpora also seem to provide what UK government-sponsored reports on teaching grammar have called for: a data-driven approach to the subject.
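A staple tool of such data-driven, corpus-based teaching is the key-word-in-context (KWIC) concordance, which lets pupils inspect real usage evidence and induce grammar patterns themselves. A minimal sketch (tokenization and function name are illustrative assumptions):

```python
import re

def concordance(corpus, word, width=3):
    """Return key-word-in-context lines for a target word: up to
    `width` tokens of left and right context around each occurrence."""
    tokens = re.findall(r"[A-Za-z']+", corpus.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == word:
            left = " ".join(tokens[max(i - width, 0):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left} [{tok}] {right}")
    return lines
```

Run over a classroom corpus, the resulting lines turn an abstract grammar rule into a set of observable examples.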


2021 ◽  
Vol 13 (11) ◽  
pp. 168781402110610
Author(s):  
Shahin Khoddam ◽  
Soheil Solhjoo ◽  
Peter D Hodgson

Materials engineering and science rely heavily on the indirect measurement of plastic stress and strain by post-processing of mechanical test data, including tension, torsion, and compression tests. There is no consensus among researchers on the best test or post-processing theory, nor do adequate standards exist for the characterization methods. Because the tests are typically performed as customized tests, discrepancies exist between the flow curves obtained by different methods and different mechanical tests. More critically, the curves are predominantly treated (perceived) as a set of measured data rather than calculated values. The plasticity-based calculated flow curves and their gradients are, in turn, the basis for several second-tier indirect measurements, such as stacking fault energy and recrystallization. Such measurements are quite prone to errors due to oversimplified post-processing of the test data and can only be verified experimentally in a qualitative or averaged fashion. Their findings are therefore highly restricted by the limitations of each test, data type, and post-processing method, and should be used carefully. This review examines some of the most commonly used data conversion methods and some recent developments in the field, followed by recommendations. It highlights the need to develop test rigs that can provide new data types and to provide advanced post-processing techniques for indirect measurement.
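The simplest of the data conversion methods in question is the textbook tension-test conversion from engineering to true stress and strain, which assumes uniform deformation and volume constancy and is valid only up to the onset of necking (the function name is illustrative):

```python
import math

def true_flow_curve(eng_strain, eng_stress):
    """Convert engineering stress/strain from a tension test to true
    stress/strain:  eps_true = ln(1 + eps_eng),
                    sigma_true = sigma_eng * (1 + eps_eng).
    Valid only for uniform deformation, i.e. before necking."""
    true_strain = [math.log(1.0 + e) for e in eng_strain]
    true_stress = [s * (1.0 + e) for e, s in zip(eng_strain, eng_stress)]
    return true_strain, true_stress
```

Its restrictive assumptions are exactly the kind of oversimplification the review warns about when such curves feed second-tier measurements.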

