Virtual Sensor for Calibration of Thermal Models of Machine Tools

2014
Vol. 2014
pp. 1–10
Author(s):
Alexander Dementjev
Burkhard Hensel
Klaus Kabitzsch
Bernd Kauschinger
Steffen Schroeder

Machine tools are important components of highly complex industrial manufacturing, so end-product quality depends strictly on their accuracy; yet these machines are prone to deformation caused by their own heat. This deformation must be compensated to assure accurate production, which requires an adequate model of the high-dimensional thermal deformation process together with estimates of its parameters. Unfortunately, such parameters are often unknown and cannot be calculated a priori. Identifying them through dedicated experiments is not an option because of the high engineering and machine-time effort involved, and installing additional sensors to measure them directly is uneconomical. Instead, thermal models can be calibrated effectively by combining real and virtual measurements on a machine tool during its normal operation, without installing additional sensors. In this paper, a new approach for thermal model calibration is presented. The results are very promising, and the approach can be recommended as an effective solution for this class of problems.
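The abstract describes the approach only conceptually; the following is a minimal sketch of the underlying idea, fitting unknown thermal-model parameters by least squares against combined real and virtual sensor readings gathered during operation. Everything here (the two-coefficient toy model, simulate_temperatures, the synthetic data) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch: calibrating unknown heat-transfer coefficients of a
# lumped thermal model against combined real and virtual sensor readings.
import numpy as np
from scipy.optimize import least_squares

def simulate_temperatures(params, t):
    """Toy two-body thermal model: exponential approach to steady state.
    params = (k1, k2) are the unknown heat-transfer coefficients."""
    k1, k2 = params
    T_spindle = 20.0 + 15.0 * (1.0 - np.exp(-k1 * t))   # real sensor location
    T_joint   = 20.0 + 10.0 * (1.0 - np.exp(-k2 * t))   # virtual sensor location
    return T_spindle, T_joint

def residuals(params, t, T_real, T_virtual):
    T_s, T_j = simulate_temperatures(params, t)
    # Stack residuals from the physical sensor and the virtual sensor.
    return np.concatenate([T_s - T_real, T_j - T_virtual])

t = np.linspace(0.0, 3600.0, 50)                  # one hour of operation [s]
true = (0.002, 0.0008)
T_real, T_virtual = simulate_temperatures(true, t)
rng = np.random.default_rng(0)
T_real += rng.normal(0.0, 0.1, t.size)            # measurement noise

fit = least_squares(residuals, x0=[0.001, 0.001],
                    args=(t, T_real, T_virtual), bounds=(0.0, np.inf))
print("calibrated coefficients:", fit.x)          # close to (0.002, 0.0008)
```

The point of the sketch is the residual stacking: the virtual measurement enters the fit exactly like a physical one, so no extra hardware is needed.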

2007
Vol. 129 (3)
pp. 256–259
Author(s):  
Mohamed-Nabil Sabry

Recent advances in compact thermal models have led to the emergence of a new concept allowing models to be created at any desired order of accuracy. Traditionally, increased precision was attained by increasing the number of nodes. This strategy faces many problems, in particular for multiple heat sources (as in multi-chip modules, MCM) and/or stacked dies, because different operating conditions lead to different temperature and heat-flux profiles, which in turn require different node partitioning to be matched. Classical approaches also face the difficulty of selecting appropriate node sizes and positions, as well as the inability to provide an a priori estimate of the number of nodes needed. The new concept is based on a flexible profile that accounts for the different possible uses of the model. In particular, it can deal with the different patterns of heat generation encountered in MCM and stacked dies, and hence it is truly boundary-condition independent. Moreover, the new approach gives access to the tangential temperature gradient; this information, valuable to designers for assessing reliability, cannot be predicted by classical compact-model approaches. The concept was presented earlier for a simple rectangular 2D structure with surface heating (2004, 10th THERMINIC Conference, pp. 273–280). In this paper, the concept is generalized to 3D parallelepiped boxes with surface and/or volumetric heating. The second achievement is the possibility of dealing with geometries that can be decomposed into boxes.
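To make the flexible-profile idea concrete, here is a hedged one-dimensional sketch: a face temperature is carried as a truncated cosine series instead of a single node average, so hot spots from different heat-source patterns can be matched and the tangential gradient falls out analytically. The basis choice, series order, and toy hot-spot data are assumptions, not the paper's formulation.

```python
# Hypothetical sketch of a "flexible profile": a face temperature represented
# as a truncated cosine series rather than one averaged node value, giving
# direct access to the tangential gradient dT/dx.
import numpy as np

L = 0.02                                   # face width [m]
x = np.linspace(0.0, L, 200)
T_face = 40.0 + 8.0 * np.exp(-((x - 0.006) / 0.003) ** 2)  # toy hot spot

M = 6                                      # series order = accuracy knob
k = np.arange(M)
basis = np.cos(np.pi * np.outer(k, x) / L)             # shape (M, len(x))
coeffs = np.linalg.lstsq(basis.T, T_face, rcond=None)[0]

# Reconstructed profile and its tangential gradient dT/dx:
T_fit = coeffs @ basis
dT_dx = coeffs @ (-(np.pi * k / L)[:, None] * np.sin(np.pi * np.outer(k, x) / L))
print("fit RMS error [K]:", np.sqrt(np.mean((T_fit - T_face) ** 2)))
print("max |dT/dx| on the face [K/m]:", np.abs(dT_dx).max())
```

Raising M refines the profile without re-partitioning nodes, which is the sense in which accuracy can be increased to any desired order.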


2012
Vol. 18 (4)
pp. 331–363
Author(s):
Sebastian Risi
Kenneth O. Stanley

Intelligence in nature is the product of living brains, which are themselves the product of natural evolution. Although researchers in the field of neuroevolution (NE) attempt to recapitulate this process, artificial neural networks (ANNs) so far evolved through NE algorithms do not match the distinctive capabilities of biological brains. The recently introduced hypercube-based neuroevolution of augmenting topologies (HyperNEAT) approach narrowed this gap by demonstrating that the pattern of weights across the connectivity of an ANN can be generated as a function of its geometry, thereby allowing large ANNs to be evolved for high-dimensional problems. Yet the positions and number of the neurons connected through this approach must be decided a priori by the user and, unlike in living brains, cannot change during evolution. Evolvable-substrate HyperNEAT (ES-HyperNEAT), introduced in this article, addresses this limitation by automatically deducing the node geometry from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement. This approach not only can evolve the location of every neuron in the network, but also can represent regions of varying density, which means resolution can increase holistically over evolution. ES-HyperNEAT is demonstrated through multi-task, maze navigation, and modular retina domains, revealing that the ANNs generated by this new approach assume natural properties such as neural topography and geometric regularity. Also importantly, ES-HyperNEAT's compact indirect encoding can be seeded to begin with a bias toward a desired class of ANN topographies, which facilitates the evolutionary search. The main conclusion is that ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover.
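As a rough sketch of the node-discovery idea (not the full ES-HyperNEAT algorithm, which also includes banding and connection selection), the following subdivides a quadtree over the substrate and places a node wherever the weight pattern varies strongly, so neuron density follows information in the encoding. The pattern function below is a toy stand-in for an evolved CPPN, and the thresholds are assumptions.

```python
# Hypothetical sketch of evolvable-substrate node discovery: recursively
# subdivide the substrate and keep points where the weight pattern has high
# variance. A real system would query an evolved CPPN here.
import math

def pattern(x, y):
    """Toy stand-in for a CPPN output w(x, y)."""
    return math.sin(6.0 * x) * math.cos(6.0 * y) + 0.5 * x * y

def discover_nodes(x, y, size, depth=0, max_depth=5, var_threshold=0.03):
    """Return substrate coordinates where the pattern is information-rich."""
    half = size / 2.0
    # Sample the four child quadrant centres.
    centres = [(x - half / 2, y - half / 2), (x + half / 2, y - half / 2),
               (x - half / 2, y + half / 2), (x + half / 2, y + half / 2)]
    w = [pattern(cx, cy) for cx, cy in centres]
    mean = sum(w) / 4.0
    var = sum((wi - mean) ** 2 for wi in w) / 4.0
    if depth >= max_depth or var < var_threshold:
        return [(x, y)]                      # homogeneous region: one node
    nodes = []
    for (cx, cy) in centres:                 # heterogeneous: recurse deeper
        nodes += discover_nodes(cx, cy, half, depth + 1, max_depth, var_threshold)
    return nodes

nodes = discover_nodes(0.0, 0.0, 2.0)        # substrate spans [-1, 1]^2
print(f"{len(nodes)} hidden nodes placed, density following pattern variance")
```

Because subdivision stops where the pattern is flat, resolution concentrates exactly where the encoding carries structure, which is how density can increase over evolution without the user fixing node positions a priori.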


Author(s):  
Mohamed-Nabil Sabry

This paper is a step toward a more complete theory of compact thermal models, in which their common structure is highlighted and hence the sources of error inevitably present in any compact model are clearly revealed. This approach is then extended into an original method for attaining any desired level of precision. Different methods already proposed by other authors can all benefit from the approach proposed here to raise their accuracy. Traditionally, increased precision was attained by increasing the number of nodes. That strategy faces many problems, such as the difficulty of selecting node size and position and the inability to provide an a priori estimate of the number of nodes needed; in particular, it fails in the case of multiple heat sources (MCM). The present approach relies on a "flexible" pattern model that avoids such problems. Moreover, the new approach gives access to the tangential temperature gradient; this information, valuable to designers for assessing reliability, cannot be predicted by classical compact-model approaches.


Author(s):  
José Ferreirós

This book presents a new approach to the epistemology of mathematics by viewing mathematics as a human activity whose knowledge is intimately linked with practice. Charting an exciting new direction in the philosophy of mathematics, the book uses the crucial idea of a continuum to provide an account of the development of mathematical knowledge that reflects the actual experience of doing math and makes sense of the perceived objectivity of mathematical results. Describing a historically oriented, agent-based philosophy of mathematics, the book shows how the mathematical tradition evolved from Euclidean geometry to the real numbers and set-theoretic structures. It argues for the need to take into account a whole web of mathematical and other practices that are learned and linked by agents, and whose interplay acts as a constraint. It demonstrates how advanced mathematics, far from being a priori, is based on hypotheses, in contrast to elementary math, which has strong cognitive and practical roots and therefore enjoys certainty. Offering a wealth of philosophical and historical insights, the book challenges us to rethink some of our most basic assumptions about mathematics, its objectivity, and its relationship to culture and science.


Mathematics
2021
Vol. 9 (3)
pp. 222
Author(s):
Juan C. Laria
M. Carmen Aguilera-Morillo
Enrique Álvarez
Rosa E. Lillo
Sara López-Taruella
...  

Over the last decade, regularized regression methods have offered alternatives for performing multi-marker analysis and feature selection in a whole-genome context. Yet the process of defining the list of genes that characterizes an expression profile remains unclear: it currently relies on advanced statistics, may take an agnostic point of view or incorporate a priori knowledge, and overfitting remains a problem. This paper introduces a methodology for variable selection and model estimation in the high-dimensional setting, which can be particularly useful in the whole-genome context. Results are validated using simulated data and a real dataset from a triple-negative breast cancer study.
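The abstract does not name a specific estimator; as a minimal sketch of regularized gene selection in the p >> n regime, the following fits an elastic-net logistic regression on synthetic expression data with scikit-learn. The dimensions, penalty settings, and response are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: regularized regression for gene selection when the
# number of features (genes) far exceeds the number of samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 120, 5000                      # samples vs. genes (p >> n)
X = rng.normal(size=(n, p))           # synthetic expression matrix
beta = np.zeros(p)
beta[:10] = 1.5                       # only 10 genes truly informative
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.7, C=0.1, max_iter=5000)
model.fit(X, y)

selected = np.flatnonzero(model.coef_[0] != 0.0)
print(f"{selected.size} genes selected; first few indices: {selected[:10]}")
```

The L1 part of the penalty zeroes out most coefficients, producing the candidate gene list; the L2 part stabilizes the fit, which is one standard way to limit the overfitting the abstract mentions.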


2021
pp. 000276422110216
Author(s):
Kazimierz M. Slomczynski
Irina Tomescu-Dubrow
Ilona Wysmulek

This article proposes a new approach to analyzing protest participation measured in surveys of uneven quality. Because single international survey projects cover only a fraction of the world's nations in specific periods, researchers increasingly turn to ex-post harmonization of survey data sets that were not a priori designed to be comparable. However, very few scholars systematically examine the impact of survey data quality on substantive results. We argue that variation in the source data, especially deviations from the standards of survey documentation, data processing, and computer files proposed by methodologists of Total Survey Error, Survey Quality Monitoring, and Fitness for Intended Use, matters for analyzing protest behavior. In particular, we apply the Survey Data Recycling framework to investigate the extent to which indicators of attending demonstrations and signing petitions in 1,184 national survey projects are associated with measures of data quality, controlling for variability in the questionnaire items. We show that the null hypothesis of no impact of survey quality measures on protest-participation indicators must be rejected: measures of survey documentation, data processing, and computer records together explain over 5% of the intersurvey variance in the proportions of the population attending demonstrations or signing petitions.
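As a hedged illustration of the closing claim, this sketch regresses a survey-level protest proportion on three quality scores and reads off the share of intersurvey variance they explain (R squared). The synthetic data and column choices are assumptions; the actual SDR models also control for questionnaire-item variability.

```python
# Hypothetical sketch: how much intersurvey variance in protest participation
# do data-quality indicators explain? (Synthetic data, not the SDR corpus.)
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_surveys = 1184
# Three 0-1 quality scores: documentation, processing, computer-file integrity.
quality = rng.uniform(0.0, 1.0, size=(n_surveys, 3))
# Survey-level proportion attending a demonstration, under a toy generating
# process in which quality shifts the measured proportion slightly.
prop_demo = (0.15 + quality @ np.array([0.02, 0.015, 0.01])
             + rng.normal(0.0, 0.05, n_surveys))

r2 = LinearRegression().fit(quality, prop_demo).score(quality, prop_demo)
print(f"share of intersurvey variance explained by quality: {r2:.1%}")
```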


2006
Vol. 6 (7)
pp. 561–582
Author(s):
H.P. Yuen
R. Nair
E. Corndorf
G.S. Kanter
P. Kumar

Lo and Ko have developed attacks on the cryptosystem called $\alpha\eta$, claiming that these attacks undermine the security of $\alpha\eta$ for both direct encryption and key generation. In this paper, we show that their arguments fail in many different ways. In particular, the first attack in [1] requires channel loss or a length of known-plaintext that is exponential in the key length, and is unrealistic even for moderate key lengths. The second attack is a Grover search attack based on "asymptotic orthogonality" and was not analyzed quantitatively in [1]. We explain why it is not logically possible to "pull back" an argument valid only at $n=\infty$ into a limit statement, let alone one valid for a finite number of transmissions $n$. We illustrate this with a "proof", using a similar asymptotic-orthogonality argument, that coherent-state BB84 is insecure for any value of loss. Even if a limit statement were true, this attack is a priori irrelevant, as it requires an indefinitely large amount of known-plaintext, resources, and processing. We also explain why the attacks in [1] on $\alpha\eta$ as a key-generation system are based on misinterpretations of [2]. Some misunderstandings in [1] regarding certain issues in cryptography and optical communications are also pointed out. Short of providing a security proof for $\alpha\eta$, we describe relevant results in standard cryptography and in the design of $\alpha\eta$ to put the above issues in the proper framework and to elucidate some security features of this new approach to quantum cryptography.
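The role of "asymptotic orthogonality" can be illustrated with the standard coherent-state overlap (our illustrative example, not the paper's derivation): two coherent states are never orthogonal for any finite number of copies, so a discrimination argument valid at $n=\infty$ does not pull back to a statement about finite $n$.

```latex
% Illustrative only: the overlap of coherent states |alpha>, |beta> is
% nonzero for every finite number of copies n and vanishes only in the limit,
% so perfect ("orthogonal") discrimination holds only at n = infinity.
\[
  \left|\langle \alpha \mid \beta \rangle\right|^{2} = e^{-|\alpha-\beta|^{2}},
  \qquad
  \left|\langle \alpha \mid \beta \rangle\right|^{2n} = e^{-n|\alpha-\beta|^{2}} > 0
  \quad \text{for all finite } n .
\]
```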

