Predictive modeling of EUV-lithography: the role of mask, optics, and photoresist effects

Author(s):  
Andreas Erdmann ◽  
Peter Evanschitzky ◽  
Feng Shao ◽  
Tim Fühner ◽  
Gian F. Lorusso ◽  
...


Energies ◽
2021 ◽  
Vol 14 (24) ◽  
pp. 8315
Author(s):  
Dan Gabriel Cacuci

This work illustrates the application of the nth-order comprehensive adjoint sensitivity analysis methodology for response-coupled forward/adjoint linear systems (abbreviated as “nth-CASAM-L”) to a paradigm model that describes the transmission of particles (neutrons and/or photons) through homogenized materials, as encountered in radiation protection and shielding. The first-, second-, and third-order sensitivities of responses that depend on both the forward and adjoint particle fluxes are obtained exactly, in closed-form, underscoring the principles and methodology underlying the nth-CASAM-L. The results presented in this work underscore the fundamentally important role of the nth-CASAM-L in the quest to overcome the “curse of dimensionality” in sensitivity analysis, uncertainty quantification and predictive modeling.
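
For orientation, the generic first-order adjoint sensitivity relation that such methodologies build on can be sketched as follows (standard notation for a linear forward model; this is an illustrative sketch, not the paper's nth-order operator formalism):

```latex
% Hedged sketch: generic first-order adjoint sensitivity for a linear system;
% illustrative notation only, not the nth-CASAM-L operators of the paper.
\begin{aligned}
  &\text{forward model:} && L(\boldsymbol{\alpha})\,\varphi = q(\boldsymbol{\alpha}),\\
  &\text{adjoint model:} && L^{\dagger}(\boldsymbol{\alpha})\,\psi = \frac{\partial R}{\partial \varphi},\\
  &\text{sensitivity:}   && \frac{\mathrm{d}R}{\mathrm{d}\alpha_i}
     = \frac{\partial R}{\partial \alpha_i}
     + \Big\langle \psi,\;
         \frac{\partial q}{\partial \alpha_i}
         - \frac{\partial L}{\partial \alpha_i}\,\varphi
       \Big\rangle .
\end{aligned}
```

One forward solve and one adjoint solve thus yield the sensitivities of the response R with respect to all parameters; the nth-CASAM-L generalizes this construction to responses that depend on both the forward and adjoint fluxes and, recursively, to higher-order sensitivities, which is the sense in which it addresses the "curse of dimensionality" noted above.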


2020 ◽  
Vol 10 ◽  
Author(s):  
Donata von Reibnitz ◽  
Ellen D. Yorke ◽  
Jung Hun Oh ◽  
Aditya P. Apte ◽  
Jie Yang ◽  
...  

1998 ◽  
Vol 61 (8) ◽  
pp. 1071-1074 ◽  
Author(s):  

The Risk Assessment Subcommittee of the National Advisory Committee on Microbiological Criteria for Foods has prepared a generic document on the principles of risk assessment as applied to biological agents that can cause human foodborne disease. Typical biological agents include bacteria, viruses, fungi, helminths, protozoa, algae, parasites, and the toxic products that these agents may produce. Basic principles elaborated to characterize food pathogen risks include the four broadly accepted components of risk assessment. The role of surveillance and investigational activities in linking biological agents and their food sources to consumer illness is described, as is the role of predictive modeling for food pathogens.


2021 ◽  
pp. 002200272098235
Author(s):  
Andreas Beger ◽  
Richard K. Morgan ◽  
Michael D. Ward

We examine the research protocols in Blair and Sambanis' recent article on forecasting civil wars, in which they argue that their theory-based model can predict civil war onsets better than several atheoretical alternatives or a model based on country-structural factors. We find several important mistakes and show that their key finding is entirely conditional on the use of parametrically smoothed ROC curves to calculate accuracy, rather than the standard empirical ROC curves that dominate the literature. Fixing these mistakes reverses their claim that theory-based models of escalation are better at predicting onsets of civil war than other kinds of models: their model is outperformed by several of the ad hoc, putatively non-theoretical models they devise and examine. More importantly, we argue that rather than trying to contrast the roles of theory and "atheoretical" machine learning in predictive modeling, it would be more productive to focus on ways in which predictive modeling and machine learning can strengthen extant predictive work; on this view, predictive modeling and machine learning are effective tools for theory testing.
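
To make the distinction at issue concrete, here is a minimal sketch contrasting an empirical ROC curve/AUC with a parametrically smoothed (binormal) ROC fit on the same data. The synthetic scores and all parameter choices are illustrative assumptions; this is not the replication code from either article.

```python
# Hedged sketch: empirical ROC/AUC vs. a binormal "smoothed" ROC fit.
# Synthetic data only; not the authors' civil-war forecasting pipeline.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic rare-event data: classifier scores for negatives and rarer positives.
neg = rng.normal(loc=0.0, scale=1.0, size=5000)   # non-onset cases
pos = rng.normal(loc=0.8, scale=1.2, size=50)     # onset cases
scores = np.concatenate([neg, pos])
labels = np.concatenate([np.zeros(neg.size), np.ones(pos.size)])

# Empirical ROC: sweep thresholds over the observed scores.
order = np.argsort(-scores)
tpr = np.concatenate([[0.0], np.cumsum(labels[order]) / pos.size])
fpr = np.concatenate([[0.0], np.cumsum(1.0 - labels[order]) / neg.size])
auc_empirical = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0)

# Parametrically smoothed (binormal) ROC: assume Gaussian scores per class.
a = (pos.mean() - neg.mean()) / pos.std(ddof=1)
b = neg.std(ddof=1) / pos.std(ddof=1)
auc_binormal = norm.cdf(a / np.sqrt(1.0 + b ** 2))

print(f"empirical AUC:           {auc_empirical:.3f}")
print(f"binormal (smoothed) AUC: {auc_binormal:.3f}")
```

With well-behaved, roughly Gaussian scores the two AUCs agree closely; with sparse or irregular positive-class scores the smoothed fit can diverge from the empirical curve, which is the kind of sensitivity the critique turns on.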


1995 ◽  
Vol 412 ◽  
Author(s):  
William M. Murphy ◽  
English C. Pearcy

A review of the natural analog study at the Akrotiri archaeological site is provided with regard to its use in support of long-term predictive modeling of chemical transport. Evidence for a contaminant plume was collected from samples taken beneath the location where artifacts had been buried in silicic tuff for 3600 years. A transport model was developed using site characterization data, but no data from the plume. Qualitative field data support the model result that the flux of metals from the artifacts was small. However, transient transport characteristics and the role of notable system heterogeneities and complexities were not fully represented by the model. The Akrotiri natural analog study indicates that long-term releases and transport may be limited at an unsaturated repository site, but that releases may be strongly affected by processes that are unknown or neglected in simplified models.
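
For context on what a simplified transport model of this kind computes, here is a generic one-dimensional advection-dispersion sketch (Ogata-Banks continuous-source solution). The parameter values are illustrative assumptions, not the Akrotiri site-characterization data.

```python
# Hedged sketch: generic 1-D advection-dispersion transport (Ogata-Banks
# continuous-source solution). Parameters are illustrative, not site data.
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Relative concentration c/c0 at distance x (m) and time t (yr) for
    pore velocity v (m/yr) and dispersion coefficient D (m^2/yr)."""
    arg1 = (x - v * t) / (2.0 * np.sqrt(D * t))
    arg2 = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(arg1) + np.exp(v * x / D) * erfc(arg2))

# Concentration 1 m below a buried source after 3600 years of slow,
# unsaturated-zone transport (illustrative parameter values).
print(ogata_banks(x=1.0, t=3600.0, v=1e-4, D=5e-4))
```

Small velocity and dispersion values keep the predicted plume near the source, the kind of limited-release behavior the study reports qualitatively.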


Author(s):  
Zhihao Wang ◽  
Katharina S. Goerlich ◽  
Hui Ai ◽  
André Aleman ◽  
Yuejia Luo ◽  
...  

Anxiety-related illnesses are highly prevalent in human society. Being able to identify neurobiological markers that signal high trait anxiety could aid the assessment of individuals at high risk of mental illness. Here, we applied connectome-based predictive modeling (CPM) to whole-brain resting-state functional connectivity (rsFC) data to predict the degree of anxiety in 76 healthy participants. Using a computational "lesion" method in CPM, we then examined the weights of the identified main brain areas as well as their connectivity. Results showed that CPM could predict individual anxiety from whole-brain rsFC, especially from the connectivity of limbic areas and of the prefrontal cortex with the rest of the brain. The predictive power of the model decreased significantly under (simulated) lesions of limbic areas, lesions of connectivity within the limbic system, and lesions of connectivity between limbic regions and the prefrontal cortex. Although the same model also predicted depression, anxiety-specific networks could be identified independently, centered on the prefrontal cortex. These findings highlight the important role of the limbic system and the prefrontal cortex in the prediction of anxiety. Our work provides evidence for the usefulness of connectome-based modeling of rsFC in predicting individual differences in personality and indicates its potential for identifying personality structures at risk of developing psychopathology.
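
A minimal sketch of the connectome-based predictive modeling loop with a computational "lesion" is shown below (edge selection by correlation, summed network strength, leave-one-out linear model). Function names, thresholds, and the positive-edge-only network are simplifying assumptions, not the authors' pipeline.

```python
# Hedged sketch of connectome-based predictive modeling (CPM) with a
# computational "lesion". Illustrative assumptions only, not the authors' code.
import numpy as np
from scipy.stats import pearsonr

def cpm_loocv(fc, behavior, p_thresh=0.01, lesion=None):
    """Leave-one-out CPM.
    fc:       (n_subjects, n_edges) vectorized functional-connectivity edges
    behavior: (n_subjects,) trait scores (e.g., anxiety)
    lesion:   optional boolean mask over edges; masked edges are zeroed to
              simulate lesioning a region's connections."""
    fc = fc.copy()
    if lesion is not None:
        fc[:, lesion] = 0.0                      # computational "lesion"
    n_sub = fc.shape[0]
    preds = np.empty(n_sub)
    for i in range(n_sub):
        train = np.delete(np.arange(n_sub), i)
        # Correlate every edge with behavior in the training fold.
        stats = [pearsonr(fc[train, e], behavior[train]) for e in range(fc.shape[1])]
        r = np.array([s[0] for s in stats])
        p = np.array([s[1] for s in stats])
        pos = (r > 0) & (p < p_thresh)           # positively related edges
        if not pos.any():
            pos[np.nanargmax(r)] = True          # fallback: best single edge
        strength = fc[:, pos].sum(axis=1)        # summed network strength
        slope, intercept = np.polyfit(strength[train], behavior[train], 1)
        preds[i] = slope * strength[i] + intercept
    # Prediction power: correlation between predicted and observed scores.
    return pearsonr(preds, behavior)[0]
```

Running this once without a lesion mask and once with, for example, all edges incident to limbic nodes masked yields the drop in prediction power (predicted-observed correlation) that the abstract describes.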


2020 ◽  
Vol 16 (1) ◽  
pp. 453-472
Author(s):  
Rebecca A. Johnson ◽  
Tanina Rostain

The rise of big data and machine learning is a polarizing force among those studying inequality and the law. Big data and tools like predictive modeling may amplify inequalities in the law, subjecting vulnerable individuals to enhanced surveillance. But these data and tools may also serve an opposite function, shining a spotlight on inequality and subjecting powerful institutions to enhanced oversight. We begin with a typology of the role of big data in inequality and the law. The typology asks questions—Which type of individual or institutional actor holds the data? What problem is the actor trying to use the data to solve?—that help situate the use of big data within existing scholarship on law and inequality. We then highlight the dual uses of big data and computational methods—data for surveillance and data as a spotlight—in three areas of law: rental housing, child welfare, and opioid prescribing. Our review highlights asymmetries where the lack of data infrastructure to measure basic facts about inequality within the law has impeded the spotlight function.

