A Modular Framework for the Modelling and Optimization of Advanced Chromatographic Processes

Processes ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 65 ◽  
Author(s):  
Johannes Schmölder ◽  
Malte Kaspereit

A framework is introduced for the systematic development of preparative chromatographic processes. It is intended for the optimal design of conventional and advanced concepts that exploit strategies such as recycling, side streams, bypasses, single or multiple columns, and combinations thereof. The Python-based platform simplifies the implementation of new processes and design problems by decoupling design tasks into individual modules for modelling, simulation, assertion of cyclic stationarity, product fractionation, and optimization. Interfaces to external libraries provide flexibility regarding the choice of column model, solver, and optimizer. The current implementation, named CADET-Process, uses the software CADET for solving the model equations. The structure of the framework is discussed, and its application to the optimal design of existing concepts and the identification of new chromatographic operating concepts is demonstrated in case studies.
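As a loose illustration of this decoupling, the following Python sketch chains hypothetical modelling, simulation, cyclic-stationarity, and fractionation modules into a single objective for an external optimizer. All function names (simulate_cycles, check_stationarity, fractionate, objective) and the toy outlet signal are invented for this sketch and are not the actual CADET-Process API.

# Hedged sketch of the decoupled module structure described above; all names
# and numbers are hypothetical placeholders, not the CADET-Process API.
from scipy.optimize import minimize


def simulate_cycles(flow_rate, n_cycles=50):
    """Simulation module: in the real framework the model equations are
    solved by an external library (CADET); here a toy outlet signal."""
    return [1.0 - 0.5 ** (k * flow_rate) for k in range(1, n_cycles + 1)]


def check_stationarity(outlet, tol=1e-4):
    """Cyclic-stationarity module: successive cycles must stop changing."""
    return abs(outlet[-1] - outlet[-2]) < tol


def fractionate(outlet, cut):
    """Fractionation module: toy mapping from outlet signal and cut point
    to a productivity-like performance measure."""
    return outlet[-1] * (1.0 - cut)


def objective(x):
    """Optimization module: chain simulation, stationarity check, and
    fractionation into a single objective for an external optimizer."""
    flow_rate, cut = x
    outlet = simulate_cycles(flow_rate)
    if not check_stationarity(outlet):
        return 1e6  # penalize operating points that never become stationary
    return -fractionate(outlet, cut)  # maximize performance


result = minimize(objective, x0=[0.5, 0.2], bounds=[(0.1, 2.0), (0.0, 0.9)])
print(result.x)

In the real framework, the simulation step would call CADET through the framework's interfaces, and the column model, solver, and optimizer could each be exchanged via the corresponding module.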

Author(s):  
Danilo Quagliotti

The assessment of systematic behavior by means of frequentist statistics was analyzed in the context of micro/nano metrology. The proposed method is in agreement with the well-known GUM recommendations. The investigation covered three case studies, each with definition of the model equations and establishment of traceability. The systematic behavior was modeled for Sq roughness parameters and step-height measurements obtained from different types of optical microscopes and compared against a calibrated contact instrument. The sequence of case studies demonstrated the applicability of the method to micrographs when their elements are averaged. Moreover, a number of influence factors that are typical causes of inaccuracy at the micro and nano length scales were analyzed in relation to the correction of the systematic behavior, viz. the number of repeated measurements, the time sequence of the acquired micrographs, and the instrument-operator chain. Applying the method individually to the elements of the micrographs, by contrast, proved impractical and too onerous for industry. Finally, the method was also examined against the framework of metrological characteristics defined in ISO 25178-600, with hints on possible future developments.
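As a rough numerical illustration of a GUM-consistent correction of systematic behavior, the following sketch estimates a bias on a calibrated artefact, applies it to a later measurement, and combines the uncertainty contributions. The Sq values, uncertainties, and sample sizes are invented for illustration and do not reproduce the study's data.

# Hedged numerical sketch of a GUM-style correction for systematic behavior:
# a bias is estimated on a calibrated artefact and then applied to a later
# measurement of an unknown sample. All numbers are invented for illustration.
import math
import statistics

# Calibration: repeated Sq values on a calibrated artefact (certified 100.0 nm).
cal_readings = [102.3, 101.8, 102.9, 102.1, 102.5]   # nm
sq_reference = 100.0                                  # nm, certified value
u_reference = 0.4                                     # nm, standard uncertainty of the certificate

bias = statistics.mean(cal_readings) - sq_reference
u_bias = math.sqrt(
    (statistics.stdev(cal_readings) / math.sqrt(len(cal_readings))) ** 2 + u_reference ** 2
)

# Measurement of an unknown sample with the same instrument.
sample_readings = [87.6, 88.1, 87.9, 88.4]            # nm
mean_sample = statistics.mean(sample_readings)
u_sample = statistics.stdev(sample_readings) / math.sqrt(len(sample_readings))  # Type A

# Corrected result and combined standard uncertainty (uncorrelated contributions).
corrected = mean_sample - bias
u_combined = math.sqrt(u_sample ** 2 + u_bias ** 2)
print(f"Sq = {corrected:.2f} nm, U(k=2) = {2 * u_combined:.2f} nm")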


Author(s):  
Ryohei Yokoyama ◽  
Yuji Shinano ◽  
Yuki Wakayama ◽  
Tetsuya Wakui

To attain the highest performance of energy supply systems, it is necessary to rationally determine the types, capacities, and numbers of equipment in consideration of their operational strategies corresponding to seasonal and hourly variations in energy demands. Mixed-integer linear programming (MILP) approaches have been applied widely to such optimal design problems. The authors have previously proposed a MILP method that exploits the hierarchical relationship between design and operation variables to solve the optimal design problems of energy supply systems efficiently. In addition, some strategies to enhance the computational efficiency have been adopted: bounding procedures at both levels and ordering of the optimal operation problems at the lower level. In this paper, as an additional strategy to enhance the computational efficiency, parallel computing is adopted to solve multiple optimal operation problems simultaneously at the lower level. In addition, the effectiveness of each strategy, and of combinations of the previously and newly adopted strategies, is investigated. The hierarchical optimization method is applied to the optimal design of a gas turbine cogeneration plant, and its validity and effectiveness are clarified through several case studies.
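The following Python sketch illustrates, with invented numbers and a deliberately tiny linear program, the kind of hierarchical evaluation described above: for each candidate design at the upper level, the independent operation problems of the demand periods are solved in parallel at the lower level and their costs summed. It is not the authors' MILP formulation.

# Hedged sketch of the hierarchical decomposition with a parallel lower level.
# The tiny LP, the cost numbers, and the candidate designs are invented.
from concurrent.futures import ProcessPoolExecutor
from scipy.optimize import linprog


def operation_cost(args):
    """Lower level: cheapest dispatch of two units to cover one period's demand."""
    capacities, unit_costs, demand = args
    res = linprog(
        c=unit_costs,                      # minimize operation cost
        A_ub=[[-1.0, -1.0]],               # combined output must cover the demand
        b_ub=[-demand],
        bounds=[(0.0, cap) for cap in capacities],
        method="highs",
    )
    return res.fun if res.success else float("inf")  # infeasible period rejects the design


def evaluate_design(capacities, unit_costs, fixed_cost, demands, executor):
    """Upper level: annualized fixed cost plus summed optimal operation costs."""
    jobs = [(capacities, unit_costs, d) for d in demands]
    return fixed_cost + sum(executor.map(operation_cost, jobs))


if __name__ == "__main__":
    demands = [3.0, 5.0, 8.0, 6.0]                    # representative demand periods
    candidates = [
        ((5.0, 5.0), (1.0, 1.5), 10.0),               # (capacities, unit costs, fixed cost)
        ((8.0, 2.0), (1.2, 1.0), 12.0),
    ]
    with ProcessPoolExecutor() as executor:
        best = min(
            candidates,
            key=lambda c: evaluate_design(c[0], c[1], c[2], demands, executor),
        )
    print("selected design:", best)

In the actual method, the lower-level problems are MILPs embedded in a branch-and-bound scheme with bounding and ordering strategies; the sketch only shows why the period-wise subproblems can be dispatched to parallel workers.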


2012 ◽  
Vol 93 (9) ◽  
pp. 1389-1400 ◽  
Author(s):  
R. A. J. Neggers ◽  
A. P. Siebesma ◽  
T. Heus

Uncertainties in numerical predictions of weather and climate are often linked to the representation of unresolved processes that act relatively quickly compared to the resolved general circulation. These processes include turbulence, convection, clouds, and radiation. Single-column model (SCM) simulation of idealized cases, followed by evaluation against large-eddy simulation (LES) results, has become a widely used method for gaining process-level insight into the behavior of such parameterization schemes; the benefits of SCM simulation are enhanced model transparency and high computational efficiency. Although this approach has achieved demonstrable success, some shortcomings have been identified; among these, i) the statistical significance and relevance of single idealized case studies might be questioned and ii) the use of observational datasets has been relatively limited. A recently initiated project named the Royal Netherlands Meteorological Institute (KNMI) Parameterization Testbed (KPT) is part of a general move toward more statistically significant process-level evaluation, with the purpose of optimizing the identification of problems in general circulation models that are related to parameterization schemes. The main strategy of KPT is to apply continuous long-term SCM simulation and LES at various permanent meteorological sites, in combination with comprehensive evaluation against observations at multiple time scales. We argue that this strategy enables the reproduction of the typical long-term mean behavior of fast physics in large-scale models while still preserving the benefits of single-case studies (such as model transparency). This facilitates the tracing and understanding of errors in parameterization schemes, which should eventually lead to a reduction of the related uncertainties in numerical predictions of weather and climate.
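The multi-timescale evaluation idea can be sketched as follows, using synthetic data in place of real SCM output and site observations; the variable, the averaging windows, and the error metrics are chosen only for illustration and are not taken from the KPT setup.

# Hedged sketch: compare a long model time series against observations at
# several averaging periods. The synthetic data are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
times = pd.date_range("2010-01-01", periods=3 * 365, freq="D")
obs = pd.Series(0.5 + 0.2 * np.sin(np.arange(times.size) / 50.0), index=times)
scm = obs + rng.normal(0.05, 0.1, times.size)   # model with a small bias and noise

for window in ["7D", "30D", "90D"]:             # weekly, monthly, seasonal means
    o, m = obs.resample(window).mean(), scm.resample(window).mean()
    bias = (m - o).mean()
    rmse = np.sqrt(((m - o) ** 2).mean())
    print(f"{window:>4}: bias={bias:+.3f}, rmse={rmse:.3f}")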

