Design of multi-loop control systems for distillation columns: review of past and recent mathematical tools

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Changsoo Kim ◽  
Manas Shah ◽  
Ali M. Sahlodin

Abstract. Design of a control structure in distillation columns involves selecting proper sets of manipulated and controlled variables (often including tray temperatures for inferential control of product compositions) and one-to-one pairing between the two sets. In this paper, various mathematical tools for achieving this goal are reviewed. First, traditional methods such as Singular Value Decomposition (SVD) and Relative Gain Array (RGA) that build upon a simplified steady-state or dynamic model of the column are explored. The role of optimization in systematizing the control design procedures is also investigated. Then, more recent inferential control techniques that rely on statistical methods such as Principal Component Regression (PCR), Partial Least Squares Regression (PLSR), and other machine learning techniques such as Artificial Neural Networks (ANN) and Support Vector Machine Regression (SVMR) are discussed extensively. The discussions include newer distillation technologies with complex configurations such as dividing-wall columns. Finally, the use of process simulators in aiding the control structure design of distillation columns is surveyed.
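The RGA mentioned in this abstract is straightforward to compute from a steady-state gain matrix. A minimal sketch, using the classic Wood and Berry column gains as an assumed example (not a model taken from this review), applies Bristol's formula: the element-wise product of the gain matrix with the transpose of its inverse.

```python
import numpy as np

# Steady-state gain matrix of a 2x2 distillation column; these are the
# well-known Wood and Berry column gains, used here only as an example.
G = np.array([[12.8, -18.9],
              [ 6.6, -19.4]])

# Bristol's Relative Gain Array: element-wise product of G with the
# transpose of its inverse.
rga = G * np.linalg.inv(G).T

print(rga)  # rows and columns of an RGA always sum to 1
```

Here the diagonal relative gains come out near 2, which signals loop interaction but still favors the diagonal input-output pairing over the off-diagonal one.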

2020 ◽  
Vol 12 (18) ◽  
pp. 3082
Author(s):  
James Kobina Mensah Biney ◽  
Luboš Borůvka ◽  
Prince Chapman Agyeman ◽  
Karel Němeček ◽  
Aleš Klement

Spectroscopy has demonstrated the ability to predict specific soil properties and is consequently a promising complement to traditional methods that are costly and time-consuming. In the visible-near infrared (Vis-NIR) region, spectroscopy has been widely used for the rapid determination of organic components, especially soil organic carbon (SOC), using laboratory dry (lab-dry) measurement. However, steps such as collecting, grinding, sieving, and drying the soil at ambient (room) temperature and humidity for several days make the lab-dry preparation slow compared to field or laboratory wet (lab-wet) measurement. The use of soil spectra measured directly in the field or on wet samples remains challenging due to uncontrolled soil moisture variations and other environmental conditions. Nevertheless, for direct and timely prediction and mapping of soil properties, especially SOC, field or lab-wet measurement could be an option in place of lab-dry measurement. This study compares field measurements and laboratory measurements of naturally wet samples in the visible (VIS), near-infrared (NIR), and Vis-NIR ranges using several pretreatment approaches, including orthogonal signal correction (OSC). The comparison concluded with the development of validation models for SOC prediction based on partial least squares regression (PLSR) and support vector machine regression (SVMR). For the OSC implementation, principal component regression (PCR) was used together with PLSR, as SVMR is not appropriate under OSC. For SOC prediction, the field measurement performed better in the VIS range (R2cv = 0.47, RMSEPcv = 0.24), while in the Vis-NIR range the lab-wet measurement performed better (R2cv = 0.44, RMSEPcv = 0.25), both using the SVMR algorithm. However, prediction accuracy improved with the introduction of OSC for both sample types.
The highest prediction accuracy was obtained with the lab-wet dataset (using PLSR) in the NIR and Vis-NIR ranges, with R2cv = 0.54/0.55 and RMSEPcv = 0.24. These results indicate that field and, in particular, lab-wet measurements, which are not commonly used, can also be useful for SOC prediction, just like the lab-dry method, with some adjustments.


2021 ◽  
Vol 11 (13) ◽  
pp. 5895
Author(s):  
Kristina Serec ◽  
Sanja Dolanski Babić

The double-stranded B-form and A-form have long been considered the two most important native forms of DNA, each with its own distinct biological roles, and hence they are the focus of many areas of study, from cellular functions to cancer diagnostics and drug treatment. Due to the heterogeneity and sensitivity of the secondary structure of DNA, there is a need for tools capable of rapid and reliable quantification of DNA conformation in diverse environments. In this work, the second paper in a series addressing conformational transitions in DNA thin films using FTIR spectroscopy, we exploit popular chemometric methods: principal component analysis (PCA), the support vector machine (SVM) learning algorithm, and principal component regression (PCR), in order to quantify and categorize DNA conformation in thin films in different hydrated states. By complementing the FTIR technique with multivariate statistical methods, we demonstrate the ability of our sample preparation and automated spectral analysis protocol to rapidly and efficiently determine the conformation in DNA thin films based on the vibrational signatures in the 1800–935 cm−1 range. Furthermore, we assess the impact of small hydration-related changes in FTIR spectra on automated DNA conformation detection and show how to avoid discrepancies through careful sampling.
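A PCA-plus-SVM classification pipeline of the kind described can be sketched as follows. The Gaussian "bands" and the A/B class labels are invented stand-ins for FTIR spectra, not the authors' data or protocol.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Invented stand-ins for FTIR spectra in the 1800-935 cm^-1 range:
# two classes with slightly shifted Gaussian "bands".
wavenumbers = np.linspace(1800, 935, 300)

def band(center, width=25.0):
    return np.exp(-((wavenumbers - center) / width) ** 2)

n_per = 40
a_form = band(1705) + band(1375) + 0.05 * rng.normal(size=(n_per, 300))
b_form = band(1715) + band(1225) + 0.05 * rng.normal(size=(n_per, 300))
X = np.vstack([a_form, b_form])
y = np.array(["A"] * n_per + ["B"] * n_per)

# Reduce the spectra to a few principal components, then classify.
clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                    SVC(kernel="linear"))
clf.fit(X, y)

acc = clf.score(X, y)
print(f"training accuracy: {acc:.2f}")
```

The PCA step compresses hundreds of correlated absorbance channels into a handful of scores, which is what makes the SVM decision boundary stable on small spectroscopic datasets.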


2019 ◽  
Vol 2019 ◽  
pp. 1-8
Author(s):  
Guzide Pekcan Ertokus

Levodopa and carbidopa, drugs used to treat Parkinson's disease, were analyzed by a combined spectrophotometric-chemometric approach without any prior separation. The drugs were also determined simultaneously, spectrophotometrically, in the urine sample of a healthy person who had never used them. The chemometric methods used were partial least squares regression (PLS) and principal component regression (PCR). Both were successfully applied to the chemometric determination of levodopa and carbidopa in human urine samples. A concentration set comprising binary mixtures of levodopa and carbidopa in 15 different combinations was randomly prepared in acetate buffer (pH 3.5). UV spectrophotometry is a relatively inexpensive, reliable, and less time-consuming method. The Minitab program was used to process the absorbance and concentration values. The linear fits for each active substance were good (r2 > 0.9997). Additionally, the experimental data were validated statistically. The analyses revealed high recoveries and low standard deviations, which encouraged application of the method to drug analysis. The proposed methods are highly sensitive and precise, and they were therefore successfully applied to the determination of the active substances in the urine sample of a healthy person.
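The PCR approach to resolving a two-component mixture from overlapping spectra can be sketched as follows. The Gaussian absorbance bands and concentration ranges are hypothetical illustrations, not the paper's actual spectra of levodopa and carbidopa.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

# Hypothetical UV absorbance data: 15 binary mixtures of two analytes
# with overlapping bands, following Beer's law plus noise.
wavelengths = np.linspace(220, 320, 101)
s1 = np.exp(-((wavelengths - 255) / 15.0) ** 2)   # analyte 1 band
s2 = np.exp(-((wavelengths - 270) / 18.0) ** 2)   # analyte 2 band
C = rng.uniform(1.0, 10.0, size=(15, 2))          # concentrations
A = C @ np.vstack([s1, s2]) + 0.002 * rng.normal(size=(15, 101))

# PCR: compress the spectra to principal components, then regress the
# concentrations on the component scores.
pcr = make_pipeline(PCA(n_components=2), LinearRegression())
pcr.fit(A, C)

r2 = pcr.score(A, C)
print(f"R2 = {r2:.4f}")
```

Two principal components suffice here because a two-analyte Beer's-law mixture spans a two-dimensional spectral subspace; real urine matrices would demand more components or PLS.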


Author(s):  
Gonzalo Vergara ◽  
Juan J. Carrasco ◽  
Jesus Martínez-Gómez ◽  
Manuel Domínguez ◽  
José A. Gámez ◽  
...  

The study of energy efficiency in buildings is an active field of research. Modeling and predicting energy-related magnitudes enables the analysis of electric power consumption and can yield economic benefits. In this study, classical time series analysis and machine learning techniques, introducing clustering into some models, are applied to predict active power in buildings. The real data acquired correspond to time, environmental, and electrical data from 30 buildings belonging to the University of León (Spain). First, we segmented the buildings in terms of their energy consumption using principal component analysis. Afterwards, we applied state-of-the-art machine learning methods and compared them. Finally, we predicted daily electric power consumption profiles and compared them with actual data for different buildings. Our analysis shows that multilayer perceptrons have the lowest error, followed by support vector regression and clustered extreme learning machines. We also analyze daily load profiles on weekdays and weekends for different buildings.
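A multilayer-perceptron load model of the kind that performed best here can be sketched as follows. The sinusoidal daily profile and the weekday/weekend damping are synthetic assumptions, not the University of León data.

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic daily load data: active power (kW) as a smooth function of
# the hour of day, damped at weekends. Invented, not the paper's data.
hours = rng.uniform(0, 24, size=600)
weekend = rng.integers(0, 2, size=600).astype(float)
power = (50 + 30 * np.sin((hours - 6) * np.pi / 12) * (1 - 0.5 * weekend)
         + 2 * rng.normal(size=600))
X = np.column_stack([hours, weekend])

# Scale both inputs and target so the network trains reliably.
mlp = TransformedTargetRegressor(
    regressor=make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                     random_state=0)),
    transformer=StandardScaler())
mlp.fit(X, power)

r2 = mlp.score(X, power)
print(f"training R2 = {r2:.3f}")
```

Wrapping the target in a scaler is a practical detail: neural regressors converge much faster when the response is standardized, then predictions are mapped back to kW automatically.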


1996 ◽  
Vol 4 (1) ◽  
pp. 225-242 ◽  
Author(s):  
Paul Geladi ◽  
Harald Martens

Regression and calibration play an important role in analytical chemistry. All analytical instrumentation depends on a calibration that uses some regression model for a set of calibration samples. The ordinary least squares (OLS) method of building a multivariate linear regression (MLR) model has strict limitations. Therefore, biased or regularised regression models have been introduced. Some selected ones are ridge regression (RR), principal component regression (PCR) and partial least squares regression (PLS or PLSR). Also, artificial neural networks (ANN) based on back-propagation can be used as regression models. Understanding regression models requires more than just a set of statistical parameters; a deeper understanding of the underlying chemistry and physics is always equally important. For spectral data this means that a basic understanding of spectra and their errors is useful, and that spectral representation should be included in judging the usefulness of the data treatment. A "constructed" spectrometric example is introduced. It consists of real spectrometric measurements in the range 408–1176 nm for 26 calibration samples and 10 test samples. The main response variable is litmus concentration, but other constituents such as bromocresol green and ZnO are added as interferents and the pH is also changed. The example is introduced as a tutorial. All calculations are shown in detail in Matlab. This makes it easy for the reader to follow and understand the calculations. It also makes the calculations completely traceable. The raw data are available as a file. In Part 1, the emphasis is on pretreatment of the data and on visualisation in different stages of the calculations. Part 1 ends with principal component regression calculations. Partial least squares calculations and some ANN results are presented in Part 2.
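Although the tutorial's calculations are in Matlab, the core motivation for biased regression, namely that OLS coefficients blow up under collinearity while a ridge penalty shrinks them, can be shown in a few lines. The near-duplicate predictor below is a constructed example, not data from the tutorial.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(4)

# Two nearly collinear predictors, as commonly found in spectral data.
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 1e-4 * rng.normal(size=n)   # x2 is almost a copy of x1
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.normal(size=n)

ols = LinearRegression().fit(X, y)    # coefficients blow up
ridge = Ridge(alpha=1.0).fit(X, y)    # the penalty shrinks them

print("OLS coefficients:  ", ols.coef_)
print("Ridge coefficients:", ridge.coef_)
```

The ridge solution splits the true unit coefficient roughly evenly between the two correlated columns, which is exactly the stabilizing behaviour that motivates RR, PCR, and PLS for spectroscopic calibration.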


2013 ◽  
Vol 9 (3) ◽  
pp. 1153-1160 ◽  
Author(s):  
Q. Ge ◽  
Z. Hao ◽  
J. Zheng ◽  
X. Shao

Abstract. We use principal component regression and partial least squares regression to separately reconstruct a composite series of temperature variations in China, and associated uncertainties, at a decadal resolution over the past 2000 yr. The reconstruction is developed using proxy temperature data with relatively high confidence levels from five regions across China, and using a temperature series from observations by the Chinese Meteorological Administration, covering the period from 1871 to 2000. Relative to the 1851–1950 climatology, our two reconstructions show four warm intervals during AD 1–AD 200, AD 551–AD 760, AD 951–AD 1320, and after AD 1921, and four cold intervals during AD 201–AD 350, AD 441–AD 530, AD 781–AD 950, and AD 1321–AD 1920. The temperatures during AD 981–AD 1100 and AD 1201–AD 1270 are comparable to those of the Present Warm Period, but have an uncertainty of ±0.28 °C to ±0.42 °C at the 95% confidence interval. Temperature variations over China are typically in phase with those of the Northern Hemisphere (NH) after 1000, a period which covers the Medieval Climate Anomaly, the Little Ice Age, and the Present Warm Period. In contrast, a warm period in China during AD 541–AD 740 is not obviously seen in the NH.


2019 ◽  
Vol 8 (2) ◽  
pp. 3697-3705 ◽  

Forest fires have become one of the most frequently occurring disasters in recent years. They have a lasting impact on the environment, as they lead to deforestation and global warming, which in turn is one of the major causes of their occurrence. Forest fires are typically dealt with by collecting satellite images of forests and notifying the authorities of any emergency caused by the fires so that its effects can be mitigated; however, by the time the authorities learn of a fire, it has often already caused substantial damage. Data mining and machine learning techniques can provide an efficient prevention approach, where data associated with forests are used to predict the eventuality of forest fires. This paper uses a dataset from the UCI machine learning repository that consists of physical factors and climatic conditions of the Montesinho park in Portugal. Various algorithms, such as logistic regression, support vector machines, random forests, and k-nearest neighbors, in addition to bagging and boosting predictors, are used, both with and without Principal Component Analysis (PCA). Among the models in which PCA was applied, logistic regression gave the highest F1 score of 68.26, and among the models without PCA, gradient boosting gave the highest score of 68.36.
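The PCA-then-classifier workflow evaluated by F1 score can be sketched as follows. Here `make_classification` stands in for the Montesinho dataset, so the resulting score will not match the paper's reported values.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the Montesinho data: numeric weather/physical
# features and a binary fire/no-fire label.
X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# PCA for dimensionality reduction, then logistic regression.
clf = make_pipeline(StandardScaler(), PCA(n_components=6),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)

f1 = f1_score(y_te, clf.predict(X_te))
print(f"test F1 = {f1:.3f}")
```

Standardizing before PCA matters when features are on different physical scales (temperature, humidity, wind), as they are in the forest-fire dataset.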

