Systematic and Model-Assisted Process Design for the Extraction and Purification of Artemisinin from Artemisia annua L.—Part II: Model-Based Design of Agitated and Packed Columns for Multistage Extraction and Scrubbing

Processes ◽  
2018 ◽  
Vol 6 (10) ◽  
pp. 179 ◽  
Author(s):  
Axel Schmidt ◽  
Maximilian Sixt ◽  
Maximilian Huter ◽  
Fabian Mestmäcker ◽  
Jochen Strube

Liquid-liquid extraction (LLE) is an established unit operation in the manufacturing processes of many products. However, the development and integration of multistage LLE for new products and separation routes is often hindered, and made more cost-intensive, by a lack of robust development strategies and reliable process models. Even today, extraction columns are designed based on pilot-plant experiments. For dimensioning, knowledge of the phase equilibrium, hydrodynamics, and mass-transport kinetics is necessary. Usually, these must be determined experimentally for scale-up, at least at scales of DN50-150 (nominal diameter). This experiment-based methodology is time-consuming and requires large amounts of feedstock, especially in the early phase of a project. In this study, the development and integration of LLE in a new manufacturing process for artemisinin, an anti-malaria drug, is presented. For this, a combination of miniaturized laboratory and mini-plant experiments supported by mathematical modelling is used. System data on extraction and washing distributions were determined by means of shaking tests and implemented as a multi-stage extraction in a process model. After determination of the model parameters for mass transfer and plant hydrodynamics in a droplet measurement apparatus, a distributed plug-flow model is used for scale-up studies. Operating points are validated in a mini-plant system. The mini-plant runs are executed in a Kühni column (DN26) for extraction and a packed extraction column (DN26) for the separation of side components, with a throughput of up to 3.6 L/h, a yield of up to 100%, and an increase in purity from 41% in the feed mixture to 91% after washing.
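The benefit of multiple counter-current stages described in this abstract can be illustrated with a shortcut stage calculation. The sketch below uses the classic Kremser equation with an assumed constant partition coefficient; it is an illustration of the stagewise principle only, not the authors' distributed plug-flow model, and the parameter values are made up.

```python
def kremser_yield(partition_k, solvent_to_feed, n_stages):
    """Fraction of solute extracted in an ideal counter-current cascade
    (Kremser shortcut; assumes a constant partition coefficient)."""
    e = partition_k * solvent_to_feed  # extraction factor
    if abs(e - 1.0) < 1e-12:
        return n_stages / (n_stages + 1.0)
    return (e ** (n_stages + 1) - e) / (e ** (n_stages + 1) - 1)

# More stages push the yield toward 100%, mirroring the multistage design.
yields = [kremser_yield(2.0, 1.0, n) for n in (1, 3, 6)]
```

With an extraction factor of 2, one stage recovers about two thirds of the solute, while six stages recover over 99%, which is why a column with many theoretical stages can approach the near-quantitative yields reported above.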

Author(s):  
Paul Witherell ◽  
Shaw Feng ◽  
Timothy W. Simpson ◽  
David B. Saint John ◽  
Pan Michaleris ◽  
...  

In this paper, we advocate for a more harmonized approach to model development for additive manufacturing (AM) processes, through classification and metamodeling that will support AM process model composability, reusability, and integration. We review several types of AM process models and use the direct metal powder bed fusion AM process to provide illustrative examples of the proposed classification and metamodel approach. We describe how a coordinated approach can be used to extend modeling capabilities by promoting model composability. As part of future work, a framework is envisioned to realize a more coherent strategy for model development and deployment.


Author(s):  
Kevin Li ◽  
William Z. Bernstein

Manufacturing taxonomies and accompanying metadata of manufacturing processes have been catalogued both in reference books and in online databases. However, such information remains in a form that is uninformative to the various stages of the product life cycle, including the design phase and manufacturing-related activities. This challenge stems from the varying ways in which the data are captured and represented. In this paper, we explore measures for comparing manufacturing data with the goal of developing a capability-based similarity metric for manufacturing processes. To judge the effectiveness of these metrics, we apply permutations of them to 26 manufacturing process models, such as blow molding, die casting, and milling, that were created based on the ASTM E3012-16 standard. Furthermore, we provide directions towards the development of an aggregate similarity metric that considers multiple capability features. In the future, this work will contribute to the broader vision of a manufacturing process model repository by helping to ease decision-making for engineering design and planning.


Processes ◽  
2018 ◽  
Vol 6 (9) ◽  
pp. 161 ◽  
Author(s):  
Maximilian Sixt ◽  
Axel Schmidt ◽  
Fabian Mestmäcker ◽  
Maximilian Huter ◽  
Lukas Uhlenbrock ◽  
...  

The article summarizes a systematic process design for the extraction and purification of artemisinin from annual mugwort (Artemisia annua L.). Artemisinin serves as an anti-malaria drug; therefore, resource-efficient and economical processes for its production are needed. The process design was based on lab-scale experiments and afterwards piloted at miniplant scale at the institute. In this part of the article, a detailed economic feasibility study is given, comparing a reference process as a benchmark, the lab-scale process, and the pilot-scale process. Relevant differences between the scales are discussed. The details of the respective unit operations (solid-liquid extraction, liquid-liquid extraction, chromatography, and crystallization) are presented in dedicated articles. The study showed that even miniaturized lab-scale experiments are able to deliver data detailed enough for scale-up calculations on a theoretical basis. To our knowledge, a comparably systematic process design and piloting has never been performed in academia before.


2000 ◽  
Vol 41 (10-11) ◽  
pp. 85-91 ◽  
Author(s):  
D. Van Gauwbergen ◽  
J. Baeyens

The modelling of the reverse osmosis process is needed to fully evaluate its potential and facilitate scale-up. The definition of the flow regime in the concentrate channel is of paramount importance. The present paper describes our experimental investigations of the residence time distribution (RTD) and relates the RTD response curves to the flow regime in the concentrate channel. Results demonstrate (i) that dead zones are present; (ii) that both a Plug Flow with Dispersion (PFD) and a Probabilistic Time Delay (PTD) model can be used to characterise the flow; and (iii) that the PFD- and PTD-model parameters assume nearly constant values for a given geometry, which simplifies the prediction of the RTD for any desired flow rate.
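The Plug Flow with Dispersion model mentioned above has, for open-open boundary conditions, a closed-form dimensionless RTD curve parameterized by the Peclet number. The sketch below evaluates this generic textbook form for two Peclet numbers; the values are illustrative and unrelated to the parameters fitted in the paper.

```python
import math

def e_curve_open_open(theta, peclet):
    """Dimensionless RTD E(theta) of the axial-dispersion (PFD) model
    with open-open boundary conditions; theta = t / mean residence time."""
    return (math.sqrt(peclet / (4.0 * math.pi * theta))
            * math.exp(-peclet * (1.0 - theta) ** 2 / (4.0 * theta)))

# A larger Peclet number means less dispersion: a taller, narrower peak
# centred nearer theta = 1, i.e. behaviour closer to ideal plug flow.
peak_low = max(e_curve_open_open(t / 100.0, 5.0) for t in range(1, 300))
peak_high = max(e_curve_open_open(t / 100.0, 50.0) for t in range(1, 300))
```

Fitting the Peclet number to a measured tracer response is the usual route to finding, as the authors report, nearly constant model parameters for a given module geometry.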


2019 ◽  
Vol 12 (1) ◽  
Author(s):  
James J. Lischeske ◽  
Jonathan J. Stickel

Abstract Background Enzymatic hydrolysis remains a significant contributor to the projected production cost of the biological conversion of biomass to fuels and chemicals, motivating research into improved enzyme and reactor technologies in order to reduce enzyme usage and equipment costs. However, technology development is stymied by a lack of accurate and computationally accessible enzymatic-hydrolysis reaction models. Enzymatic deconstruction of cellulosic materials is an exceedingly complex physico-chemical process. Models that elucidate specific mechanisms of deconstruction are often too computationally intensive to be accessible in process or multi-physics simulations, and empirical models are often too inflexible to be effectively applied outside of their batch contexts. In this paper, we employ a phenomenological modeling approach to represent the rate slowdown due to substrate structure (implemented as two substrate phases) and feedback inhibition, and apply the model to a continuous reactor system. Results A phenomenological model was developed in order to predict glucose and solids concentrations in batch and continuous enzymatic-hydrolysis reactors from which liquor is continuously removed by ultrafiltration. A series of batch experiments were performed, varying the initial conditions (solids, enzyme, and sugar concentrations), and best-fit model parameters were determined using constrained nonlinear least-squares methods. The model achieved a good fit for overall sugar yield and insoluble solids concentration, as well as for the reduced rate of sugar production over time. Additionally, without refitting the model coefficients, good quantitative agreement was observed between results from continuous enzymatic-hydrolysis experiments and model predictions. Finally, the sensitivity of the model to its parameters is explored and discussed.
Conclusions Although the phenomena represented by the model correspond to behaviors that emerge from clusters of mechanisms, and hence a set of model coefficients are unique to the substrate and the enzyme system, the model is efficient to solve and may be applied to novel reactor schema and implemented in computational fluid dynamics (CFD) simulations. Hence, this modeling approach finds the right balance between model complexity and computational efficiency. These capabilities have broad application to reactor design, scale-up, and process optimization.
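The two ideas named in the abstract, a two-phase substrate and feedback inhibition by the product, can be sketched with a toy batch simulation. The rate forms, the yield factor, and every parameter value below are invented for illustration; they are not the authors' fitted model.

```python
def hydrolysis_batch(s_fast, s_slow, k_fast, k_slow, k_inhib, hours, dt=0.01):
    """Toy batch hydrolysis: two substrate phases digested at different
    rates, both slowed by glucose feedback inhibition (Euler integration).
    Concentrations in g/L, rate constants in 1/h."""
    glucose = 0.0
    for _ in range(int(hours / dt)):
        inhibition = 1.0 / (1.0 + glucose / k_inhib)
        r_fast = k_fast * s_fast * inhibition  # easily digested phase
        r_slow = k_slow * s_slow * inhibition  # recalcitrant phase
        s_fast -= r_fast * dt
        s_slow -= r_slow * dt
        glucose += 1.1 * (r_fast + r_slow) * dt  # ~1.1 g glucose / g cellulose
    return s_fast + s_slow, glucose

# Illustrative run: the rate slows as the fast phase depletes
# and glucose accumulates.
solids_left, glucose_out = hydrolysis_batch(
    60.0, 40.0, k_fast=0.30, k_slow=0.02, k_inhib=50.0, hours=48.0)
```

Because the fast phase depletes early and the accumulating glucose throttles both rates, most of the sugar appears in the first day, reproducing qualitatively the "reduced rate of sugar production over time" that the model above was fitted to.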


2019 ◽  
Vol 142 (1) ◽  
Author(s):  
Shilan Jin ◽  
Ashif Iquebal ◽  
Satish Bukkapatnam ◽  
Andrew Gaynor ◽  
Yu Ding

Abstract Polishing of additively manufactured products is a multi-stage process, and a different combination of polishing pad and process parameters is employed at each stage. Pad change decisions and endpoint determination currently rely on practitioners’ experience and subjective visual inspection of surface quality. An automated and objective decision process is more desired for delivering consistency and reducing variability. Toward that objective, a model-guided decision-making scheme is developed in this article for the polishing process of a titanium alloy workpiece. The model used is a series of Gaussian process models, each established for a polishing stage at which surface data are gathered. The series of Gaussian process models appear capable of capturing surface changes and variation over the polishing process, resulting in a decision protocol informed by the correlation characteristics over the sample surface. It is found that low correlations reveal the existence of extreme roughness that may be deemed surface defects. Making judicious use of the change pattern in surface correlation provides insights enabling timely actions. Physical polishing of titanium alloy samples and a simulation of this process are used together to demonstrate the merit of the proposed method.
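The link between surface correlation and roughness can be illustrated with a plain sample autocorrelation of a height profile. This is only a simple proxy for the Gaussian-process correlation structure used in the article, and the two profiles below are synthetic.

```python
import math
import random

def lag_autocorrelation(heights, lag=1):
    """Sample autocorrelation of a surface height profile at a given lag."""
    n = len(heights)
    mean = sum(heights) / n
    var = sum((h - mean) ** 2 for h in heights) / n
    if var == 0.0:
        return 1.0
    cov = sum((heights[i] - mean) * (heights[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

random.seed(0)
smooth = [math.sin(i / 10.0) for i in range(200)]        # polished: high correlation
rough = [random.uniform(-1.0, 1.0) for _ in range(200)]  # defect-like: low correlation
```

A well-polished surface varies slowly, so neighbouring heights are strongly correlated; extreme roughness destroys that correlation, which is the signal the article exploits for detecting surface defects and timing pad changes.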


2011 ◽  
Vol 23 (11) ◽  
pp. 2731-2745 ◽  
Author(s):  
Sridevi V. Sarma ◽  
David P. Nguyen ◽  
Gabriela Czanner ◽  
Sylvia Wirth ◽  
Matthew A. Wilson ◽  
...  

Characterizing neural spiking activity as a function of intrinsic and extrinsic factors is important in neuroscience. Point process models are valuable for capturing such information; however, the process of fully applying these models is not always obvious. A complete model application has four broad steps: specification of the model, estimation of model parameters given observed data, verification of the model using goodness of fit, and characterization of the model using confidence bounds. Of these steps, only the first three have been applied widely in the literature, suggesting the need to dedicate a discussion to how the time-rescaling theorem, in combination with parametric bootstrap sampling, can be generally used to compute confidence bounds of point process models. In our first example, we use a generalized linear model of spiking propensity to demonstrate that confidence bounds derived from bootstrap simulations are consistent with those computed from closed-form analytic solutions. In our second example, we consider an adaptive point process model of hippocampal place field plasticity for which no analytical confidence bounds can be derived. We demonstrate how to simulate bootstrap samples from adaptive point process models, how to use these samples to generate confidence bounds, and how to statistically test the hypothesis that neural representations at two time points are significantly different. These examples have been designed as useful guides for performing scientific inference based on point process models.
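For the simplest possible case, a homogeneous Poisson spiking model, the parametric-bootstrap recipe for confidence bounds can be sketched as follows: fit the model, simulate synthetic datasets from the fit, refit each, and take percentiles of the refitted parameter. The rate model and sampler below are illustrative stand-ins for the much richer point-process models in the article.

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw one Poisson variate (Knuth's method; adequate for small rates)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def bootstrap_rate_ci(spike_counts, n_boot=2000, alpha=0.05, seed=1):
    """Parametric bootstrap CI for a homogeneous Poisson firing rate."""
    rng = random.Random(seed)
    rate_hat = sum(spike_counts) / len(spike_counts)  # maximum-likelihood fit
    refits = sorted(
        sum(sample_poisson(rate_hat, rng) for _ in spike_counts) / len(spike_counts)
        for _ in range(n_boot))
    lo = refits[int(alpha / 2 * n_boot)]
    hi = refits[int((1 - alpha / 2) * n_boot) - 1]
    return rate_hat, (lo, hi)

# Hypothetical spike counts per trial.
rate, (lo, hi) = bootstrap_rate_ci([4, 6, 5, 7, 3, 5, 6, 4, 5, 5])
```

The same simulate-refit-percentile loop applies unchanged when, as in the article's second example, the model is adaptive and no analytical bounds exist; only the fitting and simulation steps become more elaborate.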


Processes ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 298 ◽  
Author(s):  
Axel Schmidt ◽  
Jochen Strube

As of today, industrial process development for liquid-liquid extraction and the scale-up of extraction columns is based on an experimental procedure that requires tests at pilot scale. This methodology consumes large amounts of material and time, and the scale-up equations used are crude estimates that include considerable safety margins. Such an approach is practical for well-known systems or for low-value products coupled with high production scale, where the scale-up methodology has less impact on overall profitability. However, for new high-value products in biologics manufacturing, process development based on process understanding and the use of validated process models is imperative. Therefore, a distinct and quantitative validation workflow for liquid-liquid extraction modeling is presented using the example of two complex feed mixtures. Monte-Carlo simulations based on the presented model-parameter determination concept result, for both examples, in prediction accuracy comparable to the experiments and prediction precision within the deviation of the respective experiments. The identification of statistically significant parameters is demonstrated. The presented methodology for model validation will support the implementation of liquid-liquid extraction in the manufacturing of new high-value biological products in regulated industries by providing a workflow to derive a Quality-by-Design-compatible process model.
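The Monte-Carlo idea, propagating parameter uncertainty through the process model to obtain a prediction band, can be sketched generically. The single-stage yield model and the 10% relative parameter spread below are hypothetical placeholders for the paper's validated extraction model and its measured parameter uncertainties.

```python
import random

def single_stage_yield(partition_k, solvent_ratio):
    """Hypothetical model output: yield of one ideal extraction stage."""
    e = partition_k * solvent_ratio
    return e / (1.0 + e)

def monte_carlo_band(model, nominal, rel_sd=0.1, n=2000, seed=0):
    """Perturb every parameter with relative Gaussian noise, re-evaluate
    the model, and report the median and a 95% prediction band."""
    rng = random.Random(seed)
    outs = sorted(
        model(**{k: v * (1.0 + rng.gauss(0.0, rel_sd)) for k, v in nominal.items()})
        for _ in range(n))
    return outs[n // 2], (outs[int(0.025 * n)], outs[int(0.975 * n) - 1])

median, (low, high) = monte_carlo_band(single_stage_yield,
                                       {"partition_k": 2.0, "solvent_ratio": 1.0})
```

Comparing such a model-derived band with the scatter of replicate experiments is the essence of the accuracy-versus-precision check described in the abstract.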


Author(s):  
Daniel McClement ◽  
Nathan P. Lawrence ◽  
Philip Loewen ◽  
Michael Forbes ◽  
Johan Backstrom ◽  
...  

Fixed structure controllers (such as proportional-integral-derivative controllers) are used extensively in industry. Finding a practical and versatile method to tune these controllers, particularly with imprecise process models and limited online computational resources, is an industrially relevant problem which could improve the efficiency of many plants. In this paper, we present two flexible neural network-based approaches capable of tuning any fixed structure controller for any control objective and process model and compare their advantages and disadvantages. The first approach is derived from supervised learning and classical optimization techniques, while the second approach applies techniques used in deep reinforcement learning. Both approaches incorporate model uncertainties when selecting controller parameters, reducing the need for costly experiments to precisely estimate model parameters in a plant. Both methods are also computationally efficient online, enabling their widespread usage.
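A minimal classical analogue of tuning a fixed-structure controller under model uncertainty is to score each candidate gain set across several sampled plant models and keep the gains with the best average performance. The first-order plant, PI control law, and candidate grid below are hypothetical, and this brute-force search merely stands in for the paper's neural-network approaches.

```python
def step_iae(kp, ki, gain, tau, t_end=20.0, dt=0.01):
    """Integral absolute error of a unit setpoint step for a PI controller
    acting on a first-order plant gain/(tau*s + 1), via Euler integration."""
    y = integral = iae = 0.0
    for _ in range(int(t_end / dt)):
        error = 1.0 - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += dt * (gain * u - y) / tau   # plant dynamics
        iae += abs(error) * dt
    return iae

def tune_under_uncertainty(candidates, sampled_plants):
    """Pick the gains with the best mean IAE over the sampled plant models,
    so that imprecise knowledge of (gain, tau) is baked into the choice."""
    return min(candidates,
               key=lambda c: sum(step_iae(c[0], c[1], g, t)
                                 for g, t in sampled_plants) / len(sampled_plants))

best = tune_under_uncertainty([(0.5, 0.1), (2.0, 0.5)],
                              [(1.0, 2.0), (1.2, 1.8)])
```

Averaging the cost over sampled plants is the same uncertainty-incorporating idea as in the paper, though the neural-network methods avoid re-simulating at tuning time, which is what makes them cheap online.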


Author(s):  
Luis Díaz ◽  
Isabel Álava ◽  
Jon Makibar ◽  
Ruth Fernández ◽  
Fernando Cueva ◽  
...  

Spouted beds have widespread application in the processing industry for the efficient contacting of large particles with a gas. However, there is no detailed understanding of the complex behaviour of these systems, especially for fine particles, which leads to significant scale-up problems in industry. This paper approaches fundamental aspects of the computational fluid dynamic (CFD) simulation of fine-particle spouting. Using the commercial CFD simulation package Fluent (version 6.3), the spouting hydrodynamics of fine particles in a conical spouted bed is simulated and compared with experimental data. The Fluent code offers a variety of models to describe the physical phenomena occurring in these kinds of reactors. In some cases the choice is straightforward, whereas in other cases more than one option is valid a priori, and so the best one has to be selected. The main choices are the Lun et al. (1984) approach for granular kinetics and the Gidaspow model for drag force. In order to validate this selection process, model predictions are compared with experimentally observed hydrodynamic patterns, whereby the CFD model parameters can be tuned. It has been shown that, after this tuning, the model explains the hydrodynamic behaviour of the bed and the influence of geometric parameters and bed material (sand, glass beads) on spout stability and minimum spouting velocity. Nevertheless, the peak pressure drop values predicted by the model are considerably smaller than the experimental values. The model also provides reasonable predictions of spout shape, local bed voidage, and air and particle velocities.
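The Gidaspow drag model chosen above is a standard piecewise correlation: the Ergun equation in dense regions and the Wen and Yu correlation in dilute ones. The sketch below gives the textbook form of the gas-solid momentum exchange coefficient; the property values in the example (roughly 1 mm sand in air) are illustrative, not those of the paper's beds.

```python
def gidaspow_drag(eps_gas, rho_gas, mu_gas, d_particle, slip_speed):
    """Gas-solid momentum exchange coefficient K_gs [kg/(m^3 s)]:
    Ergun equation for eps_gas < 0.8, Wen and Yu correlation otherwise
    (the Gidaspow switching scheme)."""
    eps_solid = 1.0 - eps_gas
    if eps_gas < 0.8:  # dense regime: Ergun equation
        return (150.0 * eps_solid ** 2 * mu_gas / (eps_gas * d_particle ** 2)
                + 1.75 * eps_solid * rho_gas * slip_speed / d_particle)
    # dilute regime: Wen and Yu correlation
    re = eps_gas * rho_gas * d_particle * slip_speed / mu_gas
    cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687) if re < 1000.0 else 0.44
    return (0.75 * cd * eps_solid * eps_gas * rho_gas * slip_speed
            / d_particle * eps_gas ** -2.65)

# Illustrative numbers: ~1 mm sand, air properties, 1 m/s slip velocity.
k_dense = gidaspow_drag(0.50, 1.2, 1.8e-5, 1.0e-3, 1.0)
k_dilute = gidaspow_drag(0.95, 1.2, 1.8e-5, 1.0e-3, 1.0)
```

The coefficient is orders of magnitude larger in the dense annulus than in the dilute spout, which is why the drag-model choice dominates the predicted spout shape and voidage fields that the paper validates against experiment.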

