Modeling the Melting and Dissolution Stages During Thermal Processing of Intermetallic Coatings from Layered Precursors

2005 ◽  
Vol 127 (1) ◽  
pp. 148-156 ◽  
Author(s):  
Marios Alaeddine ◽  
Rajesh Ranganathan ◽  
Teiichi Ando ◽  
Charalabos C. Doumanidis

This paper presents a simple analytical model of the dynamic temperature and concentration distributions during thermal processing of intermetallic and metal-matrix composite coatings, such as nickel aluminide coatings on steel substrates, by melting preplated layers, e.g., aluminum/nickel, with a moving heat source such as a plasma arc. Such a source of Gaussian power distribution scans the surface of the coating, giving rise to the temperature evolution and component dissolution during the thermal melting and reaction process. The model is based on a system of lumped energy and mass balances and convolution expressions of distributed temperature and concentration Green’s fields (accounting for the orientation of their gradient and decomposing heat and mass transfer across the coating from substrate conduction), and is solved numerically in real time. The simulation results are validated on Ni–Al coatings processed using a robotic plasma arc laboratory station, through in-process infrared thermal sensing and off-line metallographic analysis. It is shown that the predicted temperature and dissolution penetration values compare well with the experimentally obtained results, therefore supporting the model as a real-time basis for design and/or adaptation of a thermal control system for the coating process.
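
The Green's-field convolution the abstract describes lends itself to a short numerical illustration. The sketch below is a minimal, assumption-laden example rather than the authors' model: it superposes instantaneous point-source solutions for a semi-infinite body with an adiabatic surface under a moving Gaussian power distribution, and the power, scan speed, distribution radius, and material constants are illustrative placeholders.

```python
import numpy as np

# Minimal sketch: transient temperature rise from a moving Gaussian surface
# heat source on a semi-infinite body, built by convolving the instantaneous
# point-source Green's function with the Gaussian power distribution over time.
# All material and process values below are illustrative placeholders, not the
# parameters used in the paper.

def temperature_rise(x, y, z, t, Q=1500.0, v=5e-3, sigma=2e-3,
                     alpha=1.2e-5, rho_c=3.6e6, n_tau=2000):
    """Temperature rise (K) at point (x, y, z) [m] at time t [s]."""
    tau = np.linspace(0.0, t, n_tau, endpoint=False)  # avoid the tau = t singularity
    dt = t - tau
    spread = 4.0 * alpha * dt + 2.0 * sigma**2        # in-plane spreading of source + diffusion
    planar = np.exp(-((x - v * tau)**2 + y**2) / spread) / spread
    depth = np.exp(-z**2 / (4.0 * alpha * dt)) / np.sqrt(4.0 * np.pi * alpha * dt)
    kernel = 2.0 * Q / (rho_c * np.pi) * planar * depth
    return np.trapz(kernel, tau)

# Example: temperature rise 1 mm below the surface, near the scan line, 2 s into the scan
print(temperature_rise(x=8e-3, y=0.0, z=1e-3, t=2.0))
```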

2002 ◽  
Vol 750 ◽  
Author(s):  
Tarek Alaeddine ◽  
Rajesh Ranganathan ◽  
Teiichi Ando ◽  
Charalabos C. Doumanidis ◽  
Peter Y. Wong

Nickel aluminide coatings were produced on steel substrates by reactive thermal processing of pre-plated precursor layers of nickel and aluminum, using a plasma arc as the heat source. Controlled rapid heating melted the outer aluminum layer, which then dissolved nickel to facilitate the nucleation and growth of a nickel aluminide. The resultant coating microstructures ranged from a duplex structure of NiAl3 and a eutectic, or a triplex structure of Ni2Al3, NiAl3 and a eutectic, to a fully monolithic Ni2Al3 structure, the latter resulting at high heat input rates and/or low heat-source traverse rates. The temperature of the reaction layer was simulated for the experimental conditions by a numerical model based on Green's function analysis. The nickel concentration at the liquid-solid interface just before any nickel aluminide nucleation was calculated by assuming local equilibrium interface conditions between the liquid layer and the fcc nickel-rich solution. The depth of nickel dissolution, which consequently determines the extent of nickel aluminide growth, was also predicted by the model. Numerical results of the nickel dissolution compared well with experimental observations.
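
As a rough illustration of the local-equilibrium bookkeeping described above, the sketch below reads an interface composition from a hypothetical linearized liquidus and converts an assumed mean liquid composition into an equivalent nickel dissolution depth via a per-unit-area mass balance. The liquidus coefficients, densities, and layer thickness are placeholders, not values taken from the Ni-Al phase diagram or from this work.

```python
# Minimal sketch of local-equilibrium interface bookkeeping: the liquid
# composition at the interface is read from a linearized (purely illustrative)
# liquidus, and a lumped mass balance converts an assumed mean liquid
# composition into an equivalent dissolved-nickel depth.

def liquidus_ni_fraction(T_interface, T_melt_al=933.0, slope=2.5e-4):
    """Hypothetical linearized liquidus: Ni mass fraction in the liquid at T [K]."""
    return max(0.0, slope * (T_interface - T_melt_al))

def dissolved_ni_depth(h_al, w_ni, rho_al=2380.0, rho_ni=8900.0):
    """Nickel thickness consumed to bring a liquid Al layer of thickness h_al [m]
    to a mean Ni mass fraction w_ni (per-unit-area mass balance)."""
    m_al = rho_al * h_al                 # Al mass per unit area
    m_ni = m_al * w_ni / (1.0 - w_ni)    # Ni mass needed to reach fraction w_ni
    return m_ni / rho_ni                 # equivalent solid-Ni depth

w = liquidus_ni_fraction(T_interface=1150.0)        # interface composition
print(w, dissolved_ni_depth(h_al=50e-6, w_ni=w))    # dissolution depth in metres
```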


2006 ◽  
Vol 129 (1) ◽  
pp. 56-65 ◽  
Author(s):  
Marios Alaeddine ◽  
Rajesh Ranganathan ◽  
Teiichi Ando ◽  
Charalabos C. Doumanidis

Successful fabrication of intermetallic coatings on surfaces of manufacturing interest involves regulation of the dynamic temperature/concentration distributions that develop in the molten layer during the thermal and reaction process. Modeling the spatio-temporal dynamics of this metallurgical process, however, requires partial differential equations that are cumbersome to solve on-line as part of a real-time reference model for the controller. To this end, we present a computationally parallel and meshless model (i.e., decoupled and capable of being solved numerically in real time) to decipher the dynamics of the thermal coating process and to permit real-time monitoring and control of the resulting coating microstructure. The analytical model is based on kinetic growth theories, lumped energy and mass balances, and convolution expressions of distributed temperature and concentration Green’s fields (accounting for the orientation of their gradient and decomposing heat and mass transfer across the coating from substrate conduction). The model is validated with nickel aluminide coatings processed on a robotic plasma arc laboratory station, through in-process infrared thermal sensing and off-line metallographic analysis. A Monte Carlo sample control scheme, which involves on-line parameter identification and model adaptation, is also developed using the model as an in-process observer for successful production of binary metal system coatings that exhibit the desired microstructure geometry and characteristics.
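
The Monte Carlo flavor of on-line parameter identification mentioned above can be sketched as follows. The forward model, the identified parameter (a lumped absorption efficiency), and all numbers are stand-ins for illustration only; the paper's observer uses its Green's-field model rather than this toy relation.

```python
import numpy as np

# Minimal sketch of Monte Carlo parameter identification: candidate values of
# an uncertain model parameter are sampled, the reference model is run for each
# sample, and the sample whose predicted surface temperature best matches the
# infrared measurement is used to adapt the model.

rng = np.random.default_rng(0)

def forward_model(efficiency, power=1500.0, gain=0.45):
    """Stand-in reference model: predicted peak surface temperature rise (K)."""
    return gain * efficiency * power

def identify_efficiency(measured_dT, n_samples=500, prior=(0.3, 0.9)):
    samples = rng.uniform(*prior, size=n_samples)        # candidate efficiencies
    errors = np.abs(forward_model(samples) - measured_dT)
    return samples[np.argmin(errors)]                    # best-matching sample

eta = identify_efficiency(measured_dT=420.0)
print(f"adapted efficiency ~ {eta:.3f}")
```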


1995 ◽  
Vol 117 (4) ◽  
pp. 625-632 ◽  
Author(s):  
C. C. Doumanidis

A variety of geometric, material structure, and stress/distortion attributes are needed to characterize the quality of thermally manufactured products. Because of in-process sensing difficulties and transportation lags, these features must be regulated in real time through appropriate thermal outputs, measured by non-contact infrared pyrometry. In thermal processes with a localized, sequentially moving heat source, the necessary heat input distribution on the part surface is supplied by an innovative timeshared or scanned torch modulation, in a raster or vector pattern. A unified lumped multivariable and a distributed-parameter quasilinear modeling formulation provide a design methodology and real-time reference for the development of finite- or infinite-state adaptive thermal control systems. These controllers modulate the power and motion of a single torch, supplying distinct concentrated heat inputs or a continuous power distribution on the part surface, so as to obtain the specified thermal characteristics or the entire temperature field. These regulation strategies are computationally tested and implemented experimentally in arc welding, but their applicability can be extended to a variety of thermal manufacturing processes.
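
As a simple illustration of the time-shared (scanned) torch modulation idea, the sketch below divides one scan period into dwell times proportional to the heat input requested at each raster point, so that the torch sequence approximates a distributed heat input. The requested distribution and the period are illustrative placeholders, not the controller described in the paper.

```python
import numpy as np

# Minimal sketch of time-shared torch modulation: within one scan period a
# single torch visits a set of raster points, and the dwell time at each point
# is made proportional to the heat input the reference model requests there.

def dwell_times(requested_heat, period=1.0):
    """Split one scan period into per-point dwell times proportional to
    the requested heat input at each raster point."""
    weights = np.clip(np.asarray(requested_heat, dtype=float), 0.0, None)
    return period * weights / weights.sum()

# Example: five raster points with higher heat demand at the weld centerline
print(dwell_times([0.5, 1.0, 2.0, 1.0, 0.5], period=0.8))  # seconds per point
```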


Author(s):  
Amarjeet Jhajharia ◽  
Uma Kumari ◽  
Nitesh Chouhan ◽  
Yogesh Meena

Background: Electricity theft is a major problem for governments. It affects the Indian economy, since theft reduces GDP, and it burdens ordinary consumers, who indirectly pay for theft committed by others in the form of extra charges. Methods: In this paper, we present a novel identification pattern-based energy fault detector that leverages the customers' normal and faulty line readings. The goal is to implement a system that monitors meter readings and detects faults in the power line in real time, while making manipulation of meter readings impossible. Results: The system monitors meters in real time. Readings can be provided to customers on a daily, weekly, monthly, or yearly basis. As soon as a fault occurs it can be rectified, because detection happens in real time and a message about the fault can be sent to the customer. Conclusion: The developed real-time monitoring system is compared with GSM meters and analog meters.
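
A minimal sketch of the kind of comparison such a line-monitoring detector can make is shown below: energy injected at the feeder is compared with the sum of downstream consumer readings over the same interval, and a persistent gap beyond an allowance for technical losses flags a possible fault or theft. The readings, tolerance, and notification hook are assumptions for illustration, not the detector presented in the paper.

```python
# Minimal sketch of a feeder-versus-consumers energy balance check for one
# metering interval; a gap larger than the technical-loss allowance is flagged.

def check_interval(feeder_kwh, consumer_kwh, loss_tolerance=0.05):
    """Return (is_suspect, mismatch_kwh) for one metering interval."""
    delivered = sum(consumer_kwh)
    mismatch = feeder_kwh - delivered
    is_suspect = mismatch > loss_tolerance * feeder_kwh
    return is_suspect, mismatch

def notify_customer(message):
    # Placeholder for the SMS/app notification mentioned in the abstract.
    print("ALERT:", message)

suspect, gap = check_interval(feeder_kwh=120.0, consumer_kwh=[30.2, 28.9, 41.5])
if suspect:
    notify_customer(f"Possible line fault or theft: {gap:.1f} kWh unaccounted for")
```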


Energies ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 593
Author(s):  
Moiz Muhammad ◽  
Holger Behrends ◽  
Stefan Geißendörfer ◽  
Karsten von Maydell ◽  
Carsten Agert

With increasing changes in the contemporary energy system, it becomes essential to test autonomous control strategies for distributed energy resources in a controlled environment to investigate power grid stability. The power hardware-in-the-loop (PHIL) concept is an efficient approach for such evaluations, in which a virtually simulated power grid is interfaced to a real hardware device. This strongly coupled software-hardware system introduces obstacles that need attention for smooth operation of the laboratory setup to validate robust control algorithms for decentralized grids. This paper presents a novel methodology and its implementation to develop a test bench for real-time PHIL simulation of a typical power distribution grid, in order to study the dynamic behavior of real power components in connection with the simulated grid. Hybrid simulation in a single software environment is used to model the power grid, which obviates the need to simulate the complete grid with a lower discretized sample time. As an outcome, an environment is established interconnecting the virtual model to the real-world devices. The inaccuracies linked to the power components are examined at length, and consequently a suitable compensation strategy is devised to improve the performance of the hardware under test (HUT). Finally, the compensation strategy is also validated through a simulation scenario.
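
A minimal sketch of a PHIL interface loop with a simple feedback compensation is given below: the simulated grid voltage is commanded to the amplifier, and the measured current of the hardware under test is low-pass filtered and gain-corrected before being fed back into the grid model. The filter, gains, and I/O stubs are assumptions, not the compensation strategy developed in the paper.

```python
# Minimal sketch of one PHIL coupling step: command the amplifier with the
# simulated node voltage, measure the HUT current, and return a filtered,
# gain-corrected value for injection into the grid model.

class FeedbackCompensator:
    """First-order low-pass filter with a gain correction on the HUT current."""
    def __init__(self, dt, tau=1e-3, gain=1.0):
        self.alpha = dt / (tau + dt)   # discrete filter coefficient
        self.gain = gain
        self.state = 0.0

    def update(self, measured_current):
        self.state += self.alpha * (measured_current - self.state)
        return self.gain * self.state

def phil_step(grid_voltage, write_amplifier, read_hut_current, comp):
    """One real-time step of the coupling between grid model and hardware."""
    write_amplifier(grid_voltage)      # drive the HUT with the simulated voltage
    i_meas = read_hut_current()        # measure the HUT response
    return comp.update(i_meas)         # compensated current back to the grid model

# Example with dummy I/O stubs standing in for the real amplifier/sensor drivers
comp = FeedbackCompensator(dt=1e-4)
i_fb = phil_step(230.0, write_amplifier=lambda v: None,
                 read_hut_current=lambda: 4.8, comp=comp)
```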


2021 ◽  
Author(s):  
Ramien Sereshk

It is commonly assumed that the persistence model, using day-old monitoring results, will provide accurate estimates of real-time bacteriological concentrations in beach water. However, the persistence model frequently provides incorrect results. This study (1) develops a site-specific predictive model based on factors significantly influencing water quality at Beachway Park, and (2) determines the feasibility of using that model to accurately predict near-real-time E. coli levels. The site-specific predictive model developed for Beachway Park was evaluated and the results were compared to the persistence model. This critical performance evaluation helped to identify the inherent inaccuracy of the persistence model for Beachway Park, which renders it an unacceptable approach for safeguarding public health from recreational water-borne illnesses. The persistence model, supplemented with a site-specific predictive model, is recommended as a feasible method to accurately predict bacterial levels in water on a near-real-time basis.
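
To make the contrast concrete, the sketch below compares the persistence model, which simply reuses yesterday's count, with a hypothetical site-specific log-linear regression on local covariates (rainfall, turbidity, and water temperature are assumed examples). The coefficients and data are illustrative, not fitted to Beachway Park observations.

```python
# Minimal sketch contrasting the persistence model with a site-specific
# regression model for near-real-time E. coli prediction.

def persistence_model(yesterday_count):
    """Persistence model: today's prediction is yesterday's measured count."""
    return yesterday_count

def site_specific_model(rain_mm, turbidity_ntu, water_temp_c,
                        coeffs=(1.2, 0.04, 0.015, -0.02)):
    """Hypothetical log-linear model: log10(count) = b0 + b1*rain + b2*turb + b3*temp."""
    b0, b1, b2, b3 = coeffs
    log_count = b0 + b1 * rain_mm + b2 * turbidity_ntu + b3 * water_temp_c
    return 10.0 ** log_count

print(persistence_model(yesterday_count=310))                                   # CFU/100 mL
print(site_specific_model(rain_mm=18.0, turbidity_ntu=12.0, water_temp_c=21.0)) # CFU/100 mL
```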


Author(s):  
Prajwal Chandrakant Sapkal

In this project, we present a sleep detection alarm system that monitors the driver through real-time surveillance, alerts the driver, and, when necessary, posts the data to a remote location via a cloud platform. The device is built with a Raspberry Pi, the OpenCV library, and a camera module, and the software is written in Python. Its main software component is a pretrained facial landmark detector: Dlib's detector locates 68 points on the human face, which allows the various facial structures to be extracted with simple Python array slices. The facial landmarks of a fully closed eye and a fully open eye are first plotted; this data is then processed and tested to characterize the driver's alertness. Once the landmarks associated with an eye are located, the Eye Aspect Ratio (EAR) algorithm is applied. The eye aspect ratio indicates whether the driver has closed their eyes, become distracted from driving, or yawned. The algorithm begins by localizing the facial landmarks in real time, then monitors the eye aspect ratio to determine whether the eyes are closed or nearly closed, which indicates that the driver is falling asleep. An alarm is raised if the eye aspect ratio stays below a predefined threshold for a sufficiently long time; the alarm is loud enough to wake the driver and restore their attention. At the same time, data is passed to the remote location through the cloud whenever necessary.
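
The Eye Aspect Ratio check named above has a well-known form, EAR = (||p2 - p6|| + ||p3 - p5||) / (2 ||p1 - p4||), computed over the six landmarks of each eye (indices 36-41 and 42-47 in Dlib's 68-point model). The sketch below is a minimal per-frame implementation; the threshold and frame count are illustrative tuning values, not the ones used in the project.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of one eye's landmarks, ordered p1..p6."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

class DrowsinessMonitor:
    """Raises the alarm when EAR stays below a threshold for enough frames."""
    def __init__(self, ear_threshold=0.25, closed_frames_limit=48):
        self.ear_threshold = ear_threshold              # below this, the eye counts as closed
        self.closed_frames_limit = closed_frames_limit  # ~2 s at 24 fps
        self.closed_frames = 0

    def update(self, left_eye, right_eye):
        """Call once per video frame; returns True when the alarm should sound."""
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        self.closed_frames = self.closed_frames + 1 if ear < self.ear_threshold else 0
        return self.closed_frames >= self.closed_frames_limit

# With Dlib's 68-point shape as a NumPy array: right_eye = shape[36:42], left_eye = shape[42:48]
```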


1988 ◽  
Vol 4 (03) ◽  
pp. 197-215
Author(s):  
Richard L. DeVries

The use of computers to improve the productivity of U.S. shipyards has never been as successful as hoped for by the designers. Many applications were simply the conversion of an existing process to a computerized process. The manufacturing database required for the successful application of computer-aided process planning (CAPP) to the shipyard environment requires a "back-to-basics" approach, one that can lead to control of the processes occurring in the fabrication and assembly shops of a shipyard. The manufacturing database will not provide management feedback designed for the financial segment of the shipyard (although it can be converted to be fully applicable): it provides "real-time" manufacturing data that the shop floor manager can utilize in his day-to-day decisions, not historical data on how his shop did last week or last month. The computer is only a tool to be used to organize the mountains of manufacturing data into useful information for today's shop manager on a "real time" basis. The use of group technology to collect similar products, the use of parameters to clearly identify work content, the use of real-time efficiency rates to project capacity and realistic schedules, and the use of bar codes to input "real time" data are all tools that are part of the process—tools for the shop floor manager of tomorrow.


2013 ◽  
pp. 171-180 ◽  
Author(s):  
Rajesh Ranganathan ◽  
Olga Vayena ◽  
Teiichi Ando ◽  
Charalabos C. Doumanidis ◽  
Craig A. Blue
