ZTC bias point of advanced fin based device: The importance and exploration

2015 ◽  
Vol 28 (3) ◽  
pp. 393-405 ◽  
Author(s):  
Sushanta Mohapatra ◽  
Kumar Pradhan ◽  
Prasanna Sahu

This work evaluates and resolves the temperature compensation point (TCP), or zero temperature coefficient (ZTC) point, of a sub-20 nm FinFET. The sensitivity of the geometry parameters to assorted performance figures of the fin-based device, and its reliability over a wide temperature range (25 °C to 225 °C), is reviewed to extend the benchmark of device scalability. The impact of fin height (HFin), fin width (WFin), and temperature (T) on key performance metrics, including the on-off ratio (Ion/Ioff), transconductance (gm), gain (AV), cut-off frequency (fT), static power dissipation (PD), energy (E), energy-delay product (EDP), and sweet spot (gmfT/ID) of the FinFET, is successfully studied using the commercially available TCAD simulator Sentaurus™ from Synopsys Inc.
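The ZTC extraction described above can be illustrated numerically: sweep ID-VGS at two temperatures and find the gate bias where the curves cross, i.e. where mobility degradation and threshold-voltage reduction with temperature cancel. The sketch below uses a toy square-law model with made-up parameters (vth0, k, alpha, n), not data from the paper.

```python
import numpy as np

def drain_current(vgs, T, vth0=0.4, k=2e-4, alpha=1e-3, n=1.5):
    """Toy square-law model: Vth drops and mobility degrades with temperature.
    All parameters are illustrative, not fitted to a real FinFET."""
    T0 = 300.0
    vth = vth0 - alpha * (T - T0)      # threshold voltage decreases with T
    mu_factor = (T0 / T) ** n          # mobility degradation with T
    vov = np.maximum(vgs - vth, 0.0)
    return k * mu_factor * vov ** 2

def ztc_point(vgs_grid, T1, T2):
    """Estimate the ZTC gate bias as the crossing of two I-V curves."""
    i1 = drain_current(vgs_grid, T1)
    i2 = drain_current(vgs_grid, T2)
    on = (i1 > 0) & (i2 > 0)           # only biases where both curves conduct
    idx = np.argmin(np.abs(i1[on] - i2[on]))
    return vgs_grid[on][idx]

vgs = np.linspace(0.0, 1.2, 2401)
v_ztc = ztc_point(vgs, 300.0, 450.0)   # bias where ID is ~temperature-independent
```

Biasing at v_ztc keeps the drain current nearly constant across the two temperatures, which is the practical appeal of the ZTC point for analog design.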

2021 ◽  
Author(s):  
Aruna Kumari Neelam ◽  
Prithvi P

Abstract The Nanosheet Field Effect Transistor (NSFET) is a viable contender for further scaling at sub-7 nm technology nodes. This paper provides insights into the variation of DC figures of merit (FOMs) across different geometrical configurations of the NSFET. The DC performance of a 3D GAA NSFET is analyzed by varying the width and thickness of the device. Moreover, the gate length is scaled from 20 nm to 5 nm to check the device's suitability for logic applications. The thickness and width of each nanosheet are varied in the ranges of 5 to 9 nm and 10 to 50 nm, respectively, to analyse the dependence of performance on device geometry. The impact of NSFET geometry on various DC performance metrics, such as the transfer characteristics, sub-threshold swing (SS), on current (ION), off current (IOFF), switching ratio (ION/IOFF), threshold voltage (Vth), and drain-induced barrier lowering (DIBL), is studied. In addition, the device's electrical characteristics are analyzed over a wide temperature range, from −43 °C to 127 °C, to identify the temperature compensation point, which is observed at VGS = 0.55 V and ID = 3.86 × 10−6 A. Furthermore, the effect of an important process parameter, work function variation, on the transfer characteristics of the device is analyzed. The analyses indicate that, at sub-7 nm nodes, the NSFET is a promising device for high-performance logic applications.
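As a rough illustration of two of the DC metrics named above, the snippet below extracts the sub-threshold swing from an ID-VGS sweep and computes DIBL from threshold voltages at two drain biases. The synthetic curve and the Vth values are invented for the example, not taken from the paper.

```python
import numpy as np

def subthreshold_swing(vgs, id_a):
    """SS in mV/dec from the steepest subthreshold slope of an ID-VGS sweep."""
    slope = np.gradient(np.log10(id_a), vgs)   # decades per volt
    return 1000.0 / slope.max()                # mV per decade

def dibl(vth_low_vd, vth_high_vd, vd_low=0.05, vd_high=0.7):
    """DIBL in mV/V from threshold voltages at low and high drain bias."""
    return 1000.0 * (vth_low_vd - vth_high_vd) / (vd_high - vd_low)

# Synthetic subthreshold curve with an ideal 60 mV/dec slope (room temperature)
vgs = np.linspace(0.0, 0.3, 301)
id_a = 1e-12 * 10 ** (vgs / 0.060)
ss = subthreshold_swing(vgs, id_a)             # -> 60 mV/dec for this curve
d = dibl(0.30, 0.25)                           # illustrative Vth values
```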


2013 ◽  
Vol 569-570 ◽  
pp. 1132-1139 ◽  
Author(s):  
Thomas Siebel ◽  
Mihail Lilov

The sensitivity of the electromechanical impedance to structural damage under varying temperature is investigated in this paper. An approach based on maximizing cross-correlation coefficients is used to compensate for temperature effects. The experiments are carried out on an aircraft-grade carbon fiber reinforced plastic (CFRP) panel (500 mm x 500 mm x 5 mm) instrumented with 26 piezoelectric transducers of two different sizes. In a first step, the panel is subjected stepwise to temperatures between −50 °C and 100 °C. The influence of varying temperature on the measured impedances and the capability of the temperature compensation approach are analyzed. Next, the sensitivity to a 200 J impact damage is analyzed and set in relation to the influence of a temperature change. The impact of transducer size and location on the quality of the damage detection becomes apparent. The results further indicate a significant influence of temperature on the measured spectra. However, applying the temperature compensation algorithm can reduce the temperature effect while at the same time increasing the transducer sensitivity within its measuring area. The paper concludes with a discussion of the trade-off between the sensing area, in which damage should be detected, and the temperature range, in which damage within this area can reliably be detected.
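The cross-correlation-based compensation can be sketched as follows: shift the measured impedance spectrum sample by sample and keep the shift that maximizes the correlation coefficient with the baseline spectrum. This is a simplified stand-in for the authors' method, demonstrated on synthetic Gaussian "resonance peaks" rather than real impedance data.

```python
import numpy as np

def best_shift(baseline, measured, max_shift=50):
    """Return the sample shift of `measured` that maximizes the correlation
    coefficient with `baseline`, plus that coefficient."""
    n = len(baseline)
    best_s, best_r = 0, -2.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(measured, s)
        lo, hi = max_shift, n - max_shift      # ignore wrapped-around edges
        r = np.corrcoef(baseline[lo:hi], shifted[lo:hi])[0, 1]
        if r > best_r:
            best_r, best_s = r, s
    return best_s, best_r

# Synthetic spectra: a resonance peak moved by temperature (~20 samples)
f = np.linspace(0.0, 1.0, 1000)
baseline = np.exp(-((f - 0.50) / 0.02) ** 2)
measured = np.exp(-((f - 0.52) / 0.02) ** 2)
shift, r = best_shift(baseline, measured)
```

After compensation the residual difference between spectra reflects damage rather than temperature, which is the trade-off the paper quantifies.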


Author(s):  
Peter Gloeckner ◽  
Klaus Dullenkopf ◽  
Michael Flouros

Operating conditions in high-speed mainshaft ball bearings used in new aircraft propulsion systems require enhanced bearing designs and materials. Rotational speeds, loads, and demands on thrust capability and reliability have increased continuously in recent years. A consequence of these more severe operating conditions is increased bearing temperatures. A state-of-the-art jet engine high-speed ball bearing has been modified with an oil channel in the outer diameter of the bearing. This oil channel provides direct cooling of the outer ring. Rig testing under typical flight conditions has been performed to investigate the cooling efficiency of the outer-ring oil channel. In this paper the experimental results, including the bearing temperature distribution, power dissipation, and bearing oil pumping, together with the impact on oil mass and parasitic power loss reduction, are presented.


Author(s):  
B.T. Krishna ◽  
Shaik Mohaseena Salma

A flux-controlled memristor using a complementary metal–oxide–semiconductor (CMOS) structure is presented in this study. The proposed circuit provides higher power efficiency, less static power dissipation, and a smaller area, and can also operate from a reduced power supply, using 90 nm CMOS technology. The circuit is implemented using a second-generation current conveyor (CCII) and an operational transconductance amplifier (OTA) with a few passive elements. The proposed circuit uses a current-mode approach, which improves the high-frequency performance. Reducing the power supply is a crucial aspect of decreasing power consumption in VLSI. The proposed emulator operates well in both incremental and decremental configurations up to 26.3 MHz on the Cadence Virtuoso platform (gpdk, 90 nm CMOS technology). The proposed memristor circuit has very little static power dissipation when operating from a ±1 V supply. Transient, memductance, and DC analysis simulations are verified experimentally using an ideal memristor built from the ICs AD844AN and CA3080 in Multisim; the experimental results confirm the theoretical simulations and are discussed.
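The ideal flux-controlled memristor such an emulator mimics obeys i = W(φ)·v with dφ/dt = v, so its i-v loop is pinched at the origin. A minimal numerical sketch, with an assumed linear memductance W(φ) = w0 + k·φ and illustrative parameter values rather than the paper's circuit values:

```python
import numpy as np

def simulate_memristor(t, v, w0=1e-4, k=0.5):
    """Flux-controlled memristor: i = W(phi) * v, W(phi) = w0 + k * phi.
    Flux is the running integral of voltage (trapezoidal rule)."""
    phi = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t))))
    return (w0 + k * phi) * v, phi

# Drive with two periods of a 1 kHz sine
t = np.linspace(0.0, 2e-3, 20001)
v = np.sin(2 * np.pi * 1e3 * t)
i, phi = simulate_memristor(t, v)
```

The current is zero whenever the voltage is zero (the pinched hysteresis signature), and the flux returns to zero after each full period of the sinusoidal drive.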


Author(s):  
Anna Ferrante ◽  
James Boyd ◽  
Sean Randall ◽  
Adrian Brown ◽  
James Semmens

ABSTRACT
Objectives: Record linkage is a powerful technique which transforms discrete episode data into longitudinal person-based records. These records enable the construction and analysis of complex pathways of health and disease progression, and of service use. Achieving high linkage quality is essential for ensuring the quality and integrity of research based on linked data. The methods used to assess linkage quality will depend on the volume and characteristics of the datasets involved, the processes used for linkage, and the additional information available for quality assessment. This paper proposes and evaluates two methods to routinely assess linkage quality.
Approach: Linkage units currently use a range of methods to measure, monitor and improve linkage quality; however, no common approach or standards exist. There is an urgent need to develop "best practices" in evaluating, reporting and benchmarking linkage quality. In assessing linkage quality, the primary interest is in knowing the number of true matches and non-matches identified as links and non-links. Any misclassification of matches within these groups introduces linkage errors. We present efforts to develop sharable methods to measure linkage quality in Australia. These include a sampling-based method to estimate both precision (accuracy) and recall (sensitivity) following record linkage, and a benchmarking method: a transparent and transportable methodology to benchmark the quality of linkages across different operational environments.
Results: The sampling-based method achieved estimates of linkage quality that were very close to the actual linkage quality metrics. This method presents a feasible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The benchmarking method provides a systematic approach to estimating linkage quality with a set of open and shareable datasets and a set of well-defined, established performance metrics. The method provides an opportunity to benchmark the linkage quality of different record linkage operations. Both methods have the potential to assess the inter-rater reliability of clerical reviews.
Conclusions: Both methods produce reliable estimates of linkage quality, enabling the exchange of information within and between linkage communities. It is important that researchers can assess risk in studies using record linkage techniques. Understanding the impact of linkage quality on research outputs highlights the need for standard methods to routinely measure linkage quality. These two methods provide a good start to the quality process, but it is important to identify standards and good practices in all parts of the linkage process (pre-processing, standardising activities, linkage, grouping and extracting).
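The sampling-based precision/recall estimation can be sketched in a few lines: draw a random sample of declared links and check them against ground truth for precision, and a random sample of known true matches for recall. The toy record-pair data below is invented purely for illustration.

```python
import random

def estimate_precision_recall(links, true_matches, sample_size=1000, seed=42):
    """Sampling-based estimates: precision from a random sample of declared
    links, recall from a random sample of known true matches.
    Both inputs are sets of record-pair IDs."""
    rng = random.Random(seed)
    link_sample = rng.sample(sorted(links), min(sample_size, len(links)))
    match_sample = rng.sample(sorted(true_matches), min(sample_size, len(true_matches)))
    precision = sum(p in true_matches for p in link_sample) / len(link_sample)
    recall = sum(p in links for p in match_sample) / len(match_sample)
    return precision, recall

# Toy data: 90 of 100 declared links are true; 90 of 95 true matches were found
true_matches = {(i, i) for i in range(95)}
links = {(i, i) for i in range(90)} | {(i, i + 1000) for i in range(10)}
precision, recall = estimate_precision_recall(links, true_matches)
```

In practice the "ground truth" for the sampled pairs would come from clerical review rather than a known set, and confidence intervals on the estimates follow from the sample size.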


In financial systems, the impact of free cash flow (FCF) on the performance of a company has been at the center of academic discourse in recent years. Several studies have tried to ascertain the nature and magnitude of the relationship between free cash flow and firm profitability, with conflicting results coming from different scholars. The main objective of this research work was to examine the impact of FCF on the profitability of quoted manufacturing firms on the Nigerian and Ghanaian stock exchanges. Data were pooled from twenty (20) different companies (ten each from Nigeria and Ghana) for a period of six years (2012–2017). A panel data estimation model was used to measure the impact of FCF and other performance metrics on the return on assets (ROA), our chosen profitability measure. The results show a positive but insignificant relationship between FCF and ROA for both Ghanaian and Nigerian manufacturing firms. Sales growth showed a positive impact on profitability in both countries, while leverage impacted profitability negatively, with the Ghanaian result significant at the 5% level. The implication of these findings is that it makes no business sense for companies to keep piling up excess funds beyond what is needed for transactional purposes. The similarity between the results from Ghana and Nigeria in most of the variables suggests that the findings of this study can be generalized to other countries. Based on the findings, we recommend that the management of companies strive to keep only the minimum needed free cash flow, while the rest is invested in other projects with positive net present value.
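The panel estimation step can be sketched as a pooled OLS regression of ROA on FCF, sales growth and leverage. The synthetic firm-year data and coefficient values below are invented to illustrate the mechanics only; they are not the study's estimates.

```python
import numpy as np

def pooled_ols(y, X):
    """Pooled OLS with an intercept: returns [const, beta_1, ..., beta_k].
    A minimal stand-in for a full panel estimation (no firm fixed effects)."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

# Synthetic panel: 20 firms x 6 years, ROA driven by growth (+) and leverage (-)
rng = np.random.default_rng(0)
n = 20 * 6
fcf = rng.normal(0.0, 1.0, n)
growth = rng.normal(0.0, 1.0, n)
leverage = rng.normal(0.0, 1.0, n)
roa = 0.05 + 0.01 * fcf + 0.3 * growth - 0.2 * leverage + rng.normal(0, 0.05, n)
beta = pooled_ols(roa, np.column_stack([fcf, growth, leverage]))
```

A pooled estimator treats all firm-years as one sample; a study would typically also test fixed- or random-effects specifications.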


2021 ◽  
Author(s):  
Haleh Khojasteh

The focus of this thesis is the problem of resource allocation in cloud datacenters under an Infrastructure-as-a-Service (IaaS) cloud model. We have investigated the behavior of IaaS cloud datacenters through detailed analytical and simulation models that cover the linear, transitional and saturated operation regimes. We have obtained accurate performance metrics such as task blocking probability, total delay, utilization and energy consumption. Our results show that the offered load alone does not completely characterize datacenter operation; therefore, in our evaluations, we have considered the impact of task arrival rate and task service time separately. To keep the cloud system in the linear operation regime, we have proposed several dynamic algorithms to control the admission of incoming tasks. In our first solution, task admission is based on the task blocking probability and predefined thresholds for the task arrival rate. The algorithms in our second solution are based on a full-rate task acceptance threshold and a filtering coefficient. Our results confirm that the proposed task admission mechanisms are capable of maintaining the stability of the cloud system under a wide range of input parameter values. Finally, we have developed resource allocation solutions for mobile clouds in which offloading requests from a mobile device can lead to the forking of new tasks in an on-demand manner. To address this problem, we have proposed two flexible resource allocation mechanisms with different prioritization: one in which forked tasks are given full priority over newly arrived ones, and another in which a threshold is established to control the priority. Our results demonstrate that the threshold-based priority scheme delivers better system performance than the full-priority scheme. Our proposed solution for clouds with mobile users can also be applied to other clouds whose users' applications fork new tasks.
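A blocking-probability-based admission rule of the kind described can be sketched with the classic Erlang-B formula, treating the datacenter as an M/M/N/N loss system. This is an illustrative approximation, not the thesis's exact model or thresholds.

```python
def erlang_b(offered_load, servers):
    """Erlang-B blocking probability, computed with the stable recurrence
    B(A, k) = A*B(A, k-1) / (k + A*B(A, k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def admit(arrival_rate, service_time, servers, p_block_max=0.01):
    """Admit new tasks only while the predicted blocking probability stays
    under the threshold; offered load = arrival rate x mean service time."""
    return erlang_b(arrival_rate * service_time, servers) <= p_block_max

# Lightly loaded: admit; heavily overloaded: reject new arrivals
ok = admit(10.0, 1.0, 64)      # 10 Erlangs on 64 servers
overload = admit(100.0, 1.0, 64)
```

Using both arrival rate and service time in the rule mirrors the thesis's observation that offered load alone does not fully characterize the system.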


This research paper compares the protocols' performance together with the experimental results of optimal routing using real-life scenarios of vehicles and pedestrians roaming in a city. Several simulation comparison experiments (in the NS2 software) are conducted to show the impact of changing the buffer capacity, packet lifetime, packet generation rate, and number of nodes on the performance metrics. The paper concludes by providing guidelines for developing an efficient DTN routing protocol. To the best of the researchers' (Parameswari et al.) knowledge, this work is the first to provide a detailed performance comparison among a diverse collection of DTN routing protocols.


