Shift Register Initialization in Scalar Replacement for Reducing Code Size

2020, Vol 13 (0), pp. 2-9
Author(s):  
Kenshu Seto
Author(s):  
Chikara HAMANAKA, Ryosuke YAMAMOTO, Jun FURUTA, Kanto KUBOTA, Kazutoshi KOBAYASHI, ...

2009, Vol 28 (10), pp. 2704-2706
Author(s):  
Chao MA, Yu-zhen LU

Author(s):  
A. Suresh Babu ◽  
B. Anand

A Linear Feedback Shift Register (LFSR) feeds a linear function of its previous state, typically an XOR of selected bits, back as the input to the current state. This paper describes in detail recent Wireless Communication Systems (WCS) and the techniques related to LFSRs. Cryptographic methods and reconfigurable computing are two distinct applications served by the proposed shift register, which offers improved speed and decreased power consumption. Compared with the existing individual applications, the proposed shift register achieves between 15% and 45% lower power consumption with 30% less area. This low-power, high-speed LFSR design therefore suits a range of low-power, high-speed applications such as wireless communication. The entire design architecture is simulated and verified in VHDL. For synthesis, a 0.7 µm CMOS standard cell library is used, and a custom design tool was developed to measure power. The results show that cryptographic efficiency improves in both time and complexity compared with existing algorithms. Hence, the proposed LFSR architecture is suitable for wireless applications owing to its parallel processing, multiple access, and cryptographic methods.
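
The feedback principle in the first sentence is easy to make concrete in software. Below is a minimal sketch of a 16-bit Fibonacci LFSR in Python; the tap positions (16, 14, 13, 11, i.e., the polynomial x^16 + x^14 + x^13 + x^11 + 1) are a common maximal-length choice used purely for illustration and are not taken from the paper, whose actual design is written in VHDL.

```python
# Minimal software model of a 16-bit Fibonacci LFSR. The taps
# (16, 14, 13, 11) are a common maximal-length choice chosen for
# illustration; they are not taken from the paper's VHDL design.
def lfsr16(seed: int, steps: int):
    state = seed & 0xFFFF
    assert state != 0, "an all-zero state would lock the LFSR"
    for _ in range(steps):
        # Linear function of the previous state: XOR of the tapped bits.
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        # Shift right and feed the XOR result back in as the new MSB.
        state = (state >> 1) | (bit << 15)
        yield state

# Print the first four successive states from a nonzero seed.
for s in lfsr16(0xACE1, 4):
    print(f"{s:04X}")
```

Each step XORs the tapped bits of the previous state and shifts the result back in: exactly the "linear function of the previous state as input to the current state" the abstract describes.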


2021, Vol 21 (S2)
Author(s):  
Feihong Yang, Xuwen Wang, Hetong Ma, Jiao Li

Abstract

Background: Transformer is an attention-based architecture that has proven to be the state-of-the-art model in natural language processing (NLP). To lower the barrier to using transformer-based models in medical language understanding and to extend the scikit-learn toolkit into deep learning, we propose an easy-to-learn Python toolkit named transformers-sklearn. By wrapping the interfaces of transformers in only three functions (i.e., fit, score, and predict), transformers-sklearn combines the advantages of the transformers and scikit-learn toolkits.

Methods: In transformers-sklearn, three Python classes were implemented: BERTologyClassifier for the classification task, BERTologyNERClassifier for the named entity recognition (NER) task, and BERTologyRegressor for the regression task. Each class provides three methods: fit for fine-tuning transformer-based models on the training dataset, score for evaluating the performance of the fine-tuned model, and predict for predicting the labels of the test dataset. transformers-sklearn is a user-friendly toolkit that (1) is customizable via a few parameters (e.g., model_name_or_path and model_type), (2) supports multilingual NLP tasks, and (3) requires less coding. The input data format is generated automatically by transformers-sklearn from the annotated corpus; newcomers only need to prepare the dataset, since the model framework and training methods are predefined in transformers-sklearn.

Results: We collected four open-source medical language datasets: TrialClassification for Chinese medical trial text multi-label classification, BC5CDR for English biomedical named entity recognition, DiabetesNER for Chinese diabetes entity recognition, and BIOSSES for English biomedical sentence similarity estimation. Across the four medical NLP tasks, the average code size of our scripts is 45 lines per task, one-sixth the size of the equivalent transformers scripts. The experimental results show that transformers-sklearn with pretrained BERT models achieved macro F1 scores of 0.8225, 0.8703, and 0.6908 on the TrialClassification, BC5CDR, and DiabetesNER tasks, respectively, and a Pearson correlation of 0.8260 on the BIOSSES task, consistent with the results of transformers.

Conclusions: The proposed toolkit helps newcomers address medical language understanding tasks in the familiar scikit-learn coding style. The code and tutorials of transformers-sklearn are available at https://doi.org/10.5281/zenodo.4453803. In the future, more medical language understanding tasks will be supported to broaden the applications of transformers-sklearn.
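
The abstract names the toolkit's full public surface: three classes, three methods, and the model_name_or_path and model_type parameters. A plausible end-to-end usage, written purely from those names, might look like the sketch below; the exact constructor signature, label format, and import path are assumptions, so the Zenodo tutorials remain the authoritative reference.

```python
# Hypothetical usage sketch of transformers-sklearn, written only from
# the class/method/parameter names in the abstract; the constructor
# signature and label format shown here are assumptions.
from transformers_sklearn import BERTologyClassifier

# Toy training data in plain scikit-learn style: texts and labels.
X_train = [
    "Patients received 10 mg of drug A daily.",
    "The control group was given a placebo.",
]
y_train = ["treatment", "control"]

# model_type and model_name_or_path are the two parameters the
# abstract names explicitly; any Hugging Face checkpoint id fits.
clf = BERTologyClassifier(
    model_type="bert",
    model_name_or_path="bert-base-uncased",
)

clf.fit(X_train, y_train)            # fine-tune on the training set
print(clf.score(X_train, y_train))   # evaluate the fine-tuned model
print(clf.predict(["Drug A was administered to all subjects."]))
```

The appeal of the design is that fit/score/predict mirror the scikit-learn estimator contract, so newcomers reuse a coding style they already know while the transformer-specific machinery stays predefined inside the toolkit.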


2020, Vol 11 (1), p. 129
Author(s):  
Po-Yu Kuo, Ming-Hwa Sheu, Chang-Ming Tsai, Ming-Yan Tsai, Jin-Fa Lin

The conventional shift register consists of master and slave (MS) latches, with each latch receiving data from the previous stage, so the same data bit is stored in two latches. This consumes more power and occupies more layout area than necessary, which is unsatisfactory to most circuit designers. To solve this issue, a novel cross-latch shift register (CLSR) scheme is proposed. It reduces the number of transistors needed for a 256-bit shift register by 48.33% compared with the conventional MS latch design. To verify its function, the CLSR was implemented in a standard TSMC 40 nm CMOS process. Simulation results show that, at a 0.9 V supply voltage and a 250 MHz operating frequency, the proposed CLSR reduces average power consumption by 36%, leakage power by 60.53%, and layout area by 34.76% compared with the MS latch design.
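
To see why the MS organization is wasteful, note that a bit is held in the master latch during one clock phase and in the slave latch during the other, so an N-bit register needs 2N latches even though it stores only N bits. The Python behavioral model below (an illustrative sketch, not the paper's circuit) makes that double storage explicit; removing it at the transistor level is the CLSR's contribution.

```python
# Behavioral model of an N-bit master-slave (MS) shift register,
# showing that every stored bit occupies two latches. This is a toy
# illustration of the redundancy the CLSR removes; it does not model
# the transistor-level CLSR circuit itself.
class MSShiftRegister:
    def __init__(self, n_bits: int):
        self.master = [0] * n_bits  # latches transparent while clock is low
        self.slave = [0] * n_bits   # latches transparent while clock is high

    def clock(self, data_in: int) -> int:
        # Low phase: each master latch samples the previous slave latch.
        for i in reversed(range(1, len(self.master))):
            self.master[i] = self.slave[i - 1]
        self.master[0] = data_in
        # High phase: each slave latch copies its master latch.
        self.slave = self.master.copy()
        return self.slave[-1]  # serial output

sr = MSShiftRegister(4)
for bit in [1, 0, 1, 1]:
    sr.clock(bit)
print(sr.slave)  # [1, 1, 0, 1]: the bit stream shifted in, newest first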


Author(s):  
Anderson Faustino da Silva, Bruno Conde Kind, Jose Wesley de Souza Magalhaes, Jeronimo Nunes Rocha, Breno Campos Ferreira Guimaraes, ...