small step size
Recently Published Documents

TOTAL DOCUMENTS: 17 (FIVE YEARS: 1)
H-INDEX: 6 (FIVE YEARS: 0)
2021, Vol 2021, pp. 1-15
Author(s): Muhammad Shah Jahan, Habib Ullah Khan, Shahzad Akbar, Muhammad Umar Farooq, Sarah Gul, ...

In transfer learning, two major activities, pretraining and fine-tuning, are carried out to perform downstream tasks. The advent of the transformer architecture and of bidirectional language models such as the Bidirectional Encoder Representations from Transformers (BERT) enables this form of transfer learning. BERT overcomes the limitations of unidirectional language models by removing the dependency on recurrent neural networks (RNNs), and its attention mechanism reads the input from both directions, capturing sentence context more effectively. The performance of downstream tasks in transfer learning depends on factors such as dataset size, step size, and the number of selected parameters. Many research studies have produced efficient results by contributing to the pretraining phase, but a comprehensive investigation and analysis of these studies is not yet available. Therefore, this article presents a systematic literature review (SLR) investigating thirty-one (31) influential research studies published during 2018–2020. The contributions of this paper are as follows: (1) thirty-one (31) models inspired by BERT are extracted; (2) every model is compared with RoBERTa (a replicated BERT model), which was trained with a large dataset and batch size but a small step size. It is concluded that seven (7) of the thirty-one (31) models outperform RoBERTa; three of these were trained on a larger dataset, while the other four were trained on a smaller dataset. Among these seven models, six share both the feedforward network (FFN) and attention across layers. The remaining twenty-four (24) models are also studied under different parameter settings. Overall, a pretrained model with a large dataset, more hidden layers and attention heads, a small step size, and parameter sharing produces better results.
This SLR will help researchers pick a suitable model based on their requirements.
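The cross-layer parameter-sharing finding above can be illustrated with a rough, back-of-the-envelope parameter count in Python. The layer sizes are illustrative BERT-base-like values, and embeddings, biases, and layer norms are ignored; this is a sketch of the ALBERT-style sharing idea, not any specific model from the review:

```python
def transformer_layer_params(hidden):
    """Rough per-layer parameter count: four hidden x hidden attention
    projections (Q, K, V, output) plus a two-matrix FFN with a 4x
    expansion. Biases, layer norms, and embeddings are ignored."""
    attention = 4 * hidden * hidden
    ffn = 2 * hidden * (4 * hidden)
    return attention + ffn

def encoder_params(n_layers, hidden, share_across_layers):
    """Total encoder parameters, with or without sharing the FFN and
    attention weights across all layers."""
    per_layer = transformer_layer_params(hidden)
    return per_layer if share_across_layers else n_layers * per_layer

# BERT-base-like configuration: 12 layers, hidden size 768.
unshared = encoder_params(12, 768, share_across_layers=False)
shared = encoder_params(12, 768, share_across_layers=True)
print(unshared // shared)  # -> 12: sharing shrinks the encoder 12x
```

The count makes the trade-off concrete: sharing divides the encoder's weight count by the number of layers, which is why shared models can afford larger hidden sizes for the same budget.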


2020, Vol 2020, pp. 1-10
Author(s): Zhangkai Luo, Zhongmin Pei, Xinmin Wang, Yinan Li, Bo Zou

In this paper, a transmission scheme based on polarization filtering and the weighted fractional Fourier transform (PF-WFRFT) is proposed to enhance transmission security in wireless communications. The distribution of transmit signals processed by WFRFT can be made close to Gaussian, which significantly improves the low probability of detection. However, by scanning the WFRFT order with a small step size, an eavesdropper can restore a regular constellation and crack the information. To overcome this problem, the PF-WFRFT scheme uses two polarized signals with mutually orthogonal polarization states (PSs) to convey the information; these are processed by WFRFT separately and added up linearly before being transmitted by dual-polarized antennas. In this manner, even when scanning the WFRFT order, the recovered signals are composite ones, which makes both the WFRFT order and the signals' PSs difficult to crack, thus improving security. In addition, the effect of polarization-dependent loss (PDL) on the proposed scheme is discussed, and a preprocessing matrix based on the channel information is constructed to eliminate this effect. Finally, numerical results demonstrate the security performance of the proposed scheme in wireless communications.
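The role of the orthogonal polarization states can be sketched with Jones vectors: two symbol streams are mapped onto orthonormal PSs, summed, and transmitted on the H/V branches of a dual-polarized antenna; a receiver that knows a PS recovers its stream by projection, while an eavesdropper without it only sees the composite. This is a minimal sketch of the polarization part only (the WFRFT processing is omitted), and the angles are illustrative:

```python
import cmath
import math

def jones_vector(theta, phi):
    """Unit-norm polarization state as (H, V) Jones components."""
    return (math.cos(theta), cmath.exp(1j * phi) * math.sin(theta))

def orthogonal_state(ps):
    """The Jones vector orthogonal to ps (zero Hermitian inner product)."""
    h, v = ps
    return (-v.conjugate(), h.conjugate())

def transmit(s1, s2, p1, p2):
    """Superpose two symbols on orthonormal PSs p1, p2 -> (H, V) signal."""
    return (s1 * p1[0] + s2 * p2[0], s1 * p1[1] + s2 * p2[1])

def recover(x, ps):
    """Project the received (H, V) signal onto a known PS."""
    return x[0] * ps[0].conjugate() + x[1] * ps[1].conjugate()

p1 = jones_vector(0.4, 0.7)   # illustrative polarization angles
p2 = orthogonal_state(p1)
x = transmit(1 + 1j, -1 + 1j, p1, p2)
s1_hat = recover(x, p1)       # the legitimate receiver gets 1 + 1j back
```

Because the two PSs are orthonormal, each projection cancels the other stream exactly; without knowledge of the PS, any projection mixes both streams, which is the composite-signal effect the scheme relies on.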


2019, Vol 5 (1), pp. 63
Author(s): Mohamad Riyadi, Daswa Daswa

The aim of this study is to derive an approximate solution of the non-autonomous logistic equation with a non-constant carrying capacity. The solution is found via predictor-corrector methods (the Adams-Bashforth-Moulton, Milne, and Hamming methods). The approximate solution is then compared with the exact solution. The results show that, for a small step size, the approximate solution is in good agreement with the exact solution.
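The Adams-Bashforth-Moulton predictor-corrector scheme mentioned in the abstract can be sketched in a few lines of Python; the oscillating carrying capacity K(t) below is an illustrative choice, not the one from the paper:

```python
import math

def abm4(f, t0, y0, h, n_steps):
    """Fourth-order Adams-Bashforth-Moulton predictor-corrector.

    The first three steps are bootstrapped with classical RK4; each
    subsequent step predicts with Adams-Bashforth and corrects once
    with Adams-Moulton."""
    ts, ys, fs = [t0], [y0], [f(t0, y0)]
    for _ in range(3):                       # RK4 start-up steps
        t, y = ts[-1], ys[-1]
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        ts.append(t + h); ys.append(y); fs.append(f(t + h, y))
    for _ in range(3, n_steps):
        t, y = ts[-1], ys[-1]
        # Adams-Bashforth predictor
        yp = y + h / 24 * (55 * fs[-1] - 59 * fs[-2] + 37 * fs[-3] - 9 * fs[-4])
        # Adams-Moulton corrector (one new evaluation at the predicted point)
        y = y + h / 24 * (9 * f(t + h, yp) + 19 * fs[-1] - 5 * fs[-2] + fs[-3])
        ts.append(t + h); ys.append(y); fs.append(f(t + h, y))
    return ts, ys

# Non-autonomous logistic equation with an illustrative oscillating
# carrying capacity K(t) (hypothetical parameters, not the paper's).
r = 1.0
K = lambda t: 10.0 + 2.0 * math.sin(t)
ts, ys = abm4(lambda t, y: r * y * (1.0 - y / K(t)), 0.0, 1.0, 0.01, 500)
```

With a constant carrying capacity the equation has the closed-form logistic solution, which makes it easy to verify the fourth-order accuracy the abstract reports for small step sizes.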


Author(s): B. Ruf, B. Erdnuess, M. Weinmann

With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. We use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization, parallelized for general-purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key frames of input sequences. One important aspect for good performance is how the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime.
Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
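The contrast between equidistant and perspective-aware sampling can be sketched in Python. The inverse-depth version below captures the qualitative effect the paper exploits (dense plane hypotheses near the camera, sparse ones far away); the paper's actual cross-ratio construction in image space is not reproduced here, and the sweep range is hypothetical:

```python
def linear_depth_samples(d_min, d_max, n):
    """Equidistant sampling in depth: the same step size everywhere."""
    step = (d_max - d_min) / (n - 1)
    return [d_min + i * step for i in range(n)]

def inverse_depth_samples(d_min, d_max, n):
    """Sampling that is uniform in inverse depth (disparity): plane
    hypotheses are dense near the camera and sparse far away, matching
    the resolution of the perspective projection."""
    inv_near, inv_far = 1.0 / d_min, 1.0 / d_max
    step = (inv_near - inv_far) / (n - 1)
    return [1.0 / (inv_near - i * step) for i in range(n)]

# Hypothetical sweep range: 1 m to 100 m, 10 plane hypotheses.
planes = inverse_depth_samples(1.0, 100.0, 10)
```

For the same number of hypotheses, the inverse scheme spends its resolution where pixel disparity actually changes, which is why it can match or beat linear sampling with fewer planes.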


2011, Vol 30 (12), pp. 2364-2372
Author(s): George Sirinakis, Cedric R. Clapier, Ying Gao, Ramya Viswanathan, Bradley R. Cairns, ...

2011, Vol 100 (3), pp. 354a
Author(s): George Sirinakis, Cedric R. Clapier, Ying Gao, Ramya Viswanathan, Bradley R. Cairns, ...

2011, Vol 1 (3)
Author(s): Tõnu Trump

This paper studies the output statistics of an adaptive line enhancer based on an affine combination of two NLMS adaptive filters. Combining adaptive filters is a new and interesting way of improving the performance of adaptive algorithms. The structure consists of two adaptive filters that adapt on the same input signal, one with a large step size and the other with a small step size. Such a combination can achieve fast initial convergence and a small steady-state error at the same time. In this paper, we investigate the second-order statistics of the output signal of the combination-based adaptive line enhancer in steady state. The result is given in terms of the parameters of the adaptive combination, the input process statistics, and the optimal Wiener filter weights for the problem at hand.
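A minimal simulation of the combined structure (two NLMS filters, one fast and one slow, driven by the same input) can be sketched in plain Python. The filter order, step sizes, and fixed mixing weight `lam` are illustrative; the paper's scheme concerns the affine combination's steady-state statistics, while practical combinations also adapt the mixing weight, which is omitted here:

```python
import random

def nlms_combination(x, d, order=4, mu_fast=0.5, mu_slow=0.05, lam=0.5):
    """Affine combination of two NLMS filters sharing one input.

    The fast filter (large step size) converges quickly; the slow one
    (small step size) reaches a lower steady-state error. Their outputs
    are mixed with a fixed affine weight lam."""
    eps = 1e-8
    w_fast = [0.0] * order
    w_slow = [0.0] * order
    out = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                   # regression vector
        norm = sum(v * v for v in u) + eps
        y_f = sum(w * v for w, v in zip(w_fast, u))
        y_s = sum(w * v for w, v in zip(w_slow, u))
        out.append(lam * y_f + (1.0 - lam) * y_s)  # combined output
        e_f, e_s = d[n] - y_f, d[n] - y_s
        w_fast = [w + mu_fast * e_f * v / norm for w, v in zip(w_fast, u)]
        w_slow = [w + mu_slow * e_s * v / norm for w, v in zip(w_slow, u)]
    return out

# Hypothetical demo: identify an unknown 4-tap FIR system.
random.seed(0)
w_true = [0.6, -0.3, 0.2, 0.1]
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
d = [0.0] * len(x)
for n in range(4, len(x)):
    d[n] = sum(w_true[k] * x[n - 1 - k] for k in range(4))
y = nlms_combination(x, d)
```

In the noise-free demo both filters eventually learn the true taps, so the combined error decays toward zero; with observation noise, the slow filter's smaller step size is what keeps the steady-state error low.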

