Parameter Estimation of Turbo Code Encoder

2014, Vol 2014, pp. 1-6
Author(s): Mehdi Teimouri, Ahmadreza Hedayat

The problem of reconstructing a channel code consists of finding its design parameters solely from its output. This paper investigates the reconstruction of parallel turbo codes. Reconstruction of a turbo code has been addressed in the literature under the assumption that some parameters of the turbo encoder, such as the number of input and output bits of the constituent encoders and the puncturing pattern, are known. However, in practical noncooperative situations these parameters are unknown and must be estimated before the reconstruction process can be applied. Considering such practical situations, this paper proposes a novel method to estimate the above-mentioned code parameters. The proposed algorithm significantly increases the efficiency of the reconstruction process by judiciously reducing the size of the search space based on an analysis of the observed channel code output. Moreover, simulation results show that the proposed algorithm is highly robust against channel errors when fed with noisy observations.
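A common first step in this kind of blind analysis is estimating the codeword width: when a noiseless linear-code bitstream is reshaped into rows of the true block length, the rows span only a subspace, so the matrix is rank-deficient over GF(2). The sketch below illustrates that generic idea only; it is not the authors' algorithm, and the toy repetition code is an assumption.

```python
# Generic illustration: detect a linear code's block length by testing
# candidate widths for GF(2) rank deficiency (not the paper's method).
import numpy as np

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    rows, cols = m.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]     # move pivot row up
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                 # eliminate column below/above
        rank += 1
    return rank

def estimate_block_length(bits, candidates):
    """Smallest candidate width whose reshaped bit matrix is rank-deficient."""
    for n in sorted(candidates):
        rows = len(bits) // n
        mat = np.array(bits[: rows * n]).reshape(rows, n)
        if gf2_rank(mat) < n:
            return n
    return None

# Toy rate-1/2 code: each 4-bit codeword repeats its 2 info bits twice.
rng = np.random.default_rng(0)
info = rng.integers(0, 2, size=(200, 2))
stream = np.hstack([info, info]).ravel().tolist()
print(estimate_block_length(stream, range(2, 9)))  # detects n = 4
```

Only at the true width (and its multiples) do the reshaped rows become linearly dependent; searching candidates in increasing order returns the fundamental block length.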

2011, Vol 7 (4), pp. 128
Author(s): Fulvio Babich, Francesca Vatta

In certain applications the user has to cope with random packet erasures due, e.g., to deep fading on wireless links or to congestion on wired networks. In other applications the user has to cope with a pure wireless link, on which all packets are available, even if seriously corrupted. The ARQ/FEC schemes already studied and presented in the literature are well optimized for only one of these two applications. In a previous work, the authors aimed at bridging this gap by giving a design method for hybrid ARQ schemes that perform well in both conditions, i.e., in the presence of packet erasures and packet fading. This scheme uses a channel coding system based on partially-systematic periodically punctured turbo codes. Since computing the transfer function, and consequently the union bound on the bit or frame error rate, of a partially-systematic punctured turbo code becomes highly intensive as the interleaver size and the puncturing period increase, this work proposes and validates a simplified, more efficient method to calculate the most significant terms of the average distance spectrum of the turbo encoder.
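To make the "periodically punctured" ingredient concrete: a periodic puncturing pattern deletes bits from the mother code's output streams on a fixed cycle, trading redundancy for rate. The following is a minimal sketch with toy streams and a toy period-2 pattern, not the authors' scheme.

```python
# Illustrative periodic puncturing: a rate-1/3 turbo output (systematic +
# two parity streams) is thinned to rate 1/2. Streams and pattern are toy
# values, not taken from the paper.

def puncture(streams, pattern):
    """Keep streams[i][t] only where pattern[i][t % period] == 1."""
    period = len(pattern[0])
    out = []
    for t in range(len(streams[0])):
        for i, s in enumerate(streams):
            if pattern[i][t % period]:
                out.append(s[t])
    return out

systematic = [1, 0, 1, 1]
parity1    = [0, 1, 1, 0]
parity2    = [1, 1, 0, 0]
# Keep every systematic bit; alternate between the two parity streams,
# so each input bit yields 2 output bits (rate 1/2).
pattern = [[1, 1], [1, 0], [0, 1]]
print(puncture([systematic, parity1, parity2], pattern))  # [1, 0, 0, 1, 1, 1, 1, 0]
```

A partially-systematic variant would also set some zeros in the first pattern row, deleting systematic bits as well.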


Author(s): Santosh Gooru, Dr. S. Rajaram

Recent wireless communication standards such as 3GPP-LTE, WiMAX, DVB-SH, and HSPA incorporate turbo codes for their excellent performance. This work provides an overview of this class of channel codes, which has been shown to be capable of performing close to the Shannon limit. It starts with a brief discussion of turbo encoding and then describes the form of iterative decoder most commonly used to decode turbo codes. Here, the turbo decoder uses the original MAP algorithm instead of the approximated Max-log-MAP algorithm, thereby reducing the number of iterations needed to decode the transmitted information bits. This paper presents FPGA (Field Programmable Gate Array) implementation simulation results for a turbo encoder and decoder structure for the 3GPP-LTE standard.
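The difference between the two decoders mentioned above comes down to one log-domain operation. The exact MAP (log-MAP) recursion uses the Jacobian logarithm max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a-b|)), whereas Max-log-MAP drops the correction term. A minimal comparison:

```python
# log-MAP vs Max-log-MAP: the core log-domain addition operation.
import math

def max_star(a, b):
    """Exact log-domain addition used by the log-MAP algorithm."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-log-MAP approximation: the correction term is omitted."""
    return max(a, b)

a, b = 1.2, 0.9
print(max_star(a, b))          # ln(e^1.2 + e^0.9)
print(max_log(a, b))           # 1.2; error is largest when a is close to b
```

The correction term ln(1 + e^(-|a-b|)) is bounded by ln 2, which is why Max-log-MAP is cheaper but needs more iterations (or an extrinsic scaling factor) to match exact MAP performance.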


2012, Vol 588-589, pp. 765-768
Author(s): Jin Xu, Ying Zhao, Shu Qiang Duan

Turbo codes are channel codes with excellent error-correcting performance at low signal-to-noise ratios, approaching the Shannon limit by adopting random-like coding and iterative decoding. This paper focuses on turbo codes and their FPGA implementation and analyzes the decoding theory and algorithms of turbo codes in depth. First, it analyzes the decoding theory of turbo codes. It then discusses key issues in implementing the high-performing but complex Max-log-MAP algorithm. Finally, it presents the turbo encoder and the decoding algorithm, both of which were successfully implemented in hardware.
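The encoder side of such a design is built from two recursive systematic convolutional (RSC) constituents. As a behavioral reference (not the paper's FPGA design), here is a sketch of one rate-1/2 RSC constituent with the standard 3GPP/LTE generators (13, 15) in octal; trellis termination is omitted for brevity.

```python
# Behavioral model of one RSC constituent encoder, generators (13, 15)_8:
# feedback polynomial 1 + D^2 + D^3, parity polynomial 1 + D + D^3.
# Illustrative reference model only, not an FPGA implementation.

def rsc_encode(bits):
    """Return (systematic, parity) streams for a rate-1/2 RSC encoder."""
    s = [0, 0, 0]                        # s[k] holds the bit delayed k+1 steps
    parity = []
    for u in bits:
        fb = u ^ s[1] ^ s[2]             # recursive feedback (D^2, D^3 taps)
        parity.append(fb ^ s[0] ^ s[2])  # parity taps: 1, D, D^3
        s = [fb, s[0], s[1]]             # shift the register
    return bits, parity

print(rsc_encode([1, 0, 0, 0]))          # impulse response of the parity branch
```

A full turbo encoder runs two such constituents, the second on an interleaved copy of the input, and transmits the systematic stream once plus both parity streams.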


Author(s): Izabella Lokshina

This paper examines turbo codes, which are currently included in many international standards, among them the UMTS standard for third-generation personal communications and the ETSI DVB-T standard for terrestrial digital video broadcasting. The convergence properties of the iterative decoding process associated with a given turbo-coding scheme are estimated using an analysis technique based on the so-called extrinsic information transfer (EXIT) chart. This approach makes it possible to predict the bit error rate (BER) of a turbo code system using only the EXIT chart. It is shown that EXIT charts are powerful tools for analyzing and optimizing the convergence behavior of iterative systems utilizing the turbo principle. The idea is to treat the associated SISO stages as information processors that map input a priori LLRs onto output extrinsic LLRs, the information content being assumed to increase from input to output, and to use them in the design of turbo systems without relying on extensive simulation. Compared with other methods for generating EXIT functions, the suggested approach provides insight into the iterative behavior of linear turbo systems with a substantial reduction in numerical complexity.
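One building block of any EXIT analysis is measuring the mutual information carried by a set of LLRs. With a priori LLRs modeled as consistent Gaussians (mean x·sigma²/2, variance sigma² for BPSK symbol x), I(X; L) can be estimated by the standard time average I ≈ 1 − E[log2(1 + e^(−x·L))]. The sketch below uses illustrative sigma values, not numbers from the paper.

```python
# Estimating the mutual information of LLRs, a core EXIT-chart ingredient.
import numpy as np

def mutual_information(x, llr):
    """Time-average estimate of I(X; L) in bits, x in {-1, +1}."""
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * llr)))

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=200_000)
for sigma in (0.5, 1.5, 3.0):        # larger sigma -> more informative LLRs
    llr = sigma**2 / 2 * x + sigma * rng.standard_normal(x.size)
    print(f"sigma={sigma}: I = {mutual_information(x, llr):.3f}")
```

Sweeping sigma to trace extrinsic information out versus a priori information in, for each SISO stage, yields the two transfer curves whose tunnel (or crossing) predicts decoder convergence.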


Complexity, 2018, Vol 2018, pp. 1-16
Author(s): Amer Awad Alzaidi, Musheer Ahmad, Hussam S. Ahmed, Eesa Al Solami

This paper proposes a novel method of constructing strong substitution boxes (S-boxes) of order n (4 ≤ n ≤ 8) based on a recent optimization algorithm known as the sine-cosine algorithm (SCA). The paper also proposes a new 1D chaotic map, which exhibits enhanced dynamics compared to conventional chaotic maps, for generating the initial population of S-boxes and facilitating the optimization mechanism of the SCA. The proposed method applies the SCA with the enhanced chaotic map to explore and exploit the search space, obtaining optimized S-boxes by maximizing nonlinearity as the fitness function. The S-box construction involves three phases: initialization of the population, optimization, and adjustment. Simulation and performance analyses are carried out using the standard measures of nonlinearity, strict avalanche criterion, bit independence criterion, differential uniformity, linear approximation probability, and the autocorrelation function. The experimental results are compared with recent optimization-based and other S-boxes to show the strength of the proposed method for constructing bijective S-boxes with salient cryptographic features.
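The fitness function maximized here, nonlinearity, is computable from the Walsh-Hadamard spectrum: for a Boolean function f, NL(f) = 2^(n−1) − max|W_f|/2, and an S-box's nonlinearity is the minimum over all nonzero linear combinations of its output bits. A sketch on a toy 3-bit permutation follows (the paper's S-boxes are 4- to 8-bit; the permutation below is an arbitrary illustration).

```python
# S-box nonlinearity via the fast Walsh-Hadamard transform (FWHT).

def walsh_max(f_signs):
    """Max |Walsh coefficient| of a Boolean function given as a +/-1 list."""
    spec = list(f_signs)
    h = 1
    while h < len(spec):                 # in-place iterative FWHT
        for i in range(0, len(spec), 2 * h):
            for j in range(i, i + h):
                a, b = spec[j], spec[j + h]
                spec[j], spec[j + h] = a + b, a - b
        h *= 2
    return max(abs(v) for v in spec)

def nonlinearity(sbox, n):
    """Min nonlinearity over all nonzero output-bit masks of an n-bit S-box."""
    nl = 1 << n
    for mask in range(1, 1 << n):
        f = [(-1) ** bin(sbox[x] & mask).count("1") for x in range(1 << n)]
        nl = min(nl, (1 << (n - 1)) - walsh_max(f) // 2)
    return nl

sbox3 = [0, 1, 3, 6, 7, 4, 5, 2]         # arbitrary 3-bit permutation
print(nonlinearity(sbox3, 3))
```

An optimizer such as the SCA would score each candidate S-box with this function and keep candidates that push the minimum nonlinearity upward.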


2020, Vol 10 (1)
Author(s): Robin Singh, Anu Agarwal, Brian W. Anthony

Abstract: Nanophotonics is a rapidly emerging field in which complex on-chip components are required to manipulate light waves. The design space of on-chip nanophotonic components, such as an optical metasurface that uses sub-wavelength meta-atoms, is often high-dimensional. As such, conventional optimization methods fail to capture the global optimum within the feasible search space. In this manuscript, we explore a machine learning (ML)-based method for the inverse design of meta-optical structures. We present a data-driven approach for modeling a grating meta-structure that performs photonic beam engineering. On-chip planar photonic waveguide-based beam engineering offers the potential to efficiently manipulate photons to create excitation beams (Gaussian, focused, and collimated) for lab-on-chip applications in infrared, Raman, and fluorescence spectroscopic analysis. Inverse modeling predicts metasurface design parameters from a desired electromagnetic field outcome. Starting with the desired diffraction beam profile, we apply an inverse model to evaluate the optimal design parameters of the metasurface. Parameters such as the repetition period (along two axes) and the height and size of the scatterers are calculated using feedforward deep neural network (DNN) and convolutional neural network (CNN) architectures. A qualitative analysis of the trained neural network, working in tandem with the forward model, predicts the diffraction profile with a correlation coefficient as high as 0.996. The developed model allows us to rapidly estimate the desired design parameters, in contrast to time-intensive conventional (gradient-descent-based or genetic) optimization approaches.
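The fidelity figure quoted above (correlation as high as 0.996) is a Pearson correlation between the desired diffraction profile and the one the forward model reproduces from the predicted parameters. A sketch of that metric on synthetic Gaussian beam profiles (illustrative data, not the paper's):

```python
# Pearson correlation between a target beam profile and a reconstruction.
import numpy as np

z = np.linspace(-3, 3, 400)
target = np.exp(-z**2)                    # idealized Gaussian beam profile
rng = np.random.default_rng(7)
reconstructed = target + 0.02 * rng.standard_normal(z.size)  # toy model error

r = np.corrcoef(target, reconstructed)[0, 1]
print(f"correlation coefficient r = {r:.3f}")
```

In the paper's pipeline, `reconstructed` would come from running the trained forward model on the inverse model's predicted metasurface parameters rather than from added noise.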


Frequenz, 2015, Vol 69 (3-4)
Author(s): Saqib Ejaz, Feng-Fan Yang

Abstract: The parallel encoding and decoding structure of turbo codes makes them natural candidates for coded-cooperative scenarios. In this paper, we focus on one of the key components of turbo codes, the interleaver, and analyze its effect on the performance of coded-cooperative communication. The impact of an interleaver on the overall performance of a cooperative system depends on the type of interleaver and its location in the cooperative encoding scheme. We consider the code matched interleaver (CMI) an optimum choice and present its role in a coded-cooperation scenario. Since the search for and convergence of a CMI for long interleaver sizes is an issue, a modification of the search conditions is included without any compromise in the performance of the CMI. We also present an analytical method to determine the maximum S-constraint length for a CMI design. Further, we analyze the performance of two different turbo encoding schemes, the distributed turbo code (DTC) and the distributed multiple turbo code (DMTC), after inclusion of the CMI. Monte Carlo simulations show that the CMI increases the diversity gain relative to conventional interleavers such as the uniform random interleaver. The channel between all communication nodes is assumed to be Rayleigh fading.
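The S-constraint referenced above is the spreading condition of S-random-style interleavers: indices within distance S of each other must be permuted at least S apart, i.e. |π(i) − π(j)| > S whenever 0 < |i − j| ≤ S. A sketch that checks this condition for a given permutation (toy values; a CMI additionally imposes distance-spectrum conditions beyond this check):

```python
# Verify the S-constraint |pi(i) - pi(j)| > S for all 0 < |i - j| <= S.

def satisfies_s_constraint(pi, s):
    """True if the permutation pi spreads nearby inputs at least s apart."""
    n = len(pi)
    for i in range(n):
        for j in range(i + 1, min(i + s + 1, n)):
            if abs(pi[i] - pi[j]) <= s:
                return False
    return True

print(satisfies_s_constraint([0, 2, 4, 1, 3, 5], 1))  # True
print(satisfies_s_constraint([0, 1, 2], 1))           # False: 0 and 1 stay adjacent
```

Larger feasible S generally breaks up low-weight error patterns better, which is why the maximum achievable S-constraint length is a useful design bound.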


Author(s): Jenn-Long Liu

Particle swarm optimization (PSO) is a promising evolutionary approach in which a particle moves over the search space with a velocity that is adjusted according to the flying experience of the particle and its neighbors, so that the swarm flies towards better and better search areas over the course of the search process. Although PSO is effective in solving global optimization problems, some crucial user-specified parameters, such as the cognitive and social learning rates, affect the performance of the algorithm, since the search process of a PSO algorithm is nonlinear and complex. Consequently, a PSO with well-selected parameter settings is likely to perform well. This work develops an evolving PSO that uses Clerc's PSO to evaluate the fitness of the objective function and a genetic algorithm (GA) to evolve the optimal design parameters. The crucial design parameters studied herein include the cognitive and social learning rates as well as the constriction factor of Clerc's PSO. Several benchmark cases are run to derive a set of optimal parameters via the evolving PSO, and the resulting parameters are then applied to the engineering optimization of a pressure vessel design.
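Clerc's variant, the inner optimizer tuned here, damps every velocity update by a constriction factor χ = 2 / |2 − φ − sqrt(φ² − 4φ)| with φ = c1 + c2 > 4. A minimal self-contained sketch on the sphere function follows; the learning rates are the conventional c1 = c2 = 2.05, not values evolved in the paper, and the GA outer loop is omitted.

```python
# Clerc's constriction-factor PSO on the sphere function (minimal sketch).
import math
import random

def clerc_pso(obj, dim, n_particles=20, iters=200, c1=2.05, c2=2.05, seed=0):
    rng = random.Random(seed)
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))  # ~0.7298
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal bests
    pval = [obj(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]         # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = chi * (vs[i][d]
                                  + c1 * r1 * (pbest[i][d] - xs[i][d])
                                  + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = obj(xs[i])
            if f < pval[i]:
                pbest[i], pval[i] = xs[i][:], f
                if f < gval:
                    gbest, gval = xs[i][:], f
    return gbest, gval

best, val = clerc_pso(lambda x: sum(t * t for t in x), dim=3)
print(val)   # near zero on the sphere function
```

The evolving scheme in the paper would wrap a GA around this routine, treating (c1, c2, χ) as the GA's genes and the PSO's final objective value as their fitness.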


2014, Vol 1006-1007, pp. 764-767
Author(s): Hao Xiang Wang, Shan Yue, Yang Li

This paper proposes a new method for vector quantization that minimizes the Kullback-Leibler divergence between the class label distributions over the quantization inputs, which are the original vectors, and over the outputs, which are the quantization subsets of the vector set. In this way, the vector quantization output retains as much of the class label information as possible. An objective function is constructed, and an iterative algorithm is developed to minimize it. The novel method is evaluated on bag-of-features-based image classification problems.
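The divergence term at the heart of this objective is D(p ‖ q) = Σ_c p(c) · ln(p(c)/q(c)), comparing a vector's class-label distribution with that of its assigned quantization cell. A sketch with toy distributions (not the paper's image data):

```python
# Kullback-Leibler divergence between two class-label distributions.
import math

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) in nats; eps guards against zero-probability classes."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]   # label distribution associated with an input vector
q = [0.6, 0.3, 0.1]   # label distribution of its quantization cell
print(kl_divergence(p, q))
```

An iterative quantizer in this spirit would reassign vectors to cells (and re-estimate the cells' label distributions) so that the total divergence, summed over all vectors, decreases at each step.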

