Genetic Algorithms to Maximize the Relevant Mutual Information in Communication Receivers

Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1434
Author(s):  
Jan Lewandowsky ◽  
Sumedh Jitendra Dongare ◽  
Rocío Martín Lima ◽  
Marc Adrat ◽  
Matthias Schrammen ◽  
...  

The preservation of relevant mutual information under compression is the fundamental challenge of the information bottleneck method. It has many applications in machine learning and in communications. The recent literature describes successful applications of this concept in quantized detection and channel decoding schemes. The focal idea is to build receiver algorithms intended to preserve the maximum possible amount of relevant information, despite very coarse quantization. The existing literature shows that the resulting quantized receiver algorithms can achieve performance very close to that of conventional high-precision systems. Moreover, all demanding signal processing operations get replaced with lookup operations in the considered system design. In this paper, we develop the idea of maximizing the preserved relevant information in communication receivers further by considering parametrized systems. Such systems can help overcome the need for lookup tables in cases where their huge size makes them impractical. We propose to apply genetic algorithms, which are inspired by the natural evolution of species, to the problem of parameter optimization. As examples, we investigate receiver-side channel output quantization and demodulation to illustrate the notable performance and the flexibility of the proposed concept.
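The parameter-optimization idea above can be sketched in a toy setting: a genetic algorithm evolves the thresholds of a coarse (2-bit) channel-output quantizer so that the mutual information between a BPSK symbol and the quantizer output is maximized. The noise level, population size, and GA operators below are illustrative assumptions, not taken from the paper.

```python
import math, random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

SIGMA = 0.8  # assumed AWGN standard deviation (illustrative)

def preserved_mi(thresholds):
    """I(X;Q) in bits for BPSK x in {-1,+1} quantized by the sorted thresholds."""
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    # p(q|x) from Gaussian bin probabilities
    pq_given = {x: [phi((edges[i + 1] - x) / SIGMA) - phi((edges[i] - x) / SIGMA)
                    for i in range(len(edges) - 1)]
                for x in (-1.0, 1.0)}
    pq = [0.5 * (pq_given[-1.0][i] + pq_given[1.0][i]) for i in range(len(edges) - 1)]
    mi = 0.0
    for x in (-1.0, 1.0):
        for i, p in enumerate(pq_given[x]):
            if p > 0 and pq[i] > 0:
                mi += 0.5 * p * math.log2(p / pq[i])
    return mi

def ga_optimize(n_params=3, pop=40, gens=120, seed=1):
    """Toy GA: truncation selection, averaging crossover, Gaussian mutation."""
    rng = random.Random(seed)
    population = [[rng.uniform(-2, 2) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=preserved_mi, reverse=True)
        parents = population[:pop // 4]          # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])  # crossover + mutation
        population = parents + children
    best = max(population, key=preserved_mi)
    return sorted(best), preserved_mi(best)
```

With three evolved thresholds the preserved mutual information exceeds that of a plain one-bit hard decision at zero, which is the effect the paper targets on a much larger scale.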

2012 ◽  
Vol 17 (4) ◽  
pp. 241-244
Author(s):  
Cezary Draus ◽  
Grzegorz Nowak ◽  
Maciej Nowak ◽  
Marcin Tokarski

Abstract The possibility to obtain a desired color of the product and to ensure its repeatability in the production process is highly desired in many industries, such as the printing, automobile, dyeing, textile, cosmetics or plastics industry. So far, most companies have traditionally used the "manual" method, relying on the intuition and experience of a colorist. However, the manual preparation of multiple samples and their correction can be very time-consuming and expensive. Computer technology has allowed the development of software to support the process of matching colors. Nowadays, formulation of colors is done with appropriate equipment (colorimeters, spectrophotometers, computers) and dedicated software. Computer-aided formulation is much faster and cheaper than manual formulation, because fewer corrective iterations have to be carried out to achieve the desired result. Moreover, the colors are analyzed with regard to metamerism, and the best recipe can be chosen according to specific criteria (price, quantity, availability). The optimization problem of color formulation can be solved in many different ways. The authors decided to apply genetic algorithms in this domain.
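A heavily simplified sketch of GA-based recipe matching: concentrations of a few hypothetical colorants are evolved to minimize the distance between the mixed color and a target. Real formulation software works with reflectance spectra and Kubelka-Munk mixing and a proper Delta-E metric; the linear RGB mixing, colorant values, and GA settings below are illustrative assumptions only.

```python
import random

# Hypothetical base colorants in linear RGB (illustrative, not real pigments).
COLORANTS = [(0.9, 0.1, 0.1), (0.1, 0.8, 0.2), (0.1, 0.2, 0.9), (0.95, 0.9, 0.85)]
TARGET = (0.55, 0.45, 0.40)

def mix(conc):
    """Normalized linear mixture of the colorants (crude stand-in for Kubelka-Munk)."""
    total = sum(conc) or 1e-9
    w = [c / total for c in conc]
    return tuple(sum(w[i] * COLORANTS[i][k] for i in range(len(w))) for k in range(3))

def color_error(conc):
    """Euclidean distance to the target color (stand-in for a Delta-E metric)."""
    m = mix(conc)
    return sum((m[k] - TARGET[k]) ** 2 for k in range(3)) ** 0.5

def ga_formulate(pop=60, gens=150, seed=7):
    """Toy GA: elitism, one-point crossover, Gaussian mutation, non-negative doses."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in COLORANTS] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=color_error)
        elite = population[:pop // 5]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(COLORANTS))            # one-point crossover
            child = [max(0.0, c + rng.gauss(0, 0.05)) for c in a[:cut] + b[cut:]]
            children.append(child)
        population = elite + children
    best = min(population, key=color_error)
    return best, color_error(best)
```

The fitness function is the only place the color model enters, so swapping in spectral data and a real Delta-E formula leaves the GA itself unchanged.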


Author(s):  
Nguyen N. Tran ◽  
Ha X. Nguyen

A capacity analysis for generally correlated wireless multi-hop multi-input multi-output (MIMO) channels is presented in this paper. The channel at each hop is spatially correlated, the source symbols are mutually correlated, and the additive Gaussian noises are colored. First, by invoking the Karush-Kuhn-Tucker conditions for the optimality of convex programming, we derive the optimal source symbol covariance for the maximum mutual information between the channel input and the channel output when the transmitter has full channel knowledge. Secondly, we formulate the average mutual information maximization problem when only the channel statistics are available at the transmitter. Since this problem is almost impossible to solve analytically, the numerical interior-point method is employed to obtain the optimal solution. Furthermore, to reduce the computational complexity, an asymptotic closed-form solution is derived by maximizing an upper bound of the objective function. Simulation results show that the average mutual information obtained by the asymptotic design is very close to that obtained by the optimal design, while requiring far less computation.
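In the simplest special case of the KKT approach (a single hop with white noise and known eigen-channel gains), the optimal power allocation reduces to classical water-filling. A generic sketch of that special case follows; the gains and power budget are illustrative, and the paper's full multi-hop correlated setting is much more involved.

```python
import math

def waterfilling(gains, total_power, noise_var=1.0):
    """KKT (water-filling) solution of max sum_i log2(1 + g_i p_i / noise_var)
    subject to sum_i p_i = P, p_i >= 0:  p_i = max(0, mu - noise_var / g_i)."""
    inv = [noise_var / g for g in gains]          # "floor" heights noise_var / g_i
    lo, hi = 0.0, max(inv) + total_power          # bracket for the water level mu
    for _ in range(100):                          # bisection on mu
        mu = 0.5 * (lo + hi)
        if sum(max(0.0, mu - v) for v in inv) > total_power:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    p = [max(0.0, mu - v) for v in inv]
    capacity = sum(math.log2(1.0 + g * pi / noise_var) for g, pi in zip(gains, p))
    return p, capacity
```

For gains (2.0, 1.0, 0.2) and unit power budget, the weakest eigenmode falls below the water level and receives no power, while the strongest receives the most.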


2021 ◽  
Author(s):  
Raysa Gevartosky ◽  
Humberto Fanelli Carvalho ◽  
Germano Costa-Neto ◽  
Osval A. Montesinos-Lopez ◽  
Jose Crossa ◽  
...  

Genomic prediction (GP) success is directly dependent on establishing a training population, where incorporating envirotyping data and correlated traits may increase the GP accuracy. Therefore, we aimed to design optimized training sets for multi-trait multi-environment trials (MTMET). To that end, we evaluated the predictive ability of five GP models using the genomic best linear unbiased predictor model (GBLUP) with additive + dominance effects (M1) as the baseline and then adding genotype by environment interaction (G × E) (M2), enviromic data (W) (M3), W + G × E (M4), and finally W + G × W (M5), where G × W denotes the genotype by enviromic interaction. Moreover, we considered single-trait multi-environment trials (STMET) and MTMET for three traits: grain yield (GY), plant height (PH), and ear height (EH), with two datasets and two cross-validation schemes. Afterward, we built two kernels for genotype by environment by trait interaction (GET) and genotype by enviromic by trait interaction (GWT) and applied genetic algorithms to select the genotype:environment:trait combinations that represent 98% of the variation of the whole dataset and compose the optimized training set (OTS). Using OTS based on enviromic data, it was possible to increase the response to selection per amount invested by 142%. Consequently, our results suggest that genetic algorithms of optimization associated with genomic and enviromic data efficiently design optimized training sets for genomic prediction and improve the genetic gains per dollar invested.


Informatics ◽  
2019 ◽  
Vol 6 (2) ◽  
pp. 17
Author(s):  
Athanasios Davvetas ◽  
Iraklis A. Klampanos ◽  
Spiros Skiadopoulos ◽  
Vangelis Karkaletsis

Evidence transfer for clustering is a deep learning method that manipulates the latent representations of an autoencoder according to external categorical evidence with the effect of improving a clustering outcome. Evidence transfer's application to clustering is designed to be robust when introduced to low-quality evidence, while improving clustering accuracy when relevant corresponding evidence is introduced. We interpret the effects of evidence transfer on the latent representation of an autoencoder by comparing our method to the information bottleneck method. The information bottleneck is an optimisation problem of finding the best tradeoff between maximising the mutual information between data representations and a task outcome while at the same time effectively compressing the original data source. We posit that the evidence transfer method has essentially the same objective regarding the latent representations produced by an autoencoder. We verify our hypothesis using information theoretic metrics from feature selection in order to perform an empirical analysis over the information that is carried through the bottleneck of the latent space. We use the relevance metric to compare the overall mutual information between the latent representations and the ground truth labels before and after their incremental manipulation, as well as to study the effects of evidence transfer regarding the significance of each latent feature.
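The relevance metric mentioned above boils down to the mutual information between (discretized) latent features and ground-truth labels. A minimal plug-in estimator over paired samples can be sketched as follows; how the latent features are discretized is an assumption here, not specified by the abstract.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits from paired discrete samples, using empirical
    (plug-in) probabilities: sum_xy p(x,y) log2(p(x,y) / (p(x) p(y)))."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

A feature identical to balanced binary labels carries 1 bit of relevance, while an independent feature carries none; comparing such scores before and after the representation is manipulated is the essence of the analysis described above.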


2005 ◽  
Vol 60 (11-12) ◽  
pp. 865-866 ◽  
Author(s):  
Willi-Hans Steeb ◽  
Yorick Hardy ◽  
Ruedi Stoop

We apply genetic algorithms to find the value where the CHSH inequality is violated.
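A toy version of this search can be sketched directly: a genetic algorithm evolves the four measurement angles of a CHSH experiment to maximize |S|, assuming singlet-state correlations E(x, y) = -cos(x - y). The classical bound is |S| <= 2 and the quantum maximum (Tsirelson's bound) is 2*sqrt(2); the GA settings below are illustrative, not from the paper.

```python
import math, random

def chsh(angles):
    """|S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| for the singlet state,
    with correlation E(x, y) = -cos(x - y)."""
    a, a2, b, b2 = angles
    E = lambda x, y: -math.cos(x - y)
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

def ga_maximize_chsh(pop=30, gens=200, seed=3):
    """Toy GA: elitism plus Gaussian mutation over the four angles."""
    rng = random.Random(seed)
    population = [[rng.uniform(0, math.pi) for _ in range(4)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=chsh, reverse=True)
        elite = population[:pop // 5]
        children = []
        while len(children) < pop - len(elite):
            parent = rng.choice(elite)
            children.append([x + rng.gauss(0, 0.05) for x in parent])  # mutation
        population = elite + children
    best = max(population, key=chsh)
    return best, chsh(best)
```

The evolved angles exceed the classical bound of 2 and approach 2*sqrt(2), which is attained exactly at, e.g., a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.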


2020 ◽  
Vol 1 ◽  
pp. 646-660 ◽  
Author(s):  
Maximilian Stark ◽  
Linfang Wang ◽  
Gerhard Bauch ◽  
Richard D. Wesel

Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 974
Author(s):  
Taro Tezuka ◽  
Shizuma Namekawa

Task-nuisance decomposition describes why the information bottleneck loss I(z;x)−βI(z;y) is a suitable objective for supervised learning. The true category y is predicted for input x using latent variables z. When n is a nuisance independent from y, I(z;n) can be decreased by reducing I(z;x) since the latter upper bounds the former. We extend this framework by demonstrating that conditional mutual information I(z;x|y) provides an alternative upper bound for I(z;n). This bound is applicable even if z is not a sufficient representation of x, that is, I(z;y)≠I(x;y). We used mutual information neural estimation (MINE) to estimate I(z;x|y). Experiments demonstrated that I(z;x|y) is smaller than I(z;x) for layers closer to the input, matching the claim that the former is a tighter bound than the latter. Because of this difference, the information plane differs when I(z;x|y) is used instead of I(z;x).
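The inequality I(z;x|y) <= I(z;x), which underlies the claim that the conditional quantity is the tighter bound, can be checked exactly on a small discrete model with the Markov structure y -> x -> z (then I(z;x|y) = I(z;x) - I(z;y)). The binary chain and flip probabilities below are an illustrative toy, not the MINE estimator used in the paper.

```python
from itertools import product
from math import log2

# Toy Markov chain y -> x -> z over bits: x flips y w.p. EPS_X, z flips x w.p. EPS_Z.
EPS_X, EPS_Z = 0.1, 0.2

def joint():
    """Full joint distribution p(y, x, z) of the toy chain, y uniform."""
    p = {}
    for y, x, z in product((0, 1), repeat=3):
        px = (1 - EPS_X) if x == y else EPS_X
        pz = (1 - EPS_Z) if z == x else EPS_Z
        p[(y, x, z)] = 0.5 * px * pz
    return p

def mi_zx(p):
    """I(Z;X) by marginalizing the joint."""
    pzx, px, pz = {}, {}, {}
    for (y, x, z), v in p.items():
        pzx[(z, x)] = pzx.get((z, x), 0.0) + v
        px[x] = px.get(x, 0.0) + v
        pz[z] = pz.get(z, 0.0) + v
    return sum(v * log2(v / (px[x] * pz[z])) for (z, x), v in pzx.items() if v > 0)

def mi_zx_given_y(p):
    """I(Z;X|Y) = sum_y p(y) I(Z;X | Y=y)."""
    total = 0.0
    for y0 in (0, 1):
        cond = {(x, z): v for (y, x, z), v in p.items() if y == y0}
        py = sum(cond.values())
        px, pz = {}, {}
        for (x, z), v in cond.items():
            px[x] = px.get(x, 0.0) + v
            pz[z] = pz.get(z, 0.0) + v
        total += sum(v * log2((v / py) / ((px[x] / py) * (pz[z] / py)))
                     for (x, z), v in cond.items() if v > 0)
    return total
```

Here I(Z;X) equals 1 - H(EPS_Z) analytically, and I(Z;X|Y) is strictly smaller because conditioning on the true label removes the label information that z shares with x.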


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 389
Author(s):  
Sonali Parbhoo ◽  
Mario Wieser ◽  
Aleksander Wieczorek ◽  
Volker Roth

Estimating the effects of an intervention from high-dimensional observational data is a challenging problem due to the existence of confounding. The task is often further complicated in healthcare applications where a set of observations may be entirely missing for certain patients at test time, thereby prohibiting accurate inference. In this paper, we address this issue using an approach based on the information bottleneck to reason about the effects of interventions. To this end, we first train an information bottleneck to perform a low-dimensional compression of covariates by explicitly considering the relevance of information for treatment effects. As a second step, we subsequently use the compressed covariates to perform a transfer of relevant information to cases where data are missing during testing. In doing so, we can reliably and accurately estimate treatment effects even in the absence of a full set of covariate information at test time. Our results on two causal inference benchmarks and a real application for treating sepsis show that our method achieves state-of-the-art performance, without compromising interpretability.

