Blind back-propagation method for fiber nonlinearity compensation with low computational complexity and high performance

2020 ◽ Vol 28 (8) ◽ pp. 11424
Author(s):  
Junhe Zhou ◽  
Yuheng Wang ◽  
Yunwang Zhang
2012 ◽ Vol 20 (13) ◽ pp. 14406
Author(s):  
Guanjun Gao ◽  
Xi Chen ◽  
William Shieh

Author(s):  
Samer Alabed

In this work, we are interested in implementing, developing, and evaluating multi-antenna techniques for multi-user two-way wireless relay networks that provide a good trade-off between computational complexity and performance in terms of symbol error rate and achievable data rate. In particular, a variety of newly developed multi-antenna techniques is proposed and studied. Techniques based on orthogonal projection enjoy low computational complexity but incur a high performance penalty. Techniques based on the maximum likelihood strategy achieve high performance but suffer from very high computational complexity. Techniques based on the randomization strategy provide a good trade-off between computational complexity and performance: they enjoy low computational complexity while achieving almost the same performance as the maximum-likelihood-based techniques.
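The complexity penalty of the maximum likelihood strategy mentioned above comes from its exhaustive search over all candidate symbol vectors. The following is a minimal illustrative sketch (not the paper's exact algorithm) of exhaustive ML detection for a linear model y = Hx + n over a QPSK alphabet; the model, alphabet, and function names are assumptions for illustration. The candidate count grows as |A|^Nt in the number of transmit antennas Nt, which is exactly the exponential cost that randomization-based techniques aim to avoid.

```python
import numpy as np
from itertools import product

# Unit-energy QPSK constellation (illustrative alphabet).
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def ml_detect(y, H):
    """Exhaustive ML search: argmin_x ||y - H @ x||^2 over all symbol vectors.

    Enumerates |QPSK| ** Nt candidates, so the cost is exponential in the
    number of transmit streams Nt = H.shape[1].
    """
    nt = H.shape[1]
    best, best_metric = None, np.inf
    for combo in product(QPSK, repeat=nt):  # |A|^Nt candidate vectors
        x = np.array(combo)
        metric = np.linalg.norm(y - H @ x) ** 2
        if metric < best_metric:
            best, best_metric = x, metric
    return best

# Noiseless sanity check: ML recovers the transmitted vector exactly.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
x_true = np.array([QPSK[0], QPSK[3]])
x_hat = ml_detect(H @ x_true, H)
assert np.allclose(x_hat, x_true)
```

For two streams the search visits only 16 candidates, but at eight streams it already visits 65,536 — which is why lower-complexity strategies such as orthogonal projection or randomization are attractive.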


2021 ◽ pp. 1-1
Author(s):  
Xiang Lin ◽  
Shenghang Luo ◽  
Sunish Kumar Orappanpara Soman ◽  
Octavia Dobre ◽  
Lutz Lampe ◽  
...  

Entropy ◽ 2021 ◽ Vol 23 (2) ◽ pp. 223
Author(s):  
Yen-Ling Tai ◽  
Shin-Jhe Huang ◽  
Chien-Chang Chen ◽  
Henry Horng-Shing Lu

Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would likely hinder the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically into a non-interacting physical system and treat image voxels as particle-like clusters. We then reconstruct the Fermi–Dirac distribution as a correction function for normalizing the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can thereby delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional-fusion U-Net for algorithmic validation, and the proposed Fermi–Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of the computational time cost on a low-cost hardware architecture. Although the global histogram equalization correction function has the lowest computational time among the employed correction functions, the proposed Fermi–Dirac correction function exhibits better image augmentation and segmentation capabilities.
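To make the idea concrete, here is a minimal sketch of a Fermi–Dirac-style intensity correction, assuming the correction maps each voxel intensity I through the Fermi–Dirac form f(I) = 1 / (exp((I − μ) / T) + 1); the parameterization, default choices of μ and T, and function name are assumptions for illustration, and the paper's exact formulation may differ.

```python
import numpy as np

def fermi_dirac_correction(volume, mu=None, T=None):
    """Map voxel intensities through a Fermi-Dirac distribution.

    mu plays the role of the chemical potential (a soft intensity
    threshold) and T the temperature (transition sharpness). Outputs lie
    in (0, 1): intensities well below mu map near 1, intensities well
    above mu map near 0, so small outputs can be filtered away as
    insignificant cluster components.
    """
    v = np.asarray(volume, dtype=np.float64)
    mu = np.mean(v) if mu is None else mu      # assumed default: mean intensity
    T = np.std(v) + 1e-12 if T is None else T  # assumed default: intensity spread
    return 1.0 / (np.exp((v - mu) / T) + 1.0)

vol = np.array([0.0, 0.5, 1.0, 2.0, 10.0])
out = fermi_dirac_correction(vol, mu=1.0, T=0.5)
# Bounded in (0, 1), equal to 0.5 exactly at I = mu, and monotonically
# decreasing in intensity.
assert np.all((out > 0) & (out < 1))
assert np.isclose(out[2], 0.5)
assert np.all(np.diff(out) < 0)
```

Because the mapping is a fixed element-wise function, it costs a single pass over the volume, which is consistent with its use as a cheap preprocessing step rather than a learned transform.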

