Performance Improvement Through Error Fine Tuning In Interferometric SAR Signal Processing

Author(s):  
Zahid Hasan Bawar ◽  
Long Teng ◽  
Tao Zeng
2016 ◽  
Vol 27 (03) ◽  
pp. 219-236 ◽  
Author(s):  
Susan Scollie ◽  
Danielle Glista ◽  
Julie Seto ◽  
Andrea Dunn ◽  
Brittany Schuett ◽  
...  

Background: Although guidelines for fitting hearing aids for children are well developed and have a strong basis in evidence, specific protocols for fitting and verifying technologies can supplement such guidelines. One such technology is frequency-lowering signal processing. Children require access to a broad bandwidth of speech to detect and use all phonemes, including the female /s/. When access through conventional amplification is not possible, frequency-lowering signal processing may be considered as a means to overcome this limitation. Fitting and verification protocols are needed to better define candidacy determination and options for assessing and fine-tuning frequency-lowering signal processing for individuals. Purpose: This work aims to (1) describe a set of calibrated phonemes that can be used to characterize the variation among different brands of frequency-lowering processors in hearing aids, along with the verification procedures that use these signals, and (2) determine whether verification with these signals is predictive of perceptual changes associated with changes in the strength of frequency-lowering signal processing. Finally, we aimed to develop a fitting protocol for use in pediatric clinical practice. Study Sample: Study 1 used a sample of six hearing aids spanning four types of frequency-lowering algorithms for an electroacoustic evaluation. Study 2 included 21 adults who had hearing loss (mean age 66 yr). Data Collection and Analysis: Simulated fricatives were designed to mimic the level and spectral shape of female fricatives extracted from two sources of speech. These signals were used to verify the frequency-lowering effects of four distinct types of frequency-lowering signal processors available in commercial hearing aids, and the verification measures were compared to measurements of extracted fricatives made in a reference system.
In a second study, the simulated fricatives were used within a probe microphone measurement system to verify a wide range of frequency-compression settings in a commercial hearing aid, and the adult listeners were tested at each setting. The relation between the hearing aid verification measures and the listeners' ability to detect and discriminate between fricatives was examined. Results: Verification measures made with the simulated fricatives agreed to within 4 dB, on average, and tended to mimic the frequency response shape of fricatives presented in a running-speech context. Some processors showed a greater aided response level for fricatives in running speech than for fricatives presented in isolation. Results with listeners indicated that verified settings that provided a positive sensation level for /s/ and that maximized the frequency difference between /s/ and /∫/ tended to yield the best performance. Conclusions: Frequency-lowering signal processors have measurable effects on the high-frequency fricative content of speech, particularly the female /s/. It is possible to measure these effects either with a simple strategy that presents an isolated simulated fricative and measures the aided frequency response, or with a more complex system that extracts fricatives from running speech. For some processors, a more accurate result may be achieved with a running-speech system. In listeners, the aided frequency location and sensation level of fricatives may be helpful in predicting whether a specific hearing aid fitting, with or without frequency lowering, will support access to the fricatives of speech.
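The two verified criteria the abstract highlights, a positive sensation level for /s/ and a maximized /s/-/∫/ frequency separation, can be expressed as a small selection rule. This is a minimal sketch with entirely hypothetical verification values (the level and peak-frequency numbers below are invented for illustration, not taken from the study):

```python
# Sketch of the selection rule implied by the results: among verified
# frequency-compression settings, keep only those giving /s/ a positive
# sensation level (SL), then prefer the largest /s/-vs-/sh/ peak-
# frequency separation. All numeric values here are hypothetical.

def sensation_level(aided_level_db, threshold_db):
    """SL = aided output level minus the listener's threshold (dB)."""
    return aided_level_db - threshold_db

def best_setting(settings, threshold_db):
    """Filter settings with SL(/s/) > 0, then maximize the aided
    peak-frequency difference |peak(/s/) - peak(/sh/)| in Hz."""
    audible = [s for s in settings
               if sensation_level(s["s_level_db"], threshold_db) > 0]
    if not audible:
        return None
    return max(audible, key=lambda s: abs(s["s_peak_hz"] - s["sh_peak_hz"]))

# Hypothetical verification measurements for three compression settings.
settings = [
    {"name": "weak",   "s_level_db": 48, "s_peak_hz": 6300, "sh_peak_hz": 3400},
    {"name": "medium", "s_level_db": 55, "s_peak_hz": 4800, "sh_peak_hz": 3100},
    {"name": "strong", "s_level_db": 60, "s_peak_hz": 3300, "sh_peak_hz": 3000},
]
print(best_setting(settings, threshold_db=50)["name"])
```

With these toy numbers, the "weak" setting is excluded because /s/ is inaudible (SL below zero), and "medium" beats "strong" because its /s/-/∫/ separation is larger.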


Author(s):  
Balasaheb S. Dahifale ◽  
Anand S. Patil

Detailed investigation of the flow behavior inside the combustion chamber and of engine performance is a challenging problem because of constraints on experimental data collection during testing; experimental testing is nevertheless essential for establishing a correlation with CFD predictions. Hence, the baseline engine was tested at different load conditions and validated against CFD results before it was optimized for performance improvement. The objective of the CFD prediction was not only to optimize performance (fuel efficiency, power, torque, etc.) and reduce emissions, but also to assess the feasibility of a performance upgrade. In the present CFD study, the surface mesh and domain were prepared for the flame face, intake valve, intake valve seat, exhaust valve, exhaust valve seat, and liner for the closed-volume cycle between IVC and EVO using the CFD code VECTIS. Simulations for three different load conditions were then conducted using the VECTIS solver. Initially, in-cylinder pressure versus crank angle was predicted for the 100%, 75%, and 50% load conditions. Then the (P-ϴ) diagram for the different load conditions was fine-tuned by varying combustion parameters. Further, the engine performance validation was carried out for rated and part-load conditions in terms of IMEP, BMEP, brake-specific fuel consumption, and power output, while NOx mass fractions were used to convert the NOx to g/kWh for comparison of emission levels with the test data. Finally, simulation of an optimized re-entrant combustion chamber with modified valve timing and an optimized fuel injection system was carried out to achieve the target performance with reduced fuel consumption. The 3D CFD results showed a reduction in BSFC and were in close agreement with the test data.
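The brake-specific conversions mentioned above (NOx to g/kWh, and BSFC) are simple ratios of mass flow to brake power. A hedged sketch follows; the engine values used are illustrative placeholders, not data from the tests in the paper:

```python
# Sketch of the brake-specific conversions referred to in the abstract:
# a NOx mass fraction in the exhaust becomes g/kWh by multiplying by the
# exhaust mass flow and dividing by brake power; BSFC is fuel mass flow
# over brake power. All numbers below are illustrative.

def nox_g_per_kwh(nox_mass_fraction, exhaust_mass_flow_kg_h, brake_power_kw):
    """Brake-specific NOx emission.
    nox_mass_fraction       -- kg of NOx per kg of exhaust gas
    exhaust_mass_flow_kg_h  -- total exhaust mass flow [kg/h]
    brake_power_kw          -- measured brake power [kW]
    """
    nox_g_h = nox_mass_fraction * exhaust_mass_flow_kg_h * 1000.0  # kg/h -> g/h
    return nox_g_h / brake_power_kw

def bsfc_g_per_kwh(fuel_flow_kg_h, brake_power_kw):
    """Brake-specific fuel consumption = fuel mass flow / brake power."""
    return fuel_flow_kg_h * 1000.0 / brake_power_kw

# Illustrative full-load operating point.
print(round(nox_g_per_kwh(0.0012, 600.0, 110.0), 2))   # g/kWh
print(round(bsfc_g_per_kwh(24.0, 110.0), 1))           # g/kWh
```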


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Alessandra Lumini ◽  
Loris Nanni ◽  
Gianluca Maguolo

In this paper, we present a study of an automated system for monitoring underwater ecosystems. The proposed system is based on the fusion of different deep learning methods. We study how to create an ensemble of different Convolutional Neural Network (CNN) models, fine-tuned on several datasets, with the aim of exploiting their diversity. The aim of our study is to explore the possibility of fine-tuning CNNs for underwater imagery analysis, the opportunity of using different datasets for pre-training models, and the possibility of designing an ensemble using the same architecture with small variations in the training procedure. Our experiments, performed on five well-known datasets (three plankton and two coral datasets), show that the combination of such different CNN models in a heterogeneous ensemble yields a substantial performance improvement over other state-of-the-art approaches on all the tested problems. One of the main contributions of this work is a wide experimental evaluation of well-known CNN architectures, reporting the performance of both single CNNs and ensembles of CNNs on the different problems. Moreover, we show how to create an ensemble that improves on the performance of the best single model. The MATLAB source code is freely available via the link provided on the title page.
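The core fusion step of such a heterogeneous ensemble can be sketched as score averaging: each fine-tuned CNN emits per-class scores, the scores are averaged, and the argmax is the ensemble prediction. The score vectors below are hypothetical softmax outputs, not results from the paper:

```python
# Minimal sketch of the sum-rule fusion behind a heterogeneous CNN
# ensemble: average the per-class scores of several models and predict
# the class with the highest fused score. The three score vectors are
# hypothetical softmax outputs for a 3-class problem.

def ensemble_predict(score_vectors):
    """score_vectors: list of per-model class-score lists (equal length).
    Returns (predicted_class_index, fused_scores)."""
    n_classes = len(score_vectors[0])
    fused = [sum(v[c] for v in score_vectors) / len(score_vectors)
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

scores = [
    [0.6, 0.3, 0.1],   # model A prefers class 0 on its own
    [0.1, 0.8, 0.1],   # model B prefers class 1
    [0.2, 0.6, 0.2],   # model C prefers class 1
]
label, fused = ensemble_predict(scores)
print(label)
```

Averaging raw class scores (rather than hard votes) lets a confident model outvote two uncertain ones, which is one reason diverse fine-tuning procedures combine well.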


Author(s):  
SANTI P. MAITY ◽  
MALAY K. KUNDU

This paper investigates the scope of wavelets for performance improvement in spread spectrum image watermarking. The performance of a digital image watermarking algorithm is, in general, determined by the visual invisibility of the hidden data (imperceptibility), the reliability of detecting the hidden information after various common and deliberate signal processing operations are applied to the watermarked signal (robustness), and the amount of data that can be hidden (payload) without affecting the imperceptibility and robustness properties. In this paper, we propose several spread spectrum (SS) image watermarking schemes using the discrete wavelet transform (DWT), the biorthogonal DWT, and M-band wavelets, coupled with various modulation, multiplexing, and signaling techniques. The performance of the watermarking methods is reported along with their relative merits and demerits.
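The additive SS rule that schemes of this kind build on can be sketched in a few lines: a watermark bit b in {-1, +1} is spread over host coefficients c by a pseudo-noise sequence p, giving y = c + alpha*b*p, and is recovered by correlating y with p. This is a generic sketch, not the paper's specific schemes; the DWT step is omitted, and the list `host` merely stands in for a subband of wavelet coefficients:

```python
# Generic additive spread-spectrum embedding and correlation detection.
# `host` is a toy stand-in for DWT subband coefficients; alpha controls
# the imperceptibility/robustness trade-off.

import random

def embed(host, bit, alpha, seed=7):
    """Spread one bit (+1/-1) over all host coefficients."""
    rng = random.Random(seed)
    pn = [rng.choice((-1.0, 1.0)) for _ in host]        # PN spreading sequence
    marked = [c + alpha * bit * p for c, p in zip(host, pn)]
    return marked, pn

def detect(marked, pn):
    """Correlate with the PN sequence; the sign estimates the bit,
    since the host term averages toward zero while alpha*bit survives."""
    corr = sum(y * p for y, p in zip(marked, pn)) / len(marked)
    return 1 if corr >= 0 else -1

host = [12.0, -3.5, 7.1, 0.4, -8.2, 5.5, 2.0, -1.1] * 32  # toy coefficients
marked, pn = embed(host, bit=-1, alpha=2.0)
print(detect(marked, pn))
```

Because p has unit-magnitude entries, the correlator returns roughly alpha*bit plus a host-interference term that shrinks as the spreading length grows, which is why payload trades off against robustness.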


2019 ◽  
Vol 9 (12) ◽  
pp. 2413
Author(s):  
Chang-Uk Baek ◽  
Ji-Won Jung

Faster-than-Nyquist (FTN) signal processing, which transmits signals faster than the Nyquist rate, is a representative method for improving throughput efficiency at the cost of performance degradation due to inter-symbol interference. To overcome this problem, this paper proposes FTN signal processing based on unequal error probability to improve performance. The unequal-error-probability method divides the encoded bits into groups according to priority, and a different FTN interference ratio is applied to each group. A lower FTN interference ratio is allocated to the group containing high-priority encoded bits and a higher ratio to the group containing low-priority encoded bits, so that a performance improvement is obtained over the conventional FTN method with a uniform interference ratio. In addition, we applied the proposed FTN signal processing based on the unequal-error-probability method to an orthogonal frequency division multiplexing (OFDM) system in multipath channel environments. In the simulations, the performance of the proposed method was better than that of the conventional FTN method by about 0.2 dB to 0.3 dB at interference ratios of 20%, 30%, and 40%. In addition, in multipath channels, we confirmed that applying the proposed unequal error probability to the OFDM-FTN method improves performance beyond that of the conventional OFDM-FTN method.
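The allocation step the abstract describes, splitting a codeword into priority groups and assigning a milder interference ratio to the high-priority group, can be sketched as a small scheduling function. Group sizes and ratios below are illustrative, not the paper's configuration:

```python
# Sketch of the unequal-error-probability allocation: coded bits are
# split into priority-ordered groups, each transmitted with its own FTN
# interference (time-packing) ratio -- lower ratio, milder ISI, for the
# high-priority group. Fractions and ratios are illustrative.

def assign_ftn_ratios(coded_bits, groups):
    """groups: list of (fraction_of_bits, interference_ratio), ordered
    from highest to lowest priority. Returns [(bit_slice, ratio), ...]."""
    out, start = [], 0
    for frac, ratio in groups:
        end = start + int(round(frac * len(coded_bits)))
        out.append((coded_bits[start:end], ratio))
        start = end
    return out

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]          # toy codeword
plan = assign_ftn_ratios(bits, [(0.5, 0.20),   # high priority, mild ISI
                                (0.5, 0.40)])  # low priority, strong ISI
print([(len(b), r) for b, r in plan])
```

Keeping the average of the per-group ratios equal to the uniform ratio of the conventional scheme is what makes the comparison in the abstract fair: throughput is unchanged while the most error-sensitive bits see less interference.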


Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2706
Author(s):  
Incheon Paik ◽  
Jun-Wei Wang

Code generation, a very active application area of deep learning models for text, consists of two different fields: code-to-code and text-to-code generation. A recent approach, GraphCodeBERT, uses a code graph called data flow and showed a good performance improvement. Its base architecture is bidirectional encoder representations from transformers (BERT), which uses the encoder part of the transformer. On the other hand, the generative pre-trained transformer (GPT), another transformer-based architecture, uses the decoder part and also shows strong performance. In this study, we investigate the improvement offered by several variants of code graphs on GPT-2, referring to the abstract syntax tree to collect the features of variables in the code. Here, we mainly focus on GPT-2 with additional code-graph features that allow the model to learn the effect of the data flow. The experimental phase is divided into two parts: fine-tuning the existing GPT-2 model, and pre-training from scratch using code data. When we pre-train a new model from scratch using the code graph with enough data, the model outperforms the fine-tuned one.
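To make the "data flow" code graph concrete: it records, for each variable read, which earlier assignment the value may come from. A minimal stand-alone sketch using Python's own ast module follows; this is an illustration of the graph's shape, not GraphCodeBERT's actual extraction pipeline, and it only handles flat blocks of simple assignments:

```python
# Toy data-flow extraction: def-use edges between variables, built from
# the abstract syntax tree. Each edge (var, def_line, use_line) says the
# value read at use_line may come from the assignment at def_line.

import ast

def def_use_edges(source):
    """Return (variable, def_line, use_line) edges for a flat block
    of simple assignments."""
    last_def, edges = {}, []
    for stmt in ast.parse(source).body:
        if not isinstance(stmt, ast.Assign):
            continue
        for node in ast.walk(stmt.value):          # right-hand-side reads
            if isinstance(node, ast.Name) and node.id in last_def:
                edges.append((node.id, last_def[node.id], stmt.lineno))
        for tgt in stmt.targets:                   # left-hand-side writes
            if isinstance(tgt, ast.Name):
                last_def[tgt.id] = stmt.lineno
    return edges

code = "a = 1\nb = a + 2\nc = a + b\n"
print(def_use_edges(code))   # [('a', 1, 2), ('a', 1, 3), ('b', 2, 3)]
```

Feeding edges like these to the model alongside the token sequence is what lets it learn where values flow, rather than only how tokens co-occur.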


2013 ◽  
Vol 11 ◽  
pp. 95-100
Author(s):  
S. Kiefhaber ◽  
M. Rosenbaum ◽  
W. Sauer-Greff ◽  
R. Urbansky

Abstract. In this contribution, a coherent relation between the algebraic and the transform-based reconstruction techniques for computed tomography is introduced using the mathematical means of two-dimensional signal processing. Two advantages arise from this approach. First, the algebraic reconstruction technique can now be used efficiently with respect to memory usage, without special considerations for handling large sparse matrices. Second, the relation grants a more intuitive understanding of the convergence characteristics of the iterative method. Besides the gain in theoretical insight, these advantages offer new possibilities for application-specific fine-tuning of reconstruction techniques.
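For readers unfamiliar with the algebraic side of this comparison, the classic ART/Kaczmarz update cyclically projects the current image estimate onto the hyperplane of each ray equation a_i . x = p_i. The 2x2 "image" and its row/column ray sums below are a toy example, not data from this contribution:

```python
# Minimal ART (Kaczmarz) sketch: solve the consistent ray-sum system
# A x = p by cyclic row projections. The relaxation factor `relax`
# corresponds to the tuning knob the abstract alludes to.

def kaczmarz(rows, p, x, sweeps=500, relax=1.0):
    """One projection per ray equation, repeated for `sweeps` cycles."""
    for _ in range(sweeps):
        for a, pi in zip(rows, p):
            residual = pi - sum(ai * xi for ai, xi in zip(a, x))
            norm2 = sum(ai * ai for ai in a)
            x = [xi + relax * residual / norm2 * ai
                 for ai, xi in zip(a, x)]
    return x

# Rays over the flattened 2x2 image [x11, x12, x21, x22]:
rows = [
    [1.0, 1.0, 0.0, 0.0],   # sum of row 1
    [0.0, 0.0, 1.0, 1.0],   # sum of row 2
    [1.0, 0.0, 1.0, 0.0],   # sum of column 1
    [0.0, 1.0, 0.0, 1.0],   # sum of column 2
]
p = [3.0, 7.0, 4.0, 6.0]    # projections of the image [[1, 2], [3, 4]]
print([round(v, 3) for v in kaczmarz(rows, p, [0.0] * 4)])
```

Starting from zero, the iterate stays in the row space of the system and converges to its minimum-norm solution; the rate of that convergence is exactly the kind of behavior the transform-domain view of the paper aims to make intuitive.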

