Multichannel Speech Enhancement in Vehicle Environment Based on Interchannel Attention Mechanism

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xueli Shen ◽  
Zhenxing Liang ◽  
Shiyin Li ◽  
Yanji Jiang

Speech enhancement in a vehicle environment remains a challenging task because of the complex noise. This paper presents a feature extraction method that applies an interchannel attention mechanism frame by frame to learn spatial features directly from the multichannel speech waveforms. The spatial features learned from the individual signals are provided as input to a two-stage BiLSTM network, which is trained to perform adaptive spatial filtering as time-domain filters spanning the signal channels. The two-stage BiLSTM network is capable of extracting both local and global features and achieves competitive results. In scenarios and data based on car cockpit simulations, and in contrast to other methods that extract features from multichannel data, the results show that the proposed method performs significantly better in terms of SDR, SI-SNR, PESQ, and STOI.
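As a rough illustration of the idea, the sketch below applies scaled dot-product attention across microphone channels for a single time frame to produce a spatial feature that a downstream BiLSTM could consume. All layer sizes, names, and the pooling choice are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of frame-wise interchannel attention over a
# multichannel waveform; all dimensions and names are assumptions.
import torch
import torch.nn as nn


class InterchannelAttention(nn.Module):
    """Weights each microphone channel per frame via scaled dot-product
    attention, yielding a spatial feature for a downstream BiLSTM."""

    def __init__(self, frame_len=512, d_model=64):
        super().__init__()
        self.proj = nn.Linear(frame_len, d_model)  # per-channel frame embedding
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, frames):
        # frames: (batch, channels, frame_len) -- one time frame per call
        e = self.proj(frames)                                  # (B, C, d)
        q, k, v = self.q(e), self.k(e), self.v(e)
        attn = torch.softmax(q @ k.transpose(1, 2) / d_scale(e), dim=-1)
        return (attn @ v).mean(dim=1)                          # (B, d)


def d_scale(e):
    return e.shape[-1] ** 0.5


x = torch.randn(4, 6, 512)          # batch of 4, 6 mics, 512-sample frames
feat = InterchannelAttention()(x)
print(feat.shape)                   # torch.Size([4, 64])
```

In this sketch the attention weights express how much each channel should contribute per frame, which is one plausible reading of "adaptive spatial filtering" learned directly from waveforms.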

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Peng Li ◽  
Qian Wang

To further mine the deep semantic information in microbial texts about public health emergencies, this paper proposes a multichannel microbial sentiment analysis model, MCMF-A. First, we use word2vec and fastText to generate word vectors in the feature vector embedding layer and fuse them with lexical and location feature vectors; second, we build a multichannel layer based on CNN and BiLSTM to extract local and global features of the microbial text; third, we build an attention mechanism layer to extract the important semantic features of the microbial text; finally, we merge the multichannel outputs in the fusion layer and apply a softmax function in the output layer for sentiment classification. The results show that the F1 value of the MCMF-A sentiment analysis model reaches 90.21%, which is 9.71% and 9.14% higher than the benchmark CNN and BiLSTM models, respectively. However, the constructed dataset is small, and multimodal information such as images and speech has not been considered.
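The sketch below mirrors the described layout: fused embeddings feed parallel CNN and BiLSTM channels, a token-level attention layer weighs the concatenated features, and a softmax head classifies. All dimensions and names are assumptions, not the authors' code.

```python
# Hypothetical sketch of the MCMF-A layout described above.
import torch
import torch.nn as nn


class MCMFA(nn.Module):
    def __init__(self, emb_dim=300, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(                   # local features
            nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.bilstm = nn.LSTM(emb_dim, 64, bidirectional=True,
                              batch_first=True)     # global features
        self.attn = nn.Linear(256, 1)               # token-level attention
        self.out = nn.Linear(256, n_classes)

    def forward(self, emb):
        # emb: (B, T, emb_dim) -- assumed to already be the fusion of
        # word2vec/fastText vectors with lexical and location features
        local = self.cnn(emb.transpose(1, 2)).transpose(1, 2)  # (B, T, 128)
        global_, _ = self.bilstm(emb)                          # (B, T, 128)
        h = torch.cat([local, global_], dim=-1)                # (B, T, 256)
        w = torch.softmax(self.attn(h), dim=1)                 # (B, T, 1)
        return torch.softmax(self.out((w * h).sum(dim=1)), dim=-1)


probs = MCMFA()(torch.randn(8, 100, 300))
print(probs.shape)   # torch.Size([8, 3])
```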


Author(s):  
Zengyan Hong ◽  
Xiangxiang Zeng ◽  
Leyi Wei ◽  
Xiangrong Liu

Motivation: Identification of enhancer–promoter interactions (EPIs) is of great significance to human development. However, experimental methods to identify EPIs cost too much in terms of time, manpower, and money. Therefore, more and more research efforts are focused on developing computational methods to solve this problem. Unfortunately, most existing computational methods require a variety of genomic data, which are not always available, especially for a new cell line, which limits their large-scale practical application. As an alternative, computational methods using sequences only have great genome-scale application prospects.
Results: In this article, we propose a new deep learning method, namely EPIVAN, that enables predicting long-range EPIs using only genomic sequences. To explore the key sequential characteristics, we first use pre-trained DNA vectors to encode enhancers and promoters; afterwards, we use one-dimensional convolution and a gated recurrent unit to extract local and global features; lastly, an attention mechanism is used to boost the contribution of key features, further improving the performance of EPIVAN. Benchmarking comparisons on six cell lines show that EPIVAN performs better than state-of-the-art predictors. Moreover, we build a general model, which has transfer ability and can be used to predict EPIs in various cell lines.
Availability and implementation: The source code and data are available at: https://github.com/hzy95/EPIVAN.
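A rough sketch of the described pipeline (pre-trained k-mer embedding, 1-D convolution, GRU, attention) is given below; the vocabulary size, filter settings, and sequence lengths are illustrative assumptions, and the authors' actual code is at the GitHub link above.

```python
# Hypothetical sketch of an EPIVAN-style branch for one DNA sequence.
import torch
import torch.nn as nn


class SeqBranch(nn.Module):
    """Encodes one DNA sequence (enhancer or promoter) given k-mer ids."""

    def __init__(self, vocab=4097, emb_dim=100):   # e.g. all 6-mers + padding
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)    # pre-trained k-mer vectors
        self.conv = nn.Conv1d(emb_dim, 64, kernel_size=40, stride=20)
        self.gru = nn.GRU(64, 50, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(100, 1)

    def forward(self, ids):                            # ids: (B, L)
        x = self.emb(ids).transpose(1, 2)              # (B, emb_dim, L)
        x = torch.relu(self.conv(x)).transpose(1, 2)   # (B, L', 64)
        h, _ = self.gru(x)                             # (B, L', 100)
        w = torch.softmax(self.attn(h), dim=1)         # boost key positions
        return (w * h).sum(dim=1)                      # (B, 100)


branch = SeqBranch()
enh = torch.randint(0, 4097, (2, 3000))   # enhancer k-mer ids
pro = torch.randint(0, 4097, (2, 2000))   # promoter k-mer ids
head = nn.Linear(200, 1)                  # joint interaction classifier
score = torch.sigmoid(head(torch.cat([branch(enh), branch(pro)], dim=-1)))
print(score.shape)   # torch.Size([2, 1])
```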


2017 ◽  
Author(s):  
L. Sánchez ◽  
N. Barreira ◽  
N. Sánchez ◽  
A. Mosquera ◽  
H. Pena-Verdeal ◽  
...  

2018 ◽  
Vol 63 (05) ◽  
pp. 1385-1403 ◽  
Author(s):  
Kitae Sohn ◽ 
Illoong Kwon

Trust was found to promote entrepreneurship in the US. We investigated whether this was true in a developing country, Indonesia, and failed to replicate the finding, whether trust was estimated at the individual or community level and whether ordinary least squares (OLS) or two-stage least squares (2SLS) was employed. We reconciled the difference between our results and those for the US by arguing that the weak enforcement of property rights in developing countries, and the consequent hold-up problem, make it more efficient for entrepreneurs to produce generic goods than relationship-specific goods, since producing generic goods does not depend on trust.
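To make the OLS-versus-2SLS contrast concrete, the toy example below runs both estimators on synthetic data where an unobserved confounder biases OLS; the variable names (trust, an instrument z, an entrepreneurship outcome) are placeholders, not the authors' dataset or instrument.

```python
# Illustrative two-stage least squares (2SLS) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                # instrument, correlated with trust only
u = rng.normal(size=n)                # unobserved confounder
trust = 0.8 * z + u + rng.normal(size=n)
entrep = 0.5 * trust + u + rng.normal(size=n)   # true effect = 0.5

X = np.column_stack([np.ones(n), trust])
Z = np.column_stack([np.ones(n), z])

# Stage 1: project the endogenous regressor onto the instrument.
trust_hat = Z @ np.linalg.lstsq(Z, trust, rcond=None)[0]
# Stage 2: OLS of the outcome on the fitted values.
X_hat = np.column_stack([np.ones(n), trust_hat])
beta_2sls = np.linalg.lstsq(X_hat, entrep, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, entrep, rcond=None)[0]

print(f"OLS estimate:  {beta_ols[1]:.2f}  (biased upward by the confounder)")
print(f"2SLS estimate: {beta_2sls[1]:.2f}  (close to the true 0.5)")
```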


2021 ◽  
Author(s):  
Kaibei Peng ◽  
Xiaoming Sun ◽  
Haowei Chen ◽  
Zhen He ◽  
Jianrong Wang

2020 ◽  
Vol 34 (4) ◽  
pp. 515-520
Author(s):  
Chen Zhang ◽  
Qingxu Li ◽  
Xue Cheng

The convolutional neural network (CNN) and long short-term memory (LSTM) network are adept at extracting local and global features, respectively, and both can achieve excellent classification results. However, the CNN performs poorly at extracting the global contextual information of a text, while the LSTM often overlooks the features hidden between words. For text sentiment classification, this paper combines the CNN with a bidirectional LSTM (BiLSTM) into a parallel hybrid model called CNN_BiLSTM. Firstly, the CNN was adopted to extract the local features of the text quickly. Next, the BiLSTM was employed to obtain the global text features containing contextual semantics. After that, the features extracted by the two neural networks (NNs) were fused and processed by a Softmax classifier for text sentiment classification. To verify its performance, the CNN_BiLSTM was compared in experiments with single NNs such as the CNN and LSTM, as well as other deep learning (DL) NNs. The experimental results show that the proposed parallel hybrid model outperformed the contrastive methods in F1-score and accuracy. Therefore, our model can solve text sentiment classification tasks effectively and offers greater practical value than the other NNs.
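A minimal sketch of such a parallel CNN/BiLSTM hybrid is shown below: a max-pooled convolutional branch supplies local n-gram features, the final BiLSTM hidden states supply global context, and the fused vector goes through a softmax head. Filter counts and hidden sizes are assumptions for illustration, not the paper's configuration.

```python
# Hypothetical sketch of a parallel CNN_BiLSTM hybrid for sentiment.
import torch
import torch.nn as nn


class CNNBiLSTM(nn.Module):
    def __init__(self, emb_dim=128, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, 100, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(emb_dim, 64, bidirectional=True,
                              batch_first=True)
        self.fc = nn.Linear(100 + 128, n_classes)

    def forward(self, emb):                          # emb: (B, T, emb_dim)
        # CNN branch: local n-gram features, max-pooled over time.
        c = torch.relu(self.conv(emb.transpose(1, 2))).max(dim=2).values
        # BiLSTM branch: global context from the final hidden states.
        _, (h, _) = self.bilstm(emb)                 # h: (2, B, 64)
        g = torch.cat([h[0], h[1]], dim=-1)          # (B, 128)
        fused = torch.cat([c, g], dim=-1)            # feature fusion
        return torch.softmax(self.fc(fused), dim=-1)


probs = CNNBiLSTM()(torch.randn(4, 50, 128))
print(probs.shape)   # torch.Size([4, 2])
```

Running the two branches in parallel, rather than stacking them, lets each see the raw embeddings directly, which is the design choice the abstract emphasizes.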

