Across-subjects classification of stimulus modality from human MEG high frequency activity

2017 ◽  
Author(s):  
Britta U. Westner ◽  
Sarang S. Dalal ◽  
Simon Hanslmayr ◽  
Tobias Staudigl

Abstract

Single-trial analyses have the potential to uncover meaningful brain dynamics that are obscured when averaging across trials. However, a low signal-to-noise ratio (SNR) can impede the use of single-trial analyses and decoding methods. In this study, we investigate the applicability of a single-trial approach to decode stimulus modality from magnetoencephalography (MEG) high frequency activity. In order to classify the auditory versus visual presentation of words, we combine beamformer source reconstruction with the random forest classification method. To enable group-level inference, the classification is embedded in an across-subjects framework.

We show that single-trial gamma SNR allows for good classification performance (accuracy across subjects: 66.44%). This implies that the characteristics of high frequency activity are highly consistent across trials and subjects. The random forest classifier assigned informational value to activity in both auditory and visual cortex with high spatial specificity. Across time, gamma power was most informative during stimulus presentation. Among all frequency bands, the 75-95 Hz band was the most informative in visual as well as auditory areas. Especially in visual areas, a broad range of gamma frequencies (55-125 Hz) contributed to the successful classification.

Thus, we demonstrate the feasibility of single-trial approaches for decoding stimulus modality across subjects from high frequency activity, and we describe the discriminative gamma activity in time, frequency, and space.

Author Summary

Averaging brain activity across trials is a powerful way to increase the signal-to-noise ratio in MEG data. This approach, however, potentially obscures meaningful brain dynamics that unfold on the single-trial level. Single-trial analyses have been successfully applied to time domain or low frequency oscillatory activity; their application to MEG high frequency activity is hindered by the low amplitude of these signals. In the present study, we show that stimulus modality (visual versus auditory presentation of words) can successfully be decoded from single-trial MEG high frequency activity by combining source reconstruction with a random forest classification algorithm. This approach reveals patterns of activity above 75 Hz in both visual and auditory cortex, highlighting the importance of high frequency activity for the processing of domain-specific stimuli. Our results thereby extend prior findings by revealing high-frequency activity in auditory cortex related to auditory word stimuli in MEG data. The across-subjects framework furthermore suggests a high inter-individual consistency in high frequency activity patterns.
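The across-subjects framework described above can be illustrated with a leave-one-subject-out scheme: a random forest is trained on all subjects but one and tested on the held-out subject. The sketch below is not the authors' pipeline; the data are synthetic stand-ins (random "gamma power" features with a subject-specific offset and a few class-informative dimensions), and all sizes and parameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_subjects, trials_per_class, n_features = 8, 20, 20

X, y, groups = [], [], []
for s in range(n_subjects):
    baseline = rng.normal(0.0, 0.5, n_features)   # subject-specific offset
    for label in (0, 1):                          # 0 = auditory, 1 = visual
        feats = rng.normal(0.0, 1.0, (trials_per_class, n_features)) + baseline
        feats[:, :5] += 1.5 * label               # informative "gamma power" features
        X.append(feats)
        y += [label] * trials_per_class
        groups += [s] * trials_per_class
X, y, groups = np.vstack(X), np.array(y), np.array(groups)

# Across-subjects framework: leave one subject out, train on the rest.
accuracies = []
for s in range(n_subjects):
    train, test = groups != s, groups == s
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train], y[train])
    accuracies.append(clf.score(X[test], y[test]))
mean_accuracy = float(np.mean(accuracies))
```

Because each test subject is entirely unseen during training, above-chance accuracy here reflects patterns that generalize across subjects, which is the point of the across-subjects design.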

2016 ◽  
Vol 146 ◽  
pp. 370-385 ◽  
Author(s):  
Adam Hedberg-Buenz ◽  
Mark A. Christopher ◽  
Carly J. Lewis ◽  
Kimberly A. Fernandes ◽  
Laura M. Dutca ◽  
...  

Geophysics ◽  
2021 ◽  
pp. 1-54
Author(s):  
Milad Bader ◽  
Robert G. Clapp ◽  
Biondo Biondi

Low-frequency data below 5 Hz are essential to the convergence of full-waveform inversion towards a useful solution. They help build the low-wavenumber components of the velocity model and reduce the risk of cycle-skipping. In marine environments, low-frequency data are characterized by a low signal-to-noise ratio and can lead to erroneous models when inverted, especially if the noise contains coherent components. Field data are often high-pass filtered before any processing step, sacrificing weak but essential signal for full-waveform inversion. We propose to denoise the low-frequency data using prediction-error filters that we estimate from a high-frequency component with a high signal-to-noise ratio. The constructed filter captures the multi-dimensional spectrum of the high-frequency signal. We expand the filter's axes in the time-space domain to compress its spectrum towards low frequencies and wavenumbers. The expanded filter becomes a predictor of the target low-frequency signal, and we incorporate it in a minimization scheme to attenuate noise. To account for data non-stationarity while retaining the simplicity of stationary filters, we divide the data into non-overlapping patches and linearly interpolate stationary filters at each data sample. We apply our method to synthetic stationary and non-stationary data and show that it improves full-waveform inversion results initialized at 2.5 Hz using the Marmousi model. We also demonstrate that the denoising attenuates non-stationary shear energy recorded by the vertical component of ocean-bottom nodes.
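Two of the operations described above can be sketched in one dimension: estimating a prediction-error filter (PEF) from a high-SNR signal, and stretching the filter's time axis (interleaving zeros between taps) so its spectral notches move toward lower frequencies. This is a minimal illustration, not the paper's multi-dimensional implementation; the least-squares formulation and function names are assumptions for the sketch.

```python
import numpy as np

def estimate_pef(data, nfilt):
    # Fit prediction coefficients c so that d[t] ~ sum_k c[k] * d[t-1-k],
    # then return the prediction-error filter [1, -c0, -c1, ...].
    rows = len(data) - nfilt
    A = np.column_stack([data[nfilt - 1 - k : nfilt - 1 - k + rows]
                         for k in range(nfilt)])
    b = data[nfilt : nfilt + rows]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([1.0], -coeffs))

def expand_filter(pef, factor):
    # Interleave zeros between taps: stretching the time axis by `factor`
    # compresses the filter's spectral notches toward low frequencies.
    out = np.zeros((len(pef) - 1) * factor + 1)
    out[::factor] = pef
    return out

t = np.arange(400)
high = np.sin(2 * np.pi * 0.2 * t)          # high-frequency "training" signal
low = np.sin(2 * np.pi * 0.1 * t)           # target signal at half the frequency

pef = estimate_pef(high, 2)                 # order-2 PEF annihilates a sinusoid
expanded = expand_filter(pef, 2)            # now predicts the low-frequency sine
resid_high = np.std(np.convolve(high, pef, mode="valid"))
resid_low = np.std(np.convolve(low, expanded, mode="valid"))
```

Both residuals are near zero: the PEF learned from the high-frequency sinusoid annihilates it, and the zero-interleaved filter annihilates the sinusoid at half the frequency, which is the mechanism that turns a high-frequency filter into a low-frequency predictor.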


Author(s):  
Ayesha Behzad ◽  
Muneeb Aamir ◽  
Syed Ahmed Raza ◽  
Ansab Qaiser ◽  
Syeda Yuman Fatima ◽  
...  

Wheat is a staple food that is widely grown, consumed, and in high demand. It is used in multiple food products that serve as fundamental constituents of the human diet, and various regional economies are partially or fully dependent on wheat production. Estimating the area under wheat is therefore essential to predict its contribution to the regional economy. This study presents a comparative analysis of optical and active (radar) imagery for estimating the area under wheat cultivation. Sentinel-1 data were downloaded in Ground Range Detected (GRD) format and classified with random forest using the Sentinel Application Platform (SNAP) tools. We obtained a Sentinel-2 image for the month of March and applied supervised classification in Erdas Imagine 14. The random forest classification of Sentinel-1 shows that the total area under investigation was 1089 km2, subdivided into three classes: wheat (551 km2), built-up (450 km2), and water body (89 km2). Supervised classification of the Sentinel-2 data shows that the area under wheat was 510 km2, while the built-up area and water body covered 477 km2 and 102 km2, respectively. The integrated map of Sentinel-1 and Sentinel-2 shows 531 km2 under wheat, with 95 km2 of water body and 463 km2 of built-up area. We computed the kappa coefficient for the Sentinel-2, Sentinel-1, and integrated maps and found accuracies of 71%, 78%, and 85%, respectively. We conclude that these remote sensing classification approaches are reliable for future predictions.
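The kappa coefficient used in the accuracy assessment above compares observed agreement between the classified map and reference data against the agreement expected by chance from the class marginals. A small self-contained sketch (the 2x2 confusion matrix below is made up for illustration, not the study's data):

```python
import numpy as np

def cohens_kappa(confusion):
    # confusion[i, j]: samples of reference class i assigned to map class j.
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    # Chance agreement: product of row and column marginals, summed over classes.
    p_expected = (confusion.sum(axis=1) * confusion.sum(axis=0)).sum() / total**2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical confusion matrix (rows: reference, columns: classified).
kappa = cohens_kappa([[20, 5], [10, 15]])
print(round(kappa, 2))  # → 0.4
```

Here observed agreement is 0.7 and chance agreement is 0.5, so kappa = (0.7 - 0.5) / (1 - 0.5) = 0.4; unlike raw accuracy, kappa discounts agreement that a random labelling with the same class proportions would achieve.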


2018 ◽  
Vol 5 (2) ◽  
pp. 175-185
Author(s):  
Akhmad Syukron ◽  
Agus Subekti

Abstract

Credit scoring has become one of the main ways for a financial institution to assess credit risk, improve cash flow, reduce the possibility of risk, and make managerial decisions. One of the problems faced in credit scoring is an imbalanced distribution of the dataset. Class imbalance can be addressed with resampling methods such as oversampling, undersampling, and hybrids that combine both sampling approaches. The method proposed in this study is the application of Random Over-Under Sampling with Random Forest to improve the classification accuracy of credit scoring on the German Credit dataset. The test results show that classification without resampling yields an average accuracy of 70% across all classifiers. The Random Forest method achieves better accuracy than several other methods, with an accuracy of 0.76 (76%). Classification with the Random Over-Under Sampling + Random Forest method improves accuracy by 14.1 percentage points, to 0.901 (90.1%). The results show that applying Random Over-Under Sampling resampling to the Random Forest algorithm can effectively improve accuracy on imbalanced classification for credit scoring on the German Credit dataset.

Keywords: Imbalance Class, Credit Scoring, Random Forest, Classification, Resampling
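The hybrid resampling idea can be sketched as follows: every class is drawn to a common target size, so minority classes are oversampled with replacement while majority classes are undersampled without. This is a minimal sketch, not the authors' code; the 700/300 split mimics the good/bad ratio of the German Credit dataset, but the features are synthetic and the helper name is made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def random_over_under_sample(X, y, rng):
    # Draw every class to the mean class size: minority classes are
    # oversampled with replacement, majority classes undersampled without.
    classes, counts = np.unique(y, return_counts=True)
    target = int(counts.mean())
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=target,
                   replace=(y == c).sum() < target)
        for c in classes
    ])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
# Imbalanced stand-in for a credit dataset: 700 "good", 300 "bad" cases.
X = np.vstack([rng.normal(0.0, 1.0, (700, 10)),
               rng.normal(0.8, 1.0, (300, 10))])
y = np.array([0] * 700 + [1] * 300)

X_res, y_res = random_over_under_sample(X, y, rng)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
```

After resampling, both classes contribute 500 samples each, so the forest no longer sees a decision problem dominated by the majority class.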

