ATARI: A Graph Convolutional Neural Network Approach for Performance Prediction in Next-Generation WLANs

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4321
Author(s):  
Paola Soto ◽  
Miguel Camelo ◽  
Kevin Mets ◽  
Francesc Wilhelmi ◽  
David Góez ◽  
...  

IEEE 802.11 (Wi-Fi) is one of the technologies that provides high performance with a high density of connected devices to support emerging demanding services, such as virtual and augmented reality. However, in highly dense deployments, Wi-Fi performance is severely affected by interference. This problem is even worse in new standards, such as 802.11n/ac, where features such as Channel Bonding (CB) are introduced to increase network capacity, but at the cost of using wider spectrum channels. Finding the best channel assignment in dense deployments under dynamic environments with CB is challenging, given its combinatorial nature. Therefore, the use of analytical or system models to predict Wi-Fi performance after potential changes (e.g., dynamic channel selection with CB, or the deployment of new devices) is not suitable, due to either low accuracy or high computational cost. This paper presents a novel, data-driven approach to speed up this process, using a Graph Neural Network (GNN) model that exploits the information carried in the deployment's topology and the intricate wireless interactions to predict Wi-Fi performance with high accuracy. The evaluation results show that preserving the graph structure in the learning process yields a 64% increase versus a naive approach, and around 55% compared to other Machine Learning (ML) approaches, when using all training features.
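
To make the graph-based formulation concrete, here is a minimal message-passing sketch of the kind of model described: node features are propagated over a deployment/interference graph and a per-node performance value is regressed. The layer structure, feature choices, and the dense random adjacency are illustrative assumptions, not the paper's ATARI architecture.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One message-passing step: aggregate neighbor features through the
    adjacency matrix, then apply a learned linear map and a nonlinearity."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, h):
        # adj: (N, N) interference/contention graph, h: (N, in_dim)
        return torch.relu(self.linear(adj @ h))

class ThroughputGNN(nn.Module):
    """Two graph convolutions followed by a per-node regression head
    predicting a performance value (e.g., throughput) for each node."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.gc1 = GraphConvLayer(in_dim, hidden)
        self.gc2 = GraphConvLayer(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, adj, h):
        h = self.gc2(adj, self.gc1(adj, h))
        return self.head(h).squeeze(-1)

# Toy example: 4 nodes with 3 features each (e.g., channel, bandwidth, RSSI)
adj = torch.eye(4) + torch.rand(4, 4)   # hypothetical dense interference graph
feats = torch.rand(4, 3)
model = ThroughputGNN(in_dim=3)
print(model(adj, feats))                # predicted per-node performance
```

Keeping the adjacency matrix in the forward pass is what "preserving the graph structure" amounts to here: a naive baseline would flatten the same features and discard who interferes with whom.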

2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Qiang Lan ◽  
Zelong Wang ◽  
Mei Wen ◽  
Chunyuan Zhang ◽  
Yijie Wang

Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT-based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus speed up the computation without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks, apply it to a popular 3D convolutional neural network used to classify videos, and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
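
For intuition on where the savings come from, the 1D base case F(2,3) of the Winograd minimal filtering algorithm computes two outputs of a 3-tap filter with 4 multiplications instead of the naive 6; the 2D and 3D variants nest this construction. The sketch below verifies F(2,3) numerically against direct correlation, using the standard transform matrices.

```python
import numpy as np

# Winograd minimal filtering F(2,3): 2 outputs of a 3-tap filter
# using 4 elementwise multiplications instead of the naive 6.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)   # input transform
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])               # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # output transform

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile (4 samples)
g = np.array([0.5, 1.0, -0.5])       # filter (3 taps)

winograd = AT @ ((G @ g) * (BT @ d))            # the 4 multiplies
naive = np.convolve(d, g[::-1], mode='valid')   # direct correlation
print(winograd, naive)                          # both: [1. 2.]
```

The transforms themselves cost only additions and shifts, which is why the method, unlike FFT-based convolution, trades multiplications away without inflating memory.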


2018 ◽  
Vol 15 (2) ◽  
pp. 294-301
Author(s):  
Reddy Sreenivasulu ◽  
Chalamalasetti SrinivasaRao

Drilling is a hole-making process performed on machine components during assembly work and is found everywhere. In precision applications, quality and accuracy play a major role. Industries today suffer from the cost incurred during deburring, especially in precision assemblies such as aerospace/aircraft body structures, marine works, and automobile manufacturing. Burrs produced during drilling cause dimensional errors, jamming of parts, and misalignment, so a deburring operation after drilling is often required, and reducing burr size is an important goal. In this study, experiments were conducted with input parameters chosen from previous research. The effect of altering the drill geometry on the thrust force and burr size of drilled holes was investigated using the Taguchi design of experiments, and an optimum combination of the most significant input parameters for minimizing burr size was identified from ANOVA using Design-Expert software. Drill thrust has the strongest influence on burr size, and the clearance angle of the drill bit drives the variation in thrust; burr height is the response observed in this study. These results were compared with predictions from the neural network software EasyNN-plus. It is concluded that increasing the number of nodes increases the computational cost while decreasing the neural network's error. Good agreement was shown between the predictive model results and the experimental responses.
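
The node-count trade-off noted at the end can be reproduced with any small regression network. The sketch below fits MLPs of increasing hidden-layer size to synthetic drilling data; the feature names, ranges, and response function are hypothetical stand-ins, not the study's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical data: [spindle speed, feed rate, clearance angle] -> burr height
X = rng.uniform([500, 0.05, 4], [3000, 0.30, 12], size=(60, 3))
y = 0.2 + 1e-4 * X[:, 0] + 1.5 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.02, 60)

# Larger hidden layers reduce training error at a higher computational cost.
for nodes in (2, 8, 32):
    model = MLPRegressor(hidden_layer_sizes=(nodes,), max_iter=5000, random_state=0)
    model.fit(X, y)
    print(nodes, "nodes -> training MSE:", np.mean((model.predict(X) - y) ** 2))
```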


Geophysics ◽  
2021 ◽  
pp. 1-77
Author(s):  
Hanchen Wang ◽  
Tariq Alkhalifah

The ample size of time-lapse data often requires significant event detection and source location efforts, especially in areas like shale gas exploration regions where a large number of micro-seismic events are often recorded. In many cases, the real-time monitoring and locating of these events are essential to production decisions. Conventional methods face considerable drawbacks. For example, traveltime-based methods require traveltime picking of often noisy data, while migration and waveform inversion methods require expensive wavefield solutions and event detection. Both tasks require some human intervention, and this becomes a serious problem when many sources need to be located, which is common in micro-seismic monitoring. Machine learning has recently been used to identify micro-seismic events or locate their sources once they are identified and picked. We propose to use a novel artificial neural network framework to directly map seismic data, without any event picking or detection, to their potential source locations. We train two convolutional neural networks on labeled synthetic acoustic data containing simulated micro-seismic events to fulfill such requirements. One convolutional neural network, which has a global average pooling layer to reduce the computational cost while maintaining high performance levels, aims to classify the number of events in the data. The other network predicts the source locations and other source features such as the source peak frequencies and amplitudes. To reduce the size of the input data to the network, we correlate the recorded traces with a central reference trace to allow the network to focus on the curvature of the input data near the zero-lag region. We train the networks to handle single-event, multi-event, and no-event segments extracted from the data. Tests on a simple vertically varying model and a more realistic Otway field model demonstrate the approach's versatility and potential.
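
The correlation preprocessing described above is straightforward to sketch: every trace is cross-correlated with a central reference trace and only lags near zero are kept, so the network sees the moveout curvature rather than absolute arrival times. The window size and reference choice below are assumptions for illustration.

```python
import numpy as np

def correlate_with_reference(data, ref_index=None):
    """Cross-correlate each trace with a central reference trace and keep
    a window around zero lag. data: (n_traces, n_samples)."""
    n_traces, n_samples = data.shape
    if ref_index is None:
        ref_index = n_traces // 2          # central receiver as reference
    ref = data[ref_index]
    max_lag = n_samples // 4               # assumed window around zero lag
    out = np.empty((n_traces, 2 * max_lag + 1))
    for i in range(n_traces):
        full = np.correlate(data[i], ref, mode='full')  # lags -(N-1)..(N-1)
        center = n_samples - 1                          # index of zero lag
        out[i] = full[center - max_lag: center + max_lag + 1]
    return out

# Toy example: 16 traces, 256 samples of random "seismic" data
section = np.random.randn(16, 256)
print(correlate_with_reference(section).shape)  # (16, 129): reduced input
```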


2020 ◽  
Author(s):  
Sebastian Jensen ◽  
Eric Hillebrand ◽  
Mikkel Bennedsen

Exploiting a national-level panel of per capita CO2 emissions and GDP data, we investigate the GDP-CO2 relationship using a data-driven approach. We conduct an in-sample analysis in which we investigate the shape of the GDP-CO2 relationship. Using the learned shape of the GDP-CO2 relationship, we project CO2 emissions through 2100, using the same set of GDP and population growth scenarios as used by the Intergovernmental Panel on Climate Change (IPCC) for their sixth assessment report due for release in 2021-22. Our analysis is carried out at two levels: at the global level, and at the level of five large regions of the world. We consider a semiparametric model specification which places no restrictions on the functional relationship between GDP and CO2, but which allows for country- and time-specific fixed effects. The nonparametric component of our model is specified as a feedforward neural network, theoretically ensuring universal approximation capabilities. In a simulation study, we show that our model is able to capture various complex relationships in finite samples of realistic sizes.
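
A semiparametric specification of this kind can be sketched as additive fixed effects plus a neural-network function of GDP. The sketch below is one such formulation under assumed dimensions and variable names, not the paper's exact model.

```python
import torch
import torch.nn as nn

class SemiparametricPanel(nn.Module):
    """Illustrative sketch: log per-capita CO2 modeled as country fixed
    effect + time fixed effect + a neural-network function of log GDP."""
    def __init__(self, n_countries, n_years, hidden=16):
        super().__init__()
        self.country_fe = nn.Embedding(n_countries, 1)  # country fixed effects
        self.time_fe = nn.Embedding(n_years, 1)         # time fixed effects
        self.g = nn.Sequential(                         # nonparametric GDP-CO2 link
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, country, year, log_gdp):
        return (self.country_fe(country) + self.time_fe(year)
                + self.g(log_gdp.unsqueeze(-1))).squeeze(-1)

model = SemiparametricPanel(n_countries=150, n_years=60)
c = torch.randint(0, 150, (8,))
t = torch.randint(0, 60, (8,))
print(model(c, t, torch.randn(8)))   # predicted log per-capita emissions
```

Because the network g is unrestricted, the GDP-CO2 shape (monotone, inverted-U, or otherwise) is learned from the panel rather than imposed, which is the point of the semiparametric design.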


2017 ◽  
Vol 14 (2) ◽  
Author(s):  
Maximilian Miller ◽  
Chengsheng Zhu ◽  
Yana Bromberg

Abstract With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment.
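
The core idea of balancing parallel submissions across heterogeneous resources can be illustrated generically. The sketch below is a least-loaded dispatch loop and is not clubber's actual API or scheduling policy; the cluster names are hypothetical.

```python
import heapq

def balance_submissions(jobs, clusters):
    """Illustrative least-loaded dispatch: assign each job to the cluster
    with the fewest queued jobs, mimicking automated balancing across
    heterogeneous HPC and cloud resources."""
    heap = [(0, name) for name in clusters]   # (queued_jobs, cluster_name)
    heapq.heapify(heap)
    assignment = {}
    for job in jobs:
        load, name = heapq.heappop(heap)      # currently least-loaded cluster
        assignment[job] = name
        heapq.heappush(heap, (load + 1, name))
    return assignment

jobs = [f"metagenome_{i:02d}" for i in range(21)]
print(balance_submissions(jobs, ["local_sge", "cloud_a", "cloud_b"]))
```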


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Cheol Woo Park ◽  
Mordechai Kornbluth ◽  
Jonathan Vandermause ◽  
Chris Wolverton ◽  
Boris Kozinsky ◽  
...  

Abstract Recently, machine learning (ML) has been used to address the computational cost that has been limiting ab initio molecular dynamics (AIMD). Here, we present GNNFF, a graph neural network framework to directly predict atomic forces from automatically extracted features of the local atomic environment that are translationally invariant, but rotationally covariant to the coordinates of the atoms. We demonstrate that GNNFF not only achieves high performance in terms of force prediction accuracy and computational speed on various materials systems, but also accurately predicts the forces of a large MD system after being trained on forces obtained from a smaller system. Finally, we use our framework to perform an MD simulation of Li7P3S11, a superionic conductor, and show that the resulting Li diffusion coefficient is within 14% of that obtained directly from AIMD. The high performance exhibited by GNNFF can be easily generalized to study atomistic-level dynamics of other material systems.
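
The diffusion-coefficient comparison at the end rests on a standard post-processing step: estimating D from the mean squared displacement via the Einstein relation, MSD(t) ≈ 6Dt in 3D. The sketch below shows that calculation on a toy trajectory; the fitting window and frame spacing are assumptions.

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Estimate the self-diffusion coefficient from an MD trajectory via
    the Einstein relation MSD(t) ~ 6 D t (3D). positions: unwrapped
    coordinates of shape (n_steps, n_atoms, 3); dt: time between frames."""
    n_steps = positions.shape[0]
    disp = positions - positions[0]              # displacement from t = 0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)   # average over atoms
    t = np.arange(n_steps) * dt
    # Fit the late-time, linear part of the MSD curve (assumed: last half).
    slope = np.polyfit(t[n_steps // 2:], msd[n_steps // 2:], 1)[0]
    return slope / 6.0

# Toy random-walk trajectory standing in for Li positions from an MD run
traj = np.cumsum(np.random.randn(1000, 64, 3) * 0.01, axis=0)
print(diffusion_coefficient(traj, dt=1.0))
```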


Author(s):  
Vladimir Keremet ◽  
Yakov Karandashev ◽  
Aleksey Kuzovkov ◽  
Georgy Teplov

The paper discusses the issue of the applicability of neural networks to the problems of designing microelectronics. The integration of neural network modules into the elements of specialized EDA systems can significantly speed up the modeling processes at different stages of design. The application of a multilayer convolutional architecture of a neural network of the UNET type to the problem of direct and inverse computational photolithography is considered. Using this neural network approach, we were able to speed up the computation of a photomask for a 90 nm process technology by two orders of magnitude and achieve simulation accuracy that surpasses standard inverse lithography technology (ILT) methods.
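
For reference, a UNET-type network is an encoder-decoder with skip connections that maps an image to an image, which fits the mask-to-pattern (and inverse) formulation of computational lithography. Below is a deliberately tiny sketch with one downsampling level and one skip connection; the channel counts and input sizes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """Minimal UNET-style encoder-decoder with one skip connection,
    mapping a target layout image to a (proxy) mask image."""
    def __init__(self):
        super().__init__()
        self.enc = block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e = self.enc(x)                                  # full-resolution features
        m = self.mid(self.down(e))                       # coarse features
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return torch.sigmoid(self.out(d))                # mask pixel probabilities

layout = torch.rand(1, 1, 64, 64)   # hypothetical target pattern
print(TinyUNet()(layout).shape)     # (1, 1, 64, 64)
```

Once trained, a single forward pass replaces an iterative ILT optimization, which is where the two-orders-of-magnitude speedup reported above comes from.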


2021 ◽  
Author(s):  
Vishwas Verma ◽  
Kiran Manoharan ◽  
Jaydeep Basani

Abstract Numerical simulation of gas turbine combustors requires resolving a broad spectrum of length and time scales for accurate flow field and emission predictions. The Reynolds-Averaged Navier-Stokes (RANS) approach can generate solutions in a few hours; however, it fails to produce accurate predictions for the turbulent reacting flow fields seen in general combustors. On the other hand, the Large Eddy Simulation (LES) approach can overcome this challenge, but it requires orders of magnitude higher computational cost. This limits designers to use the LES approach in combustor development cycles and prohibits them from using the same in numerical optimization. The current work tries to build an alternate approach using a data-driven method to generate fast and consistent results. In this work, a deep learning (DL) dense neural network framework is used to improve the RANS solution accuracy using LES data as truth data. A supervised regression learning multilayer perceptron (MLP) neural network engine is developed. The machine learning (ML) engine developed in the present study can compute data with LES accuracy in 95% less computational time than performing LES simulations. The output of the ML engine shows good agreement with the trend of LES, which is entirely different from RANS, and to a reasonable extent, captures magnitudes of actual flow variables. However, it is recommended that the ML engine be trained using a broad design space and physical laws along with a purely data-driven approach for better generalization.
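
A supervised MLP regression engine of the kind described can be sketched compactly: RANS-resolved quantities at a point are the inputs, matched LES values are the targets. The feature choices, dimensions, and training hyperparameters below are assumptions for illustration, not the study's setup.

```python
import torch
import torch.nn as nn

# Illustrative MLP mapping RANS point features to an LES-accuracy target.
mlp = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),   # inputs: e.g. RANS velocity, k, epsilon, T
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),              # output: LES-accuracy flow variable
)

rans_features = torch.randn(512, 4)   # stand-in for RANS point data
les_targets = torch.randn(512, 1)     # stand-in for matched LES "truth" data
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):              # supervised regression training loop
    opt.zero_grad()
    loss = loss_fn(mlp(rans_features), les_targets)
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```

The closing caveat in the abstract maps directly onto this sketch: trained only on sampled (RANS, LES) pairs, the engine interpolates well inside the training design space but has no physics to fall back on outside it.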


SPE Journal ◽  
2013 ◽  
Vol 18 (02) ◽  
pp. 378-388 ◽  
Author(s):  
Kjetil B. Haugen ◽  
Bret L. Beckner

Summary Phase-equilibrium calculations can be a time-consuming part of process simulators and compositional reservoir simulations. Various authors have presented encouraging speed improvements based on reduced methods that can lower the computational cost by reducing the number of independent variables and thus generating a smaller system of equations to solve. This paper presents a careful comparison of conventional and reduced algorithms, showing that they can be expressed as linear transformations of each other. Consequently, the two sets of algorithms exhibit identical convergence behavior, and the performance gain of the reduced methods is entirely caused by reducing the cost of linear-algebra operations. Performance benchmarks show much smaller speed-up numbers than seen in previously published material. Highly optimized linear-algebra operations significantly limit the opportunity for further speed improvement from reduced methods. Only a marginal speed-up potential is observed for mixtures with 15 components or fewer. This suggests that reduced methods may be less attractive for reservoir simulation than previously thought.
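
The central claim, that algorithms related by a linear change of variables converge identically, can be checked numerically: if G(y) = F(My) for an invertible M, Newton iterates satisfy x_k = M y_k at every step, so the residual histories coincide. The sketch below demonstrates this on a toy system standing in for phase-equilibrium residuals.

```python
import numpy as np

def newton(f, jac, x0, steps=8):
    """Plain Newton iteration, recording the residual norm each step."""
    x = x0.copy()
    history = []
    for _ in range(steps):
        x = x - np.linalg.solve(jac(x), f(x))
        history.append(np.linalg.norm(f(x)))
    return history

# Original system F(x) = 0 (toy stand-in for phase-equilibrium residuals)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

# "Reduced" variables y with x = M y for an invertible M: G(y) = F(M y)
M = np.array([[2.0, 1.0], [0.0, 1.0]])
G = lambda y: F(M @ y)
JG = lambda y: J(M @ y) @ M                  # chain rule

x0 = np.array([1.0, 1.0])
y0 = np.linalg.solve(M, x0)                  # same starting point
print(newton(F, J, x0))                      # identical residual histories:
print(newton(G, JG, y0))                     # the transform changes cost, not convergence
```

Any speed-up must therefore come from the smaller linear systems solved per iteration, not from fewer iterations, which is exactly the paper's argument.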

